id
stringlengths 10
10
| title
stringlengths 7
231
| abstract
stringlengths 3
2.43k
| authors
stringlengths 5
21.5k
| published_date
stringlengths 20
20
| link
stringlengths 33
34
| markdown
stringlengths 133
1.92M
|
---|---|---|---|---|---|---|
2307.14804 | Collective behavior from surprise minimization | Collective motion is ubiquitous in nature; groups of animals, such as fish,
birds, and ungulates appear to move as a whole, exhibiting a rich behavioral
repertoire that ranges from directed movement to milling to disordered
swarming. Typically, such macroscopic patterns arise from decentralized, local
interactions among constituent components (e.g., individual fish in a school).
Preeminent models of this process describe individuals as self-propelled
particles, subject to self-generated motion and 'social forces' such as
short-range repulsion and long-range attraction or alignment. However,
organisms are not particles; they are probabilistic decision-makers. Here, we
introduce an approach to modelling collective behavior based on active
inference. This cognitive framework casts behavior as the consequence of a
single imperative: to minimize surprise. We demonstrate that many
empirically-observed collective phenomena, including cohesion, milling and
directed motion, emerge naturally when considering behavior as driven by active
Bayesian inference -- without explicitly building behavioral rules or goals
into individual agents. Furthermore, we show that active inference can recover
and generalize the classical notion of social forces as agents attempt to
suppress prediction errors that conflict with their expectations. By exploring
the parameter space of the belief-based model, we reveal non-trivial
relationships between the individual beliefs and group properties like
polarization and the tendency to visit different collective states. We also
explore how individual beliefs about uncertainty determine collective
decision-making accuracy. Finally, we show how agents can update their
generative model over time, resulting in groups that are collectively more
sensitive to external fluctuations and encode information more robustly. | Conor Heins, Beren Millidge, Lancelot da Costa, Richard Mann, Karl Friston, Iain Couzin | 2023-07-27T12:19:09Z | http://arxiv.org/abs/2307.14804v4 | # Collective behavior from surprise minimization
###### Abstract
Collective motion is ubiquitous in nature; groups of animals, such as fish, birds, and ungulates appear to move as a whole, exhibiting a rich behavioral repertoire that ranges from directed movement to milling to disordered swarming. Typically, such macroscopic patterns arise from decentralized, local interactions among constituent components (e.g., individual fish in a school). Preeminent models of this process describe individuals as self-propelled particles, subject to self-generated motion and'social forces' such as short-range repulsion and long-range attraction or alignment. However, organisms are not particles; they are probabilistic decision-makers. Here, we introduce an approach to modelling collective behavior based on active inference. This cognitive framework casts behavior as the consequence of a single imperative: to minimize surprise. We demonstrate that many empirically-observed collective phenomena, including cohesion, milling and directed motion, emerge naturally when considering behavior as driven by active Bayesian inference -- without explicitly building behavioral rules or goals into individual agents. Furthermore, we show that active inference can recover and generalize the classical notion of social forces as agents attempt to suppress prediction errors that conflict with their expectations. By exploring the parameter space of the belief-based model, we reveal non-trivial relationships between the individual beliefs and group properties like polarization and the tendency to visit different collective states. We also explore how individual beliefs about uncertainty determine collective decision-making accuracy. Finally, we show how agents can update their generative model over time, resulting in groups that are collectively more sensitive to external fluctuations and encode information more robustly.
## Introduction
The principles underlying coordinated group behaviors in animals have inspired research in disciplines ranging from zoology to engineering to physics [1, 2, 3]. Collective motion in particular has been a popular phenomenon to study, due in part to its striking visual manifestation and ubiquity (e.g., swarming locusts, schooling fish, flocking birds and herding ungulates), and in part to the simplicity of models that can reproduce many of its qualitative features; like cohesive, directed movement [4, 5, 6, 7]. Because of this, collective motion is often cited as a canonical example of a self-organizing complex system, wherein collective properties emerge from simple interactions among distributed components.
Popular theoretical models cast collective motion as groups composed of self-propelled particles (SPPs) that influence one another via simple'social forces.' Early models like the Vicsek model [6] consider only a simple alignment interaction, where each particle aligns its direction of travel with the average heading of its neighbors. While oversimplifying the biological mechanisms in play, SPP models -- like the Vicsek model -- are useful for their amenability to formal understanding, e.g., the computation of universal quantities and relations through hydrodynamic and mean-field limits [8, 9, 10, 11].
Recent research has shifted towards more biologically-motivated, agent-based approaches that aim to model the specific behavioral circuits and decision-rules that govern individual behaviors [12, 13, 14]. While these models are less analytically-tractable than SPP models, they are more appealing to domain specialists like biologists, as they can generate predictions about sensory features in an individual's environment that are necessary and sufficient for evoking behavior. Furthermore, these predictions can be tested experimentally [14, 15]. This data-driven approach can thus provide mechanistic insights into the biological and cognitive origins of decision-making [13, 16].
In this work, we propose a model class that blends the first-principles, theoretical approaches of physical models with biological-plausibility, resulting in an ecologically-valid but theoretically-grounded agent-based model of collective behavior. Our model class is based on _active inference_, a framework for designing and describing adaptive systems where all aspects of cognition -- learning, planning, perception, and action -- are viewed as a process of inference [17, 18, 19]. Active inference originated in theoretical neuroscience as a normative account of self-organizing, biological systems as constantly engaged in predictive exchanges with their sensory environments [20, 21, 22].
## Collective motion models: from self-propelled particles to Bayesian agents
In popular self-propelled particle models, an individual's movement is described as driven by a combination of social and environmental forces. These forces are often treated as vectors that capture various tendencies seen in biological collective motion, such as repulsion, attraction (to neighbors or external targets), and alignment. These forces can then be combined with various nonlinearities and weights to capture mechanisms of interaction.
In contrast, the active inference approach forgoes specifying explicit vectorial forces, and instead starts by modelling all behavior as the solution to an inference problem, namely the problem of inferring the latent causes of sensations. Perception and action are updated to ensure that the agent better predicts its sensory inputs, using an internal model of its world (see Figure 1A). By equipping this internal model with expectations about the environment's underlying tendencies,'social forces' can emerge naturally as agents attempt to suppress sensory data that are mismatched with their expectations. This perspective-shift offers a unifying modelling ontology for describing adaptive behavior, while also resonating with cybernetic principles like homeostatic regulation and process theories of neural function like predictive coding [20, 23, 24].
Active inference blends the construct validity of cognitivist approaches with the first-principles elegance of physics-based approaches by invoking minimization of a single, all-encompassing objective function that explains behavior: surprise, or, under certain assumptions, _prediction error_. As an example of this perspective shift, in this work we investigate a specific class of generative models that can be used to account for the types of collective behaviors exhibited by animal groups. In doing so, we hope to showcase the benefits of the framework, while also proposing a testable model class for use in studies of biological collective motion.
## Active inference and generative models of behavior
A common pipeline in the quantitative study of animal behavior involves selecting a candidate behavioral algorithm or decision rule that may explain a given behavior, and then fitting the parameters of the candidate model to experimental or observational data [15, 25]. While these approaches often yield strong quantitative fits to data, the explanatory power of the models reduces to the interpretation of hard-coded parameters, which often have opaque relationships to real biological mechanisms or constructs [26].
In the active inference framework we rather ask: what is the minimal model an organism might have of its environment that is sufficient to explain its behavior? Behavior is then cast as the process by which the agent minimizes surprise or prediction error, with respect to this model of the world [20, 27]. The principle of prediction-error minimization enjoys empirical support in neuroscience [23, 28] and a theoretical basis in the form of the Free Energy Principle [20, 21], an account of all self-organizing systems that casts them as implicit models of their environments, ultimately in the service of minimizing the surprise (a.k.a., self-information) associated with sensory states [29, 30, 31].
What states-of-affairs count as surprising hinges on a generative model that can assign a likelihood to sensory data. When it comes to modelling behavior driven by this principle, the challenge then becomes specifying a generative or world model, whereby a particular pattern of behavior simply emerges by minimizing surprise.
According to active inference, agents minimize surprise by changing their beliefs about the world (changing which observations are considered surprising) or by acting on the world to avoid surprising sensory data. The former strategy is thought to correspond to passive
Figure 1: **A**: Schematic illustrating the Bayesian perspective in the context of our single agents, where the hidden states of the environment are segregated from a focal agent by means of sensory data \(y_{t}\) (right panel of **A**). This contrasts with classic self-propelled particle models (left panel of **A**), where environmental or social information manifests in terms of social forces on the focal individual, who emits its own actions based on hand-crafted decision-rules (e.g., changes to heading direction). **B**: Schematic illustration of the sector-specific distance tracking. The left panel shows a Bayesian network representation of a Markovian state space model, that captures time-evolution of a latent variable \(x_{1,...,T}\) and simultaneous observations \(y_{1,...T}\). Note that in practice we represent instantaneous _paths_ of \(x\) using generalized coordinates of motion \(\tilde{x}_{t}=(x_{t},x_{t}^{\prime},x_{t}^{\prime\prime},...)\). The middle panel of **B** shows how each component of the vectorial hidden state \(\textbf{x}=(x_{h,1},...,x_{h,L})\) is computed as the average nearest-neighbor distance for the neighbors within each visual sector. Observations are generated as noisy, Gaussian samples centered on the sector-wise distance hidden state (right panel of **B**). This requires the agent to estimate the true hidden state \(x_{t}\) by performing inference with respect to a generative model of how sensory data are generated \(p(\tilde{\textbf{y}},\tilde{\textbf{x}})\).
processes such as perception and learning, whereas the latter corresponds to processes like active sensing and movement. Action is thus motivated by the desire to generate sensations that are as least surprising as possible.
In this paper, we describe the motion of mobile, mutually-sensing agents as emerging from a process of collective active inference, whereby agents both estimate the hidden causes of their sensations, while also actively changing their position in space in order to minimize prediction error. In contrast to models that use pre-specified behavioral rules for generating behavior, generative models entail collective behavior by appealing to a probabilistic representation of how an organism's sensory inputs are generated.
## A generative model for a (social) particle
We now consider a sufficient generative model for an individual in a moving group. We equip this individual, hereafter referred as the focal agent, with a representation of a simple random variable: the local distance \(x\) between itself and its neighbors. For generality, we can expand this into a multivariate random variable to describe a set of distances \(\mathbf{x}=(x_{1},x_{2},...,x_{L})\) that track the distance between the focal agent and its neighbors within \(L\) different sensory sectors (see Figure 1B). We analogize these \(L\) sectors to adjacent visual fields of an agent's field of view [32, 33].
The focal agent possesses a model of the distance(s) \(\mathbf{x}\) and its sensations thereof \(\mathbf{y}\). In particular, our focal agent represents the dynamics of \(\mathbf{x}\) using a stochastic differential equation (a.k.a., a state-space model) defined by a drift \(\mathbf{f}\) and some stochastic forcing \(\omega\) -- we refer to this component of the generative model as the _dynamics model_. The stochastic term \(\omega\) captures the agent's uncertainty about paths of \(\mathbf{x}\) over time. The agent also believes it can sense \(\mathbf{x}\) via observations \(\mathbf{y}\), mediated by a sensory map, which we call the _observation model_. This is defined by some function \(\mathbf{g}\) with additive noise \(z\). The agent's generative model is then fully described by a pair of equations that detail 1) the time-evolution of the distance and 2) the simultaneous generation of sensory samples of the distance:
\[D\tilde{\mathbf{x}}=\tilde{\mathbf{f}}+\tilde{\boldsymbol{\omega}} \tilde{\mathbf{y}}=\tilde{\mathbf{g}}+\tilde{\mathbf{z}} \tag{1}\]
All random variables are described using generalized coordinates of motion with the convention \(\tilde{\mathbf{q}}=\{\mathbf{q},\mathbf{q}^{\prime},\mathbf{q}^{\prime\prime},...\}\). Generalized coordinates allow us to represent the trajectory of a random variable using a vector of local time derivatives (position, velocity, acceleration, etc.). The matrix \(D\) is a generalized shift operator that moves a vector of generalized coordinates up one order of motion \(D(x,x^{\prime},x^{\prime\prime},...)^{\top}=(x^{\prime},x^{\prime\prime},x^{ \prime\prime\prime},...)^{\top}\). The generalized functions \(\tilde{\mathbf{f}}\) and \(\tilde{\mathbf{g}}\) therefore operate on vectors of generalized coordinates (see Appendix A for details on generalized filtering).
## Generalized filtering and active inference
An agent equipped with this dynamic generative model then performs active inference by updating its beliefs (state estimation, or filtering) and control states (action) to minimize
surprise.
Inference entails updating a probabilistic belief over hidden states \(\tilde{\mathbf{x}}\) in the face of sensory data \(\tilde{\mathbf{y}}\). Our agents solve this filtering problem using _generalized filtering_[34, 35], an approximate algorithm for Bayesian inference and parameter estimation on dynamic state-space models. This is achieved by minimizing the variational free energy \(F\), a tractable upper bound on surprise (i.e., negative log evidence or marginal likelihood). The agent minimizes the free energy with respect to a belief distribution \(q(\tilde{\mathbf{x}})\) with parameters \(\nu\); this approximates the true posterior \(q_{\nu}(\tilde{\mathbf{x}})\approx p(\tilde{\mathbf{x}}|\tilde{\mathbf{y}})\), which is the optimal solution in the context of Bayesian inference. The true posterior \(p(\tilde{\mathbf{x}}|\tilde{\mathbf{y}})\) is difficult to compute for many generative models due to the difficult calculation of the marginal (log) likelihood \(\ln p(\tilde{\mathbf{y}})\). Variational methods circumvent this intractable marginalization problem by replacing it with a tractable optimization problem: namely, adjusting an approximate posterior to match the true posterior by minimizing \(F\) with respect to its (variational) parameters \(\nu\).
We parameterize \(q(\tilde{\mathbf{x}})\) as a Gaussian with mean-vector \(\tilde{\mathbf{\mu}}\); according to generalized filtering, \(\tilde{\mathbf{\mu}}\) is updated using a weighted sum of prediction errors:
\[\frac{d\tilde{\mathbf{\mu}}}{dt} \propto-\nabla_{\tilde{\mathbf{\mu}}}F(\tilde{\mathbf{\mu}},\tilde{ \mathbf{y}})\] \[\propto\tilde{\mathbf{\xi}}_{z}-\tilde{\mathbf{\xi}}_{\omega}\] \[\text{where }\tilde{\mathbf{\xi}}_{z} =\tilde{\Pi}^{z}(\tilde{\mathbf{y}}-\tilde{\mathbf{g}}(\tilde{ \mathbf{\mu}}))\] \[\tilde{\mathbf{\xi}}_{\omega} =\tilde{\Pi}^{\omega}(D\tilde{\mathbf{\mu}}-\tilde{\mathbf{f}}(\tilde {\mathbf{\mu}})) \tag{2}\]
The ensuing evidence accumulation can also be regarded as a generalisation of predictive coding [23, 36], where beliefs are updating using a running integration of prediction errors \(\tilde{\mathbf{\xi}}_{z},\tilde{\mathbf{\xi}}_{\omega}\).
While inference entails changing the approximate posterior means \(\tilde{\mathbf{\mu}}\) to account for sensory data, action entails changing the _data itself_ to better match the data to one's current beliefs. Similar to the update scheme in (2), actions are also updated by minimizing free energy:
\[\frac{da}{dt} =-\nabla_{a}F(\tilde{\mathbf{\mu}},\tilde{\mathbf{y}}(a))\] \[=-\nabla_{\tilde{\mathbf{y}}}F(\tilde{\mathbf{\mu}},\tilde{\mathbf{y }}(a))\nabla_{a}\tilde{\mathbf{y}}(a)\] \[=-\tilde{\mathbf{\xi}}_{z}^{\top}\nabla_{a}\tilde{\mathbf{y}}(a) \tag{3}\]
Actions thus are updated using a product of precision-weighted prediction errors and \(\tilde{\mathbf{\xi}}_{z}\) and a'sensorimotor contingency' \(\nabla_{a}\tilde{\mathbf{y}}(a)\) or reflex arc. This sort of'reflexive action' -- where control is simply targeted at minimizing sensory prediction errors -- underlies active inference accounts of motor control [22, 24], and can be formally related to proportional-integral-derivative (PID) control [37]. In the presence of precise prior beliefs (i.e., \(\tilde{\Pi}^{\omega}\gg\tilde{\Pi}^{z}\)), these prediction errors measure how far an agent's observations are from its expectations; the
agent then acts using (3) to minimize this deviation. Active inference agents are thus driven to act in a way that aligns with their (biased) expectations about the world [38]. In the next section, we will see how building a particular type of bias into each agent's generative model leads to the appearance of social forces-like terms in (3).
## Social forces as a consequence of predictive control
In particular, we take the agent's action to be its heading direction \(a=\mathbf{v}\), and examine the case where the agent observes the distance to its neighbors within a single sensory sector, i.e., \(L=1\), \(\mathbf{x}=(x_{1})\). We distinguish the agent's representation of the distance \(\mathbf{x}\) from the actual distance using the subscript \(h\). Therefore \(\mathbf{x}_{h}=(x_{h,1},x_{h,2},...,x_{h,L})\) denotes the average distances (and corresponding sensory samples\(\mathbf{y}_{h}\)) calculated using the actual positions of other agents. For the case of \(L=1\), and assuming the agent observes both the distance and its rate of change \(y^{\prime}_{h,1}\), this is:
\[x_{h,1} =\frac{1}{K}\sum_{j\in N_{in}}\|\mathbf{r}_{j}-\mathbf{r}\| y_{h,1} =x_{h,1}+z_{h,1}\] \[x^{\prime}_{h,1} =\frac{dx_{h,1}}{dt} y^{\prime}_{h,1} =x^{\prime}_{h,1}+z^{\prime}_{h,1} \tag{4}\]
\(N_{in}\) is the set of neighbors within the agent's single sensory sector, \(K\) is the size of this set, \(\mathbf{r}\) is the focal agent's position vector, and \(\mathbf{r}_{j}\) is the position vectors of neighbor \(j\). The sensory observation of the generalized distance \(\tilde{y}_{h}=(y_{h,1},y^{\prime}_{h,1})\) is a sample of the hidden state, perturbed by some additive noises \(\tilde{z}=(z_{h,1},z^{\prime}_{h,1})\). By expanding the active inference control rule in (3), we arrive at the following differential equation for the heading vector:
\[\frac{d\mathbf{v}}{dt} =\xi^{\prime}_{z}\Delta\hat{\mathbf{r}}\] \[\Delta\hat{\mathbf{r}} =\frac{1}{K}\sum_{j\in N_{in}}\frac{\Delta\mathbf{r}_{j}}{\| \Delta\mathbf{r}_{j}\|},\Delta\mathbf{r}_{j}=\mathbf{r}_{j}-\mathbf{r} \tag{5}\]
The average vector \(\Delta\hat{\mathbf{r}}\) is exactly the (negative)'sensorimotor contingency' term \(\nabla_{a}\tilde{\mathbf{y}}(a)\) from (3) (see Appendix A for detailed derivations):
\[\nabla_{\mathbf{v}}\tilde{y}(\mathbf{v})=\nabla_{\mathbf{r}}y=\frac{1}{K}\sum _{j\in N_{in}}\frac{\mathbf{r}-\mathbf{r}_{j}}{\|\mathbf{r}-\mathbf{r}_{j}\| }=-\Delta\hat{\mathbf{r}} \tag{6}\]
The simple action update in (5) means that the focal agent moves along a vector pointing towards the average position of its neighbors. Whether this movement is attractive or repulsive is determined by the sign of the sensory prediction error \(\xi^{\prime}_{z}\), and its magnitude
depends on the scale of the prediction error, i.e., how much observations deviate from the agent's predictions.
The presence of both attractive and repulsive forces depends on the agent's model of the distance dynamics, captured by the functional form of \(\tilde{\mathbf{f}}\). In particular, consider forms of \(\tilde{\mathbf{f}}\) that relax \(\mathbf{x}\) to some attracting fixed point \(\eta>0\). Equipped with such a stationary model of the local distance, inference dynamics (c.f., (2)) will constantly bias its predictions \(\mu\) according to the prior belief that the distance is pulled to \(\eta\). Given this biased dynamics model and the action update in (3), such an agent will move to ensure that distance observations \(\tilde{y}_{h}\) are consistent with the fixed point \(\eta\).
This action update shows immediate resemblance to the attractive and repulsive vectors common to social force-based models [4; 5; 7], which often share the following general form:
\[F_{attr} \propto\sum_{j\in Z_{A}}\frac{\mathbf{r}_{ij}}{\|\mathbf{r}_{ij}\|}\] \[F_{repul} \propto-\frac{1}{K}\sum_{j\in Z_{R}}\frac{\mathbf{r}_{ij}}{\| \mathbf{r}_{ij}\|}\]
where \(Z_{A},Z_{R}\) refer to distance-defined zones of attraction or repulsion, respectively. In the active inference framework, these social forces emerge as the derivative of the observations with respect to action \(\nabla_{a}\tilde{y}\), where the sign and magnitude of the sensory prediction error \(\xi^{\prime}_{z}\) determines whether the vector is attractive (towards neighbors) or repulsive (away from neighbors). The transition point between attraction and repulsion is therefore given by \(\eta\), the point at which prediction errors switch sign.
An important consequence of this formulation is that, unlike the action rule used in social force-based models, the'steady-state' solution occurs when all social forces disappear (when prediction errors vanish). In this case, the agent ceases to change its heading direction and adopts its previous velocity. This occurs when the agent's sensations align with its (biased) predictions \(y_{h,1}\approx\eta\). In classic SPP models, this is equivalent to the different social force vectors exactly cancelling each other.
We can therefore interpret social force-based models as limiting cases of distance-inferring active inference agents, because one can conceive of social forces as just those forces induced by free energy gradients; namely, the forces that drive belief-updating. In the case of our active inference agents, attractive and repulsive forces emerge naturally when we assume A) agents model the local distance dynamics as an attractor with some positive-valued fixed point \(\eta\); B) agents can act by changing their heading direction and C) agents observe at least the first time derivative of their observations (e.g., \(y^{\prime}_{h,1}\), but see Appendix A for detailed derivations).
It is worth highlighting the absence of an explicit alignment force in this model, consistent with experimental findings in several species of fish [16]. The heading vectors of neighbors nevertheless implicitly incorporated into the calculation of first-order prediction errors \(\xi^{\prime}_{z}\) (c.f., (A.40) in Appendix A). However, alignment forces as seen in the Vicsek model [6]
and Couzin model [7] can also be recovered if we assume agents have a generative model of the average angle between their heading vector and those of their neighbors (see B for derivations).
## Multivariate sensorimotor control
Having recovered social forces as free energy gradients in the case of a single sensory sector (\(L=1\)), we now revisit the general formulation of the generative model's state-space, where the hidden variable \(x\) is treated as an \(L\)-dimensional vector state: \(\mathbf{x}=(x_{1},x_{2},...,x_{L})\), with correspondingly \(L\)-dimensional observations \(\mathbf{y}=(y_{1},y_{2},...,y_{L})\).
Specifically, we consider each \(x_{l}\) to represent the average distance-to-neighbors within one of a subset of adjacent sensory sectors, where each sector is offset from the next by a fixed inter-sector angle (see Figure 1B for a schematic of the multi-sector set-up). The rest of the generative model is identical; the agents estimate these distances (and their temporal derivatives \(x^{\prime}_{l},x^{\prime\prime}_{l},...\)) while changing their heading direction to minimize free energy. Following the same steps as in the case of a single sector, the resulting update rule for \(\mathbf{v}\) is a weighted sum of'sector-vectors', where generalized observations from each sector-specific modality \(\tilde{y}_{l}\) are used to compute the prediction errors that scale the corresponding sector-vector. This generalizes the scalar-vector product in (3) to a matrix-vector product:
\[\frac{d\mathbf{v}}{dt}=\tilde{\boldsymbol{\xi}}_{z}^{\top}\Delta \hat{\mathbf{R}}\] \[\Delta\hat{\mathbf{R}}=-\begin{bmatrix}\nabla_{\mathbf{v}} \tilde{y}_{1}\\ \nabla_{\mathbf{v}}\tilde{y}_{2}\\ \vdots\\ \nabla_{\mathbf{v}}\tilde{y}_{L}\end{bmatrix} \tag{8}\]
where now the (negative) sensorimotor contingency \(-\nabla_{a}\tilde{\mathbf{y}}=\Delta\hat{\mathbf{R}}\) is a matrix whose rows contain the partial derivatives \(\nabla_{\mathbf{v}}\tilde{y}_{l}\) (i.e. the'sector-vectors'). Each sector-vector is a vector pointing towards the average neighbor within sector \(l\).
## Numerical results
Given a group of active inference agents -- equipped with the generative models described in previous sections -- it is straightforward to generate trajectories of collective motion by integrating each agent's heading vector over time: \(\dot{\mathbf{r}}_{i}=\mathbf{v}_{i},i\in\{1,2,...,N\}\) where \(N\) is the number of agents. We update all heading directions \(\{\mathbf{v}_{i}\}_{i=1}^{N}\) and beliefs \(\{\tilde{\boldsymbol{\mu}}_{i}\}_{i=1}^{N}\) in parallel via a joint gradient descent on their respective free energies:
Figure 2: **A**: Example snapshots of different collective states in schools of \(N=50\) active inference agents. Each line represents the trajectory of one individual, and color gradient represents time, from earliest (light blue) to latest (purple). The polarized regime in the left panel was simulated with the default parameters listed in Table E.1 in Appendix E. The milling regime (middle panel) was achieved by increasing the variance of velocity fluctuations (encoded in \(\sigma_{z^{\prime},h}^{2}\)) from 0.01 to 0.05 (relative to the default configuration) and increasing \(\lambda_{z}\) from 1.0 to 1.2. The disordered regime was achieved by increasing the sensory smoothness parameter to 2.0 and decreasing \(\eta\) from 1.0 to 0.5 and \(\alpha\) from 0.5 to 0.1 (relative to the default configuration). **B**: Average polarization (left) and milling probability (right) shown as a function of the two factorized components of the sensory precision, \(\Gamma_{z}\) (log-transformed) and \(\lambda_{z}\). For each combination of precision parameters, we ran 500 independent trials of ‘free schooling,’ and then averaged the quantities of interest across trials. Each ‘free schooling’ trial lasted 15 seconds (1500 time steps with \(dt=0.01s\)); the time-averaged metrics (polarization and milling probability, respectively, were computed from the last 10 seconds of the trial.
\[\dot{\mathbf{v}}_{1} =-\nabla_{\mathbf{v}_{1}}F(\tilde{\boldsymbol{\mu}}_{1},\tilde{ \mathbf{y}}_{1}) \dot{\tilde{\boldsymbol{\mu}}}_{1} =-\nabla_{\tilde{\boldsymbol{\mu}}_{1}}F(\tilde{\boldsymbol{\mu}}_ {1},\tilde{\mathbf{y}}_{1})\] \[\dot{\mathbf{v}}_{2} =-\nabla_{\mathbf{v}_{2}}F(\tilde{\boldsymbol{\mu}}_{2},\tilde{ \mathbf{y}}_{2}) \dot{\tilde{\boldsymbol{\mu}}}_{2} =-\nabla_{\tilde{\boldsymbol{\mu}}_{2}}F(\tilde{\boldsymbol{\mu} }_{2},\tilde{\mathbf{y}}_{2})\] \[\vdots \dot{\tilde{\boldsymbol{\mu}}}_{N} =-\nabla_{\mathbf{v}_{N}}F(\tilde{\boldsymbol{\mu}}_{N},\tilde{ \mathbf{y}}_{N}) \dot{\tilde{\boldsymbol{\mu}}}_{N} =-\nabla_{\tilde{\boldsymbol{\mu}}_{N}}F(\tilde{\boldsymbol{\mu} }_{N},\tilde{\mathbf{y}}_{N}) \tag{9}\]
For the simulation results shown here, each agent tracks the average distance \(x_{l}\) within a total of \(L=4\) sensory sectors that each subtend \(60^{\circ}\) (starting at \(-120^{\circ}\) and ending at \(+120^{\circ}\), relative to the focal agent's heading direction) and observe the sector-specific distances calculated using all neighbors lying within \(5.0\) units of the focal agent's position. Each agent represents the vector of local distances as a generalized state with 3 orders of motion: \(\tilde{\mathbf{x}}=\{\mathbf{x},\mathbf{x}^{\prime},\mathbf{x}^{\prime\prime}\}\), \(\tilde{\boldsymbol{\mu}}=\{\boldsymbol{\mu},\boldsymbol{\mu}^{\prime}, \boldsymbol{\mu}^{\prime\prime}\}\). Agents can observe the first and second orders of the distance \(\tilde{\mathbf{y}}=\{\mathbf{y},\mathbf{y}^{\prime}\}\), i.e. the distance itself and its instantaneous rate-of-change. In the numerical results to follow, we use active inference to study the relationship between the properties of individual cognition (e.g., the parameters of agent-level generative models) and collective phenomenology.
### Collective regimes
Simulated groups of these distance-inferring agents display robust, cohesive collective motion (see Figure 2A and Supplemental Movies 1-5). Figure 2A displays examples of different types of group phenomena exhibited in groups of active inference agents, whose diversity and types resemble those observed in animal groups [39, 40] and in other collective motion models [6, 7, 41]. These range from directed, coherent movement with strong inter-agent velocity correlations ('polarized motion') to group rotational patterns, like milling, which features high angular momentum around the group's center-of-mass.
### Relating individual beliefs to collective outcomes
In all but the most carefully constructed systems [26, 42], the relationship between individual and collective representations is often opaque. In particular, the relationship between individual level uncertainty or 'risk' and collective behavior is an open area of research. For instance, increased risk-sensitivity at the level of the individual may lead to to decreased risk-encoding at the collective level [43]. Inspired by these observations, we use active inference to examine the quantitative relationship between uncertainty at the individual level and collective phenomenology. We begin by examining common metrics of group motion like polarization and angular momentum [7]. In Figure 2B we explore how polarization and angular momentum are affected by two components of agent-level sensory uncertainty (i.e., inverse sensory precision): 1) the absolute precision that agents associate to sensory noise, encoded in the parameter \(\Gamma_{z}\) and; 2) the autocorrelation or'smoothness' of that noise, encoded in the parameter \(\lambda_{z}\). Intuitively, \(\Gamma_{z}\) encodes the variance or amplitude that the agent
associates with the noise in each of its \(L\) sensory sectors \(y_{l}\), and \(\lambda_{z}\) is a how'smooth' the agent believes the noise i [44, 35]. A higher value of \(\lambda_{z}\) implies that the agent believes sensory noise are more serially-correlated (e.g., random fluctuations in optical signals caused by smooth variations in refraction due to turbulence in water). We refer the reader to Appendix C for details on how these two parameters \(\Gamma_{z}\) and \(\lambda_{z}\) jointly the parameterize the precision matrix of the agent's observation model.
Figure 2B shows how the different components (amplitude and autocorrelation) of the agent's sensory uncertainty determine group behavior, as quantified by average polarization and milling probability. Average polarization is defined here as the time average of the polarization of the group, where the polarization at a given time \(p(t)\) measures the alignment of velocities of agents comprising the group [39, 7]:
\[\hat{p}=\frac{1}{T}\sum_{t=1}^{T}p(t)\hskip 28.452756ptp(t)=\frac{1}{N}\|\sum_{i =1}^{N}\mathbf{v}_{i}(t)\| \tag{10}\]
High average polarization indicates directed, coherent group movement. The left panel of Figure 2B shows how \(\Gamma_{z}\) and \(\lambda_{z}\) contribute to the average polarization of the group. An increase in either parameter causes polarization to decrease and angular momentum to increase, reflecting the transition from directed motion to a milling regime, where the group rotates around its center of mass. We calculate the milling probability (c.f. right panel of Figure 2B) as the proportion of trials where the time-averaged angular momentum surpassed 0.5. The average angular momentum can be used to quantify the degree of rotational motion, and is calculated as the time- and group-average of the individual angular momenta around the groups' center of mass \(\mathbf{c}\):
\[\hat{m}=\frac{1}{T}\sum_{t=1}^{T}m(t)\hskip 28.452756ptm(t)=\frac{1}{N}\|\sum_{i =1}^{N}\mathbf{r}_{ic}(t)\times\mathbf{v}_{i}(t)\| \tag{11}\]
where \(\mathbf{r}_{ic}\) is a relative position vector for agent \(i\), defined as the vector pointing from the group center \(\mathbf{c}\) to agent \(i\)'s position: \(\mathbf{r}_{i}-\mathbf{c}\).
These collective changes can be understood in light of the magnitude of action updates, which depend on the how the scale of first-order prediction errors \(\boldsymbol{\xi}_{z}^{\prime}\) is tuned by \(\Gamma_{z}\) and \(\lambda_{z}\):
\[\boldsymbol{\xi}_{z}^{\prime}\propto\Gamma_{z}\lambda_{z}^{2} \tag{12}\]
In practice, this means that as the group believes in more predictable (less rough) first-order sensory information \(\mathbf{y}_{z}^{\prime}\), the group as a whole is more likely to enter rotational, milling-like regimes. However, the enhancing effect of these first-order prediction errors \(\boldsymbol{\xi}_{z}^{\prime}\) on rotational motion is bounded; if prediction errors are over-weighted (e.g. high \(\Gamma_{z}\) and/or \(\lambda_{z}\)), the group becomes more polarized again and likely to fragment. This fragmentation probability occurs at both low and high levels of \(\Gamma_{z}\) and \(\lambda_{z}\), implying that there is an optimal range of individual-level sensory precision where cohesive group behavior (whether polarized or
milling) is stable. Thus, our model predicts that maintaining beliefs about reliable information is neither required, or in fact even desirable, for animals in order to facilitate collective motion.
We have seen how one can use active inference to relate features of individual-level beliefs (in this case, beliefs about sensory precision) to collective patterns, focusing in the present case on common metrics for studying collective motion like polarization and the tendency to mill.
In the following sections, we move from looking at group-level patterns that occur during free movement, to studying the consequences of individual-level uncertainty for collective information-processing. We begin by investigating how collective information transfer depends on individual-level beliefs about the relative precisions associated with different types of sensory information.
Figure 3: **A**: Collective accuracy as a function of proportion informed or \(p_{inf}\) for differing values of the sensory precision assigned to social observations \(\Gamma_{z-\text{Social}}\). Average accuracy for each condition (combination of \(p_{inf},\Gamma_{z\text{Social}},\Gamma_{z-\text{Target}}\)) was computed as the proportion of successful hits across 500 trials. Here, the average accuracy is further averaged across all the values of the \(\Gamma_{z-\text{Target}}\) parameter, meaning each accuracy here is computed as the average of 15000 total trials (500 trials per condition \(\times\) 30 different values of \(\Gamma_{z-\text{Target}}\)). **B**: Collective accuracy as a function of both the social and target precisions (\(\Gamma_{z-Social},\Gamma_{z-Target}\), shown in log-scale) averaged across values of \(p_{inf}\) ranging from \(p_{inf}=0.15\) to \(p_{inf}=0.40\). Each condition’s accuracy was computed as the proportion of accurate decisions from 500 trials.
### Collective information transfer
In this section, we take inspiration from the collective leadership and decision-making literature to investigate how individuals in animal groups can collectively navigate to a distant target [45, 46, 47, 48]. This phenomenon is an example of effective leadership through collective information transfer and is remarkable for a number of reasons; one that speaks to its emergent nature, is the fact that these collective decisions are possible despite -- and indeed even _because of_ -- the presence of uninformed individuals in the group [46]. Figure 3A shows that active inference agents engaged in this task reproduce a result from earlier work [45] on the relationship between the proportion of uninformed individuals and collective accuracy. Namely, as the proportion of informed individuals increases, so does the accuracy of reaching the majority-preferred target. In the same vein as earlier sections, we also investigated the dependence of this effect, as well as the average target-reaching accuracy, on individual-level beliefs.
We operationalize the notion of an agent being 'informed' (about an external target) by introducing a new latent variable to its generative model; this variable \(x_{\text{target}}\) represents the distance between the informed agent's position \(\mathbf{r}\) and a point-mass-like target with position vector \(\mathbf{T}=[T_{1},T_{2}]\). We thus define this new hidden state and observation as follows: \(x_{\text{target}}=\|\mathbf{T}-\mathbf{r}\|\), \(y_{\text{target}}=x_{\text{target}}+z_{\text{target}}\). Just like the'social' distance observations \(\mathbf{y}_{h}\), this target distance observation \(y_{\text{target}}\) represents a (potentially-noisy) observation of the true distance \(x_{\text{target}}\). As before, the agent s represent both the target distance \(x_{\text{target}}\) and its observations \(y_{\text{target}}\) using generalized coordinates of motion. Each informed agent has a dynamics model of \(\tilde{x}_{\text{target}}\), whereby they assume the target-distance is driven by some drift function \(f_{\text{target}}(x_{\text{target}})=-\alpha_{t}x_{\text{target}}\) which relaxes to \(0\). As with the social distances, we truncate the agent's generalized coordinates embedding of the target distance to three orders of motion and the generalized observations to two orders of motion.
Each informed agent maintains a full posterior belief \(\tilde{\mathbf{\mu}}=(\tilde{\mu}_{1},\tilde{\mu}_{2},...,\tilde{\mu}_{L},\tilde {\mu}_{\text{target}})\) about the local distances \(\tilde{x}_{1},\tilde{x}_{2},...,\tilde{x}_{L}\) as well as the target distance \(\tilde{x}_{\text{target}}\).
Using identical reasoning to arrive at the action updates in (5) and (8), one can augment the matrix-vector product in (8) with an extra sensorimotor contingency and prediction error that represents target-relevant information:
\[\frac{d\mathbf{v}}{dt}=\tilde{\mathbf{\xi}}_{z}^{\top}\begin{bmatrix} \Delta\hat{\mathbf{R}}\\ \Delta\mathbf{T}\end{bmatrix}\] \[\Delta\mathbf{T}=-\nabla_{\mathbf{v}}\tilde{y}_{\text{target}}= \frac{\mathbf{T}-\mathbf{r}}{\|\mathbf{T}-\mathbf{r}\|} \tag{13}\]
This matrix-vector product can then be seen as a weighted combination of social and target vectors, with the weights afforded to each equal to their respective precision-weighted prediction errors:
\[\frac{d\mathbf{v}}{dt}=\underbrace{\xi_{\text{social}}\Delta\hat{\mathbf{R}}}_{ \text{Social vector}}+\underbrace{\xi_{\text{target}}\Delta\mathbf{T}}_{\text{ Target vector}} \tag{14}\]
This expression is analogous to the velocity update in Equation (3) of Ref. [45], where a 'preferred direction' vector is integrated into the agent's action update with some pre-determined weight. This weight is described as controlling the relative strengths of non-social vs. social information. For active inference agents, the weighting of target-relevant information emerges naturally as a precision-weighted prediction error (here represented as \(\xi_{\text{target}}\)), and the target-vector itself is equivalent to a sensorimotor reflex arc, that represents the agent's assumptions about how the local flow of the target distance \(y^{\prime}_{\text{target}}\) changes as a function of the agent's heading direction \(\mathbf{v}\). An important consequence of this construction, is that, unlike in previous models where this weight is 'baked-in' as a fixed parameter, the weight assigned to the target vector is dynamic, and fluctuates according to how much the agent's expectations about the target distance \(\tilde{\mu}_{\text{target}}\) predict the sensed target distance \(y_{\text{target}}\).
Using this new construction, we can simulate a group of active inference agents, in which some proportion \(p_{inf}\) of agents represent this extra set of target-related variables as described above. To generate \(\tilde{y}_{\text{target}}\) observations for these informed individuals, we placed a spatial target at a fixed distance away from the group's center-of-mass and then allowed the informed individuals to observe the generalized target distance \(\tilde{y}_{\text{target}}=(y_{\text{target}},y^{\prime}_{\text{target}})\). We then integrated the collective dynamics over time and measured the accuracy with which the group was able to navigate to the target (see Materials and Methods for details). By performing hundreds of these trials for different values of \(p_{inf}\), we reproduced the results of Ref. [45] in Figure 3. We see that as the number of informed individuals increases, collective accuracy increases. However, this performance gain depends on the agents' beliefs about sensory precision, which we now dissociate into two components: \(\Gamma_{\text{z-Social}}\) ( the precision assigned to the social distance observations) and \(\Gamma_{\text{z-Target}}\) (the precision assigned to target distance observations). By varying these two precisions independently, which respectively scale \(\xi_{\text{social}}\) and \(\xi_{\text{target}}\) in (14), we can investigate the dependence of collective accuracy on the beliefs of individual agents about the uncertainty attributed to different sources of information.
Figure 3A shows the average collective accuracy as a function of \(p_{inf}\), for different levels of the social distance precision \(\Gamma_{\text{zSocial}}\). The pattern that emerges is that the social precision, that optimizes collective decision-making, sits within a bounded range. The general effect of social precision is to essentially balance the amplification of target-relevant information throughout the school, with the need for the group to maintain cohesion. When social precision is too high, agents over-attend to social information and are not sensitive to the information provided by informed individuals; when it is too low, the group is likely to fragment and will not accurately share target-relevant information; meaning only the informed individuals will successfully reach the target. Figure 3B shows that a similar optimal precision-balance exists for \(\Gamma_{\text{zTarget}}\). Here, we show average collective accuracy (averaged across values of \(p_{inf}\) as a function of social- and target-precision. Maximizing collective accuracy appears to rely on agents balancing the sensory precision they assign to different
sources of information; under the active inference model proposed here, this balancing act can be exactly formulated in terms of the variances (inverse precisions) afforded to different types of sensory cues.
### Online plasticity through parameter learning
The ability of groups to tune their response to changing environmental contexts, such as rapid perturbations or informational changes, is a key feature of natural collective behavior [43, 49]. However, many self-propelled particle models lack a generic way to incorporate this behavioral sensitivity [45] and exhibit damped, 'averaging'-like responses to external inputs [50]. This results from classical models usually equipping individuals with fixed interaction rules and constant weights for integrating different information sources. While online weight-updating rules and evolutionary algorithms have been used to adaptively tune single-agent parameters in some cases [45, 48, 51], these approaches are often not theoretically principled (with some exceptions [52, 53]) and driven by specific use-cases.
Active inference offers an account of tune-able sensitivity, using the same principle used to derive action and belief-updating in previous sections: minimizing surprise. In practice, this sensitivity emerges when we allow agents to update their generative models in real-time. Updating generative model parameters over time is often referred to as "learning" in the active inference literature [54], since it invokes the notion of updating beliefs about parameters rather than states, where parameters and states distinguish themselves by the fast and slow timescales of updating, respectively. We leverage this idea to allow agents to adapt their generative models and thus adapt their behavioral rules, referring to this process as _plasticity_, in-line with the notion of short-term plasticity in neural circuits [55]. To enable agents to update generative model parameters, we can simply augment the coupled gradient descent in (9) with an additional dynamical equation, this time by minimizing free energy with respect to model parameters, which we subsume into a set \(\theta\):
\[\dot{\theta}=-\nabla_{\theta}F(\tilde{\mathbf{\mu}},\tilde{\mathbf{y}},\theta) \tag{15}\]
The generative model parameters \(\theta\) represent the statistical contingencies or regularities agents believes govern their sensory world; this includes the various precisions associated with sensory and process noises \(\tilde{\Pi}^{z},\tilde{\Pi}^{\omega}\) and the parameters of the dynamics and observation models, \(\tilde{\mathbf{f}},\tilde{\mathbf{g}}\). Since the free energy is a smooth function of all the generative model parameters, in theory learning can be done with respect to any parameter using procedure entailed by (15).
In practice, combining parameter-learning with active inference usually implies a separation of timescales, whereby learning or plasticity occurs concurrently to state inference and action but at a slower update rate. In all the results shown here, agents update parameters an order of magnitude more slowly than they update beliefs or actions. To furnish a interpretable example of plasticity, in the simulations described here, we enabled agents to update their beliefs about the sensory smoothness parameter \(\lambda_{z}\). We chose sensory smoothness due
Figure 4: **A**: Schematic of the sensory perturbation protocol. The ‘pseudo-motion’ stimulus consists of repetitively perturbing the agent’s sensory sectors with a moving wave of prediction errors in the agent’s velocity-observation modality \(\mathbf{y}_{h}^{\prime}\). The top panel shows stimulus pattern as a heatmap over (amplitude over time) with two repetitions, starting from negative (red, sectors 1 and 2) and transitioning to positive (blue, sectors 3 and 4) prediction errors. The sign-switch in the stimulus (from negative to positive) mimics a moving object that first moves towards focal individual and then moves away. The temporal order of the stimulus across the sectors can be used to selectively emulate a right-moving vs. left-moving object, relative to the focal individual’s heading-direction. The bottom panel shows how the stimulated agent’s beliefs about the distance hidden state \(\boldsymbol{\mu}\) changes over the course of the motion stimulus. **B**: Response magnitude to a perturbation in presence or absence of parameter learning. Left panel: example pair of 2-D trajectories of active inference agents with matched pre-perturbation histories, in response to an individual perturbation. The ability to perform parameter-learning is left on in one stochastic realization (green) and turned off in the other (blue), following the perturbation. Right panel: initialization-averaged collective responses (group turning angle) to perturbation of active inference agents when learning is enabled or disabled. The perturbation response of a 2-zone self-propelled particle model (purple line) based on [45] is also shown for reference. **C**: Collective response as a function of the number of perturbed individuals, comparing simulations where parameter-learning is enabled to those where it’s disabled. Shown is the mean response with highest density regions (HDRs) of integrated turning magnitude (left) and response probability (right) computed from \(N_{i}=200\) independent initializations of each condition. For each initialization, the average metric is computed across \(N_{r}=50\) independent realizations that were run forward from the same point in time, following a sensory prediction error perturbation (to a randomly-chosen set of perturbed agents). Response probability is computed as the proportion of independent realizations, per initialization, where the group turning rate exceeded \(\pi\) radians within the first 10 seconds of the perturbation.
to its straightforward relationship to the magnitude of sensory prediction errors (c.f. the relation in (12) and Appendix C). As agents tune \(\lambda_{z}\) to minimize free energy, belief updating and action will at the same time become quadratically more or less responsive to sensory information.
One example of where behavioral plasticity is crucial for collective information processing is a group's ability to rapidly amplify behaviorally-relevant information, e.g., detecting the presence of a predator [43, 56, 57]. To study the effect of behavioral plasticity on collective responsiveness, we perturbed single agents in groups of active inference agents while enabling or disabling online plasticity. We perturbed groups by inducing transient 'phantom' prediction errors in random subsets of agents and measuring the resulting turning response of the group (see Materials and Methods for details). These prediction errors were structured (see Figure 4A) to mimic a transient visual stimulus, e.g., a loom stimulus or approaching predator [58], which reliably induces a sustained turning response in the chosen individual [50]. Figure 4 shows the effect of enabling plasticity on the size and sensitivity of collective responses to these perturbations. Not only do plasticity-enabled groups respond more strongly to perturbations of single-agents, compared to their plasticity-disabled counterparts (4B), but the magnitude of the collective response is also more sensitive to the size of the perturbation (4C). As has been measured in biological collectives [59], the plasticity-enabled groups collectively encode the size of perturbations with higher dynamic range than plasticity-disabled controls. This can be interpreted as an enhanced ability to collectively discriminate between inputs of different magnitude
By updating generative models over time, the active inference framework provides a flexible and theoretically-principled approach to modeling adaptive, collective behavior with tuneable sensitivity, that eschews ad-hoc update rules or expensive simulations driven by evolutionary algorithms. Recall that the plasticity mechanism proposed here is not limited to updating beliefs about sensory smoothness: it can be extended to update beliefs about any model parameter in a similar manner. The ability to adapt generative model parameters, and hence individual-level behavioral rules, in real-time represents a promising avenue for future research in active inference and collective behavior, and may lead to more biologically-plausible hypotheses about the mechanisms underlying collective behavior in the natural world.
## Discussion
In this work, we proposed active inference as a flexible, cognitively-inspired model class that can both be used in the theoretical study of collective motion, and in an empirical setting as an individual-level model of collective animal behaviors. By framing behavior as the consequence of prediction-error minimization -- with respect to an individual's world model -- we offer examples of how naturalistic collective motion emerges in, where individual behavior is driven by the imperative to minimize the surprisal associated with sensory signals. Under mild distributional assumptions, this surprise is scored by an interpretable proxy; namely, prediction error. In the particular case of collective motion, we equipped a group of active in
ference agents with a simple generative model of local social information, operationalized as the average distance-to-neighbors and its rates-of-change. Using this individual-level model, we recovered and generalized the social forces that have been the core mechanism in classical SPP models of collective motion. The active inference framework also provides a probabilistic interpretation of ad-hoc 'weight' parameters that are often used in these models, in terms of the precisions that agents associate to different types of sensory information.
We have also shown how the active inference framework can be used to characterize the relationship between generative model parameters and emergent information-processing capacities, as measured by collective information transfer and responsiveness to external perturbations. Active inference's generality allows us to relax the typically-static behavioral rules of SPP models, by enabling agents to flexibly tune their sensitivity to prediction errors. This is achieved via principled processes like parameter learning (i.e., 'plasticity'), and can be used to model naturalistic features of collective behavior, such as the tendency to amplify salient (i.e., precise) information, that have largely evaded modelling in the SPP paradigm, except in cases where adaptation rules are explicitly introduced [45, 48]. However, when we simply allow agents to update parameters, in addition to beliefs and agents, using the principle of surprise-minimization, many hallmarks of these naturalistic behaviors can be easily obtained.
By providing a flexible modeling approach that casts perception, action, and learning as manifestations of the single drive to minimize surprise, we have highlighted active inference as a novel toolbox for studying collective behavior in natural systems. Future work in this area could explore how the framework can be used to investigate other forms of collective behavior (not just collective motion), like multi-choice decision-making and social communication [60]. The results shown in the current work serve primarily as a proof of concept: we started by writing down a specific, hypothetical active inference model of agents engaged in group movement, and then generated naturalistic behaviors by integrating the resulting equations of motion for this particular model. Taking inspiration from fields like computational psychiatry [61, 62], we emphasize the ability to move from simple forward modelling of behavior to data-driven _model inversion_, whereby one hopes to infers the values of parameters that best explain empirical data (of e.g. behavioral movement data). Both the selection of model structure and the fitting of model parameters can be performed through methods of Bayesian model inversion and system identification methods like Bayesian model selection, averaging or reduction.
## Materials and Methods
For all simulations, we randomly initialized the positions and (unit-magnitude) velocities of \(N\) particles, and integrated the equations of motion for active inference and generalized filtering using a forwards Euler-Maruyama scheme with an integration window of \(\Delta t=0.01s\). Group size \(N\) the length of the simulation \(T\) (in seconds) varied based on the experiment. At any timestep \(\tau\) of a simulation, we integrate the active inference equations for perception (filtering) and control (action) for one timestep each before using the updated ve
locity to displacement the positions of all particles using the following discrete equation: \(\mathbf{r}(\tau+\Delta t)=\mathbf{r}(\tau)+\Delta t\mathbf{v}(\tau)+z_{a}\) where \(z_{a}\) is normally-distributed 'action noise' with statistics \(z_{a}\sim\mathcal{N}(z_{a};0,\sigma_{a}^{2}\Delta t)\), where \(\sigma_{a}^{2}=0.01\) unless stated otherwise. Detailed background on generalized filtering, active inference, and derivations specific to the generative model we used for collective motion can be found in Appendix A. All other parameters used for simulations, unless stated otherwise, are listed in Table E.1 of Appendix E. The code (written in JAX and Julia) used to perform simulations can be found in the following open-source repository: [https://github.com/conorheins/collective_motion_actinf](https://github.com/conorheins/collective_motion_actinf).
### Collective information transfer experiments
For each trial of collective target-navigation, we initialized a group of \(N=30\) agents with random positions and velocities (centered on the origin) and augmented the generative models of a fixed proportion \(p_{inf}\) of the total number of agents, where \(p_{inf}\) ranged from \(0.05\) to \(1.0\), with an extra sensory modality and hidden state that represents the distance to the target with position vector \(\mathbf{T}\), where distance of the target was always \(10\) units from the origin. We measured collective accuracy as follows: we count a given trial as successful if the group is able to navigate to within \(0.25\) units of the target without losing cohesion within \(T=15\) seconds (the length of each trial). The accuracy for a given experimental condition was then computed as the proportion of successes observed in \(500\) total trials.
### Perturbation experiments
For the perturbation experiments, we simulated \(N_{i}=200\) independent runs of \(N=50\) agents, which we term independent _initializations_. Each initialization is distinguished by the agents' random starting positions, velocities, and seeds used to sample trajectories of action and observation noise. For each initialization, we integrated the collective dynamics until a steady state dynamic was reached (the pre-perturbation period) We chose this to be \(T=100\) seconds, a point at which metrics like average polarization, angular momentum, and median nearest-neighbor distance were highly likely to have stopped changing and fluctuate around a stationary value. At the end of each initialization's pre-perturbation period, we then split each initialization into two further sets of \(N_{r}=50\) parallel runs, each of which we deem a _realization_. Each realization is distinguished from the others based on the random seed used to A) generate the noises on the actions and noises for that realization; and B) select the candidate agent(s) for perturbation. Note that the splitting of seeds at the end of the pre-perturbation period means that each realization has an identical history up for its first \(100\) seconds. In the first set of \(N_{r}=50\) realizations, we enabled plasticity (parameter learning of \(\lambda_{z}\)), and in the second set, we left it disabled. After enabling learning in one set of realizations, we included an additional 'burn-in' period of \(12\) seconds of continued dynamics, to account for any transient group effects introduced by enabling learning _per se_. After the burn-in period ended, we perturbed random subsets of agents in both learning-enabled and -disabled realizations (\(2\%\) - \(50\%\) of the group, i.e., \(1\) to \(25\) agents). We added to the ongoing zeroth-order prediction errors \(\boldsymbol{\xi}_{z}^{\prime}\) of the perturbed individuals, two sequential waves
of negative (\(-1\)) to positive values (\(+1\)), each lasting \(0.8s\) and moving from left to right, relative to the perturbed agent's heading vector. We tracked the group turning angle, relative to its initial heading direction at the beginning of the perturbation for \(20s\) to generate the plots in Figure 4B and C.
Acknowledgements:The authors would like to thank Brennan Klein, Jake Graving, and Armin Bahl for helpful comments and discussion during the writing of this manuscript. CH would like to thank Dimitrije Markovic, Thomas Parr, and Manuel Baltieri for helpful discussions related to generalized filtering and continuous-time and -space active inference, and Maya Polovitskaya for creating the fish schematic used in the figures. CH and IDC acknowledge support from the Office of Naval Research Grant N0001419-1-2556, Germany's Excellence Strategy-EXC 2117-422037984 (to IDC) and the Max Planck Society, as well as the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant agreement (to IDC; #860949). CH acknowledges the support of a grant from the John Templeton Foundation (61780). LD is supported by the Fonds National de la Recherche, Luxembourg (Project code: 13568875). This publication is based on work partially supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). RPM is supported by UK Research and Innovation Future Leaders Fellowship MR/S032525/1 and the Templeton World Charity Foundation Inc. TWCF-2021-20647. KF is supported by funding for the Wellcome Centre for Human Neuroimaging (Ref: 205103/Z/16/Z), a Canada-UK Artificial Intelligence Initiative (Ref: ES/T01279X/1) and the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).
## References
* [1] Peter F Major and Lawrence M Dill. "The three-dimensional structure of airborne bird flocks". In: _Behavioral Ecology and Sociobiology_ 4.2 (1978), pp. 111-122.
* [2] Scott Camazine, Jean-Louis Deneubourg, Nigel R Franks, James Sneyd, Eric Bonabeau, and Guy Theraulaz. _Self-organization in biological systems._ Princeton university press, 2003.
* [3] Michael Rubenstein, Christian Ahler, and Radhika Nagpal. "Kilobot: A low cost scalable robot system for collective behaviors". In: _2012 IEEE International Conference on Robotics and Automation_. IEEE. 2012, pp. 3293-3298.
* [4] Ichiro AOKI. "A Simulation Study on the Schooling Mechanism in Fish". In: _NIPPON SUISAN GAKKAISHI_ 48.8 (1982), pp. 1081-1088. doi: 10.2331/suisan.48.1081.
* [5] Craig W Reynolds. "Flocks, herds and schools: A distributed behavioral model". In: _Proceedings of the 14th annual conference on Computer graphics and interactive techniques_. 1987, pp. 25-34.
* [6] Tamas Vicsek, Andras Czirok, Eshel Ben-Jacob, Inon Cohen, and Ofer Shochet. "Novel type of phase transition in a system of self-driven particles". In: _Physical review letters_ 75.6 (1995), p. 1226.
* [7] Iain D Couzin, Jens Krause, Richard James, Graeme D Ruxton, and Nigel R Franks. "Collective memory and spatial sorting in animal groups". In: _Journal of theoretical biology_ 218.1 (2002), pp. 1-12.
* [8] David JT Sumpter. "The principles of collective animal behaviour". In: _Philosophical transactions of the royal society B: Biological Sciences_ 361.1465 (2006), pp. 5-22.
* [9] John Toner and Yuhai Tu. "Flocks, herds, and schools: A quantitative theory of flocking". In: _Physical review E_ 58.4 (1998), p. 4828.
* [10] Eric Bertin, Michel Droz, and Guillaume Gregoire. "Boltzmann and hydrodynamic description for self-propelled particles". In: _Physical Review E_ 74.2 (2006), p. 022101.
* [11] Pierre Degond and Sebastien Motsch. "Continuum limit of self-driven particles with orientation interaction". In: _Mathematical Models and Methods in Applied Sciences_ 18.supp01 (2008), pp. 1193-1215.
* [12] James E Herbert-Read, Andrea Perna, Richard P Mann, Timothy M Schaerf, David JT Sumpter, and Ashley JW Ward. "Inferring the rules of interaction of shoaling fish". In: _Proceedings of the National Academy of Sciences_ 108.46 (2011), pp. 18726-18731.
* [13] Daniel S Calovi, Ugo Lopez, Sandrine Ngo, Clement Sire, Hugues Chate, and Guy Theraulaz. "Swarming, schooling, milling: phase diagram of a data-driven fish school model". In: _New journal of Physics_ 16.1 (2014), p. 015026.
* [14] Andrew M Hein, Michael A Gil, Colin R Twomey, Iain D Couzin, and Simon A Levin. "Conserved behavioral circuits govern high-speed decision-making in wild fish shoals". In: _Proceedings of the National Academy of Sciences_ 115.48 (2018), pp. 12224-12228.
* [15] Jacques Gautrais, Francesco Ginelli, Richard Fournier, Stephane Blanco, Marc Soria, Hugues Chate, and Guy Theraulaz. "Deciphering interactions in moving animal groups". In: _PLoS computational biology_ 8.9 (2012), e1002678.
* [16] Yael Katz, Kolbjorn Tunstrom, Christos C Ioannou, Cristian Huepe, and Iain D Couzin. "Inferring the structure and dynamics of interactions in schooling fish". In: _Proceedings of the National Academy of Sciences_ 108.46 (2011), pp. 18720-18725.
* [17] Karl J Friston, Jean Daunizeau, and Stefan J Kiebel. "Reinforcement learning or active inference?" In: _PloS one_ 4.7 (2009), e6421.
* [18] Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, and Giovanni Pezzulo. "Active inference: a process theory". In: _Neural computation_ 29.1 (2017), pp. 1-49.
* [19] Thomas Parr, Giovanni Pezzulo, and Karl J Friston. _Active inference: the free energy principle in mind, brain, and behavior_. MIT Press, 2022.
* [20] Karl Friston. "A theory of cortical responses". In: _Philosophical transactions of the Royal Society B: Biological sciences_ 360.1456 (2005), pp. 815-836.
* [21] Karl Friston, James Kilner, and Lee Harrison. "A free energy principle for the brain". In: _Journal of Physiology-Paris_ 100.1-3 (2006), pp. 70-87.
* [22] Karl Friston. "What is optimal about motor control?" In: _Neuron_ 72.3 (2011), pp. 488-498.
* [23] Rajesh PN Rao and Dana H Ballard. "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects". In: _Nature neuroscience_ 2.1 (1999), pp. 79-87.
* [24] Rick A Adams, Stewart Shipp, and Karl J Friston. "Predictions not commands: active inference in the motor system". In: _Brain Structure and Function_ 218.3 (2013), pp. 611-643.
* [25] Kevin N Laland. "Social learning strategies". In: _Animal Learning & Behavior_ 32.1 (2004), pp. 4-14.
* [26] Peter M Krafft, Erez Shmueli, Thomas L Griffiths, Joshua B Tenenbaum, et al. "Bayesian collective learning emerges from heuristic social learning". In: _Cognition_ 212 (2021), p. 104469.
* [27] Manuel Baltieri and Christopher L Buckley. "Generative models as parsimonious descriptions of sensorimotor loops". In: _arXiv preprint arXiv:1904.12937_ (2019).
* [28] Cem Uran, Alina Peter, Andreea Lazar, William Barnes, Johanna Klon-Lipok, Katharine A. Shapcott, Rasmus Roese, Pascal Fries, Wolf Singer, and Martin Vinck. "Predictive coding of natural images by V1 firing rates and rhythmic synchronization". In: _Neuron_ 110.7 (2022), 1240-1257.e8. issn: 0896-6273. doi: [https://doi.org/10.1016/j.neuron.2022.01.002](https://doi.org/10.1016/j.neuron.2022.01.002). url: [https://www.sciencedirect.com/science/article/pii/S0896627322000022](https://www.sciencedirect.com/science/article/pii/S0896627322000022).
* [29] Karl Friston. "The free-energy principle: a rough guide to the brain?" In: _Trends in cognitive sciences_ 13.7 (2009), pp. 293-301.
* [30] Jakob Hohwy. "The self-evidencing brain". In: _Noas_ 50.2 (2016), pp. 259-285.
* [31] Karl Friston. "A free energy principle for a particular physics". In: _arXiv preprint arXiv:1906.10184_ (2019).
* [32] Bertrand Collignon, Axel Seguret, and Jose Halloy. "A stochastic vision-based model inspired by zebrafish collective behaviour in heterogeneous environments". In: _Royal Society open science_ 3.1 (2016), p. 150473.
* [33] Renaud Bastien and Pawel Romanczuk. "A model of collective behavior based purely on vision". In: _Science advances_ 6.6 (2020), eaay0792.
* [34] Karl Friston, Klaas Stephan, Baojuan Li, and Jean Daunizeau. "Generalised filtering". In: _Mathematical Problems in Engineering_ 2010 (2010).
* [35] Karl Friston, Jeremie Mattout, Nelson Trujillo-Barreto, John Ashburner, and Will Penny. "Variational free energy and the Laplace approximation". In: _Neuroimage_ 34.1 (2007), pp. 220-234.
* [36] Karl Friston and Stefan Kiebel. "Predictive coding under the free-energy principle". In: _Philosophical Transactions of the Royal Society B: Biological Sciences_ 364.1521 (2009), pp. 1211-1221.
* [37] Manuel Baltieri and Christopher L Buckley. "PID control as a process of active inference with linear generative models". In: _Entropy_ 21.3 (2019), p. 257.
* [38] Christopher L Buckley, Chang Sub Kim, Simon McGregor, and Anil K Seth. "The free energy principle for action and perception: A mathematical review". In: _Journal of Mathematical Psychology_ 81 (2017), pp. 55-79.
* [39] Jerome Buhl, David JT Sumpter, Iain D Couzin, Joe J Hale, Emma Despland, Edgar R Miller, and Steve J Simpson. "From disorder to order in marching locusts". In: _Science_ 312.5778 (2006), pp. 1402-1406.
* [40] Kolbjorn Tunstrom, Yael Katz, Christos C Ioannou, Cristian Huepe, Matthew J Lutz, and Iain D Couzin. "Collective states, multistability and transitional behavior in schooling fish". In: _PLoS computational biology_ 9.2 (2013), e1002915.
* [41] Irene Giardina. "Collective behavior in animal groups: theoretical models and empirical studies". In: _HFSP journal_ 2.4 (2008), pp. 205-219.
* [42] Conor Heins, Brennan Klein, Daphne Demekas, Miguel Aguilera, and Christopher L Buckley. "Spin glass systems as collective active inference". In: _Active Inference: Third International Workshop, IWAI 2022, Grenoble, France, September 19, 2022, Revised Selected Papers_. Springer. 2023, pp. 75-98.
* [43] Matthew MG Sosna, Colin R Twomey, Joseph Bak-Coleman, Winnie Poel, Bryan C Daniels, Pawel Romanczuk, and Iain D Couzin. "Individual and collective encoding of risk in animal groups". In: _Proceedings of the National Academy of Sciences_ 116.41 (2019), pp. 20556-20561.
* [44] Thomas Parr, Jakub Limanowski, Vishal Rawji, and Karl Friston. "The computational neurology of movement under active inference". In: _Brain_ (2021).
* [45] Iain D Couzin, Jens Krause, Nigel R Franks, and Simon A Levin. "Effective leadership and decision-making in animal groups on the move". In: _Nature_ 433.7025 (2005), pp. 513-516.
* [46] Iain D Couzin, Christos C Ioannou, Guven Demirel, Thilo Gross, Colin J Torney, Andrew Hartnett, Larissa Conradt, Simon A Levin, and Naomi E Leonard. "Uninformed individuals promote democratic consensus in animal groups". In: _science_ 334.6062 (2011), pp. 1578-1580.
* [47] Ariana Strandburg-Peshkin, Damien R Farine, Iain D Couzin, and Margaret C Crofoot. "Shared decision-making drives collective movement in wild baboons". In: _Science_ 348.6241 (2015), pp. 1358-1361.
* [48] Vivek H Sridhar, Liang Li, Dan Gorbonos, Mate Nagy, Bianca R Schell, Timothy Sorochkin, Nir S Gov, and Iain D Couzin. "The geometry of decision-making in individuals and collectives". In: _Proceedings of the National Academy of Sciences_ 118.50 (2021).
* [49] Ashkaan K Fahimipour, Michael A Gil, Maria Rosa Celis, Gabriel F Hein, Benjamin T Martin, and Andrew M Hein. "Wild animals suppress the spread of socially transmitted misinformation". In: _Proceedings of the National Academy of Sciences_ 120.14 (2023), e2215428120.
* [50] Allison Kolpas, Michael Busch, Hong Li, Iain D Couzin, Linda Petzold, and Jeff Moehlis. "How the spatial position of individuals affects their influence on swarms: a numerical comparison of two popular swarm dynamics models". In: _PloS one_ 8.3 (2013), e58525.
* [51] Andrew M Hein, Sara Brin Rosenthal, George I Hagstrom, Andrew Berdahl, Colin J Torney, and Iain D Couzin. "The evolution of distributed sensing and collective computation in animal populations". In: _Elife_ 4 (2015), e10955.
* [52] Heiko Hamann. "Evolution of collective behaviors by minimizing surprise". In: _Artificial Life Conference Proceedings_. MIT Press One Rogers Street, Cambridge, MA 02142-1209, USA journals-info... 2014, pp. 344-351.
* [53] Tanja Katharina Kaiser and Heiko Hamann. "Innate Motivation for Robot Swarms by Minimizing Surprise: From Simple Simulations to Real-World Experiments". In: _IEEE Transactions on Robotics_ 38.6 (2022), pp. 3582-3601.
* [54] Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, Giovanni Pezzulo, et al. "Active inference and learning". In: _Neuroscience & Biobehavioral Reviews_ 68 (2016), pp. 862-879.
* [55] Matthias H Hennig. "Theoretical models of synaptic short term plasticity". In: _Frontiers in computational neuroscience_ 7 (2013), p. 45.
* [56] Ariana Strandburg-Peshkin, Colin R Twomey, Nikolai WF Bode, Albert B Kao, Yael Katz, Christos C Ioannou, Sara B Rosenthal, Colin J Torney, Hai Shan Wu, Simon A Levin, et al. "Visual sensory networks and effective information transfer in animal groups". In: _Current Biology_ 23.17 (2013), R709-R711.
* [57] Jacob D Davidson, Matthew MG Sosna, Colin R Twomey, Vivek H Sridhar, Simon P Leblanc, and Iain D Couzin. "Collective detection based on visual information in animal groups". In: _Journal of the Royal Society Interface_ 18.180 (2021), p. 20210142.
* [58] Roy Harpaz, Minh Nguyet Nguyen, Armin Bahl, and Florian Engert. "Precise visuomotor transformations underlying collective behavior in larval zebrafish". In: _Nature communications_ 12.1 (2021), p. 6578.
* [59] Luis Gomez-Nava, Robert T Lange, Pascal P Klamser, Juliane Lukas, Lenin Arias-Rodriguez, David Bierbach, Jens Krause, Henning Sprekeler, and Pawel Romanczuk. "Fish shoals resemble a stochastic excitable system driven by environmental perturbations". In: _Nature Physics_ (2023), pp. 1-7.
* [60] Mahault Albarracin, Daphne Demekas, Maxwell JD Ramstead, and Conor Heins. "Epistemic communities under active inference". In: _Entropy_ 24.4 (2022), p. 476.
* [61] P Read Montague, Raymond J Dolan, Karl J Friston, and Peter Dayan. "Computational psychiatry". In: _Trends in cognitive sciences_ 16.1 (2012), pp. 72-80.
* [62] Ryan Smith, Paul Badcock, and Karl J Friston. "Recent advances in the application of predictive coding and active inference models within clinical neuroscience". In: _Psychi-arity and Clinical Neurosciences_ (2020). url: [https://onlinelibrary.wiley.com/doi/abs/10.1111/pcn.13138](https://onlinelibrary.wiley.com/doi/abs/10.1111/pcn.13138).
An active inference model of collective motion
Each agent within our model of collective motion maintains an internal model of its local environment represented by average distances to its neighbours. These distances are partitioned into \(L\) sensory sectors \(\mathbf{x}=x_{1},x_{2},...,x_{L}\), with each agent observing noisy versions of these distances through a corresponding sensory channel \(\mathbf{y}=y_{1},y_{2},...,y_{L}\). Each agent estimates the hidden distance variable(s) \(\mathbf{x}\) over time using its observed sensory states \(\mathbf{y}\). In practice, each agent implements this through a form of variational Bayesian inference developed for continuous data-assimilation in dynamic environments called _generalized filtering_, which can be seen as a variational, more flexible version of Kalman filters. This dynamic inference process entails updating posterior beliefs about \(\mathbf{x}\) using a gradient descent on variational free energy. In the case of Gaussian assumptions about observation and state noise, these free energy gradients resemble a precision-weighted average of sensory and state prediction errors. This comprises the state-estimation component of active inference and is unpacked in detail in Section A.1.
In addition to estimating the hidden distance variable with generalized filtering, each agent also changes its heading direction \(\mathbf{v}\) in order to minimize the same variational free energy functional. When the agent's model of the distance dynamics is strongly 'biased' by a prior belief that the steady-state value of the distance variable(s) \(\tilde{\mathbf{x}}\) hovers around a particular value \(\boldsymbol{\eta}\), then agents will change their heading in a way that appears like they 'want' to maintain this target distance between them and their neighbours. Concretely, this means they move closer to neighbors when the sensed distance \(\mathbf{y}\) is larger than expected, and move away from neighbors when \(\mathbf{y}\) is smaller than expected.
This symmetry between belief updating and action, as both following the gradients of the same loss function, is what theoretically distinguishes active inference from other continuous control schemes, which often use different objectives for estimation and control. In the following sections we detail the processes of state-estimation and action under active inference.
### Generalized filtering overview
Agents estimate hidden states \(x\) as the variational solution to a Bayesian inference problem; they achieve this in practice using an online-filtering algorithm known as generalized filtering [1, 2]. Generalized filtering is a generic Bayesian filtering scheme for non-linear state-space models formulated in generalized coordinates of motion [3]. It subsumes, as special cases, variational filtering [4], dynamic expectation maximization [5] and generalized predictive coding. This inversion scheme relies on a simple dynamical generative specification of hidden states \(x\) and how they relate to observations \(y\). The generative model starts by postulating that the time evolution of a variable \(x\) is given by a stochastic differential equation with the following form:
\[\frac{dx_{t}}{dt}=f(x_{t})+\omega_{t}\] (A.1)
where \(f\) is some deterministic flow function (i.e., a vector field) that depends on the current state \(x_{t}\), and \(\omega_{t}\) is a (smooth) additive Gaussian noise process. Under generalized filtering, we successively differentiate (A.1), to finesse the difficult computation of the _paths_ or trajectories of \(x_{t}\) locally in time, by instead focusing on the much easier problem of computing the serial derivatives of \(x_{t}\). This allows one to express a local trajectory of \(\vec{x}=\{x_{t},x_{t+1},\ldots,x_{t+T}\}\) in terms of the derivatives of \(x_{t}\), i.e., \(\tilde{x}_{t}=(x_{t}^{\prime},x_{t}^{\prime\prime},x_{t}^{\prime\prime\prime },\ldots,x_{t}^{[n]},\ldots)\), where \(x_{t}^{[n]}:=\frac{d^{n}}{dt^{n}}x_{t}\). We used the notation \(\tilde{x}_{t}\) to denote a vector of these higher orders of motion at time \(t\), a representation known as _generalized coordinates_. The equivalence between generalized coordinates and paths follows from Taylor's theorem, where the path of \(x\) around some time \(t\) can be expressed as a combination of its higher order derivatives:
\[x_{t+h}=x_{t}^{[0]}+\sum_{n=1}^{\infty}\frac{x^{[n]}}{n!}h^{n}\] (A.2)
Note that the (local in time) equality between a path \(\vec{x}\) and its Taylor series only holds when the sample paths of \(x_{t}\) are analytic functions, which itself requires \(f\) to be analytic and the noise process \(\omega_{t}\) to be analytic (in particular non-white noise fluctuations) [6]. Successively differentiating the base equation in (A.1) (and ignoring contributions of the flow of order higher than one) yields a series of stochastic differential equations that describe the evolution of each order of motion \(x_{t}^{[n]}\) as depending on its own state and the \(n^{\text{th}}\) derivative of the noise [3]:
\[\dot{x} =f(x)+\omega\] \[\dot{x}^{\prime} =f_{x}x^{\prime}+\omega^{\prime}\] \[\dot{x}^{\prime\prime} =f_{x}x^{\prime\prime}+\omega^{\prime\prime}\] \[\vdots\] \[\Rightarrow D\tilde{x} =\tilde{f}+\tilde{\omega}\]
where, following the notation used in [1, 2, 3], we use the notation \(f_{x}\) for the Jacobian (i.e., matrix of first order partial derivatives) of the flow function \(f\) evaluated at \(x\), i.e., \(Jf(x)\), and omit the time variable from our notation for conciseness. Note that the above construction assumes a local linearization of \(f\) around \(x\), thus ignoring the contribution of higher order terms to the flow. When \(f\) is itself a linear function, this approximation is exact because contributions of the higher orders vanish [3]. The \(D\) is the time derivative operator in generalised coordinates, with identity matrices along the first leading (block) diagonal and \(\tilde{f},\tilde{\omega}\) are the generalized flow function and generalized noises, respectively:
\[D=\begin{bmatrix}0&I&&\\ &\ddots&\ddots&\\ &&\ddots&I\\ &&&0\end{bmatrix}\qquad\tilde{f}=\begin{bmatrix}f(x^{[0]})\\ f_{x}x^{[1]}\\ \vdots\\ f_{x}x^{[n]}\end{bmatrix}\qquad\tilde{\omega}=\begin{bmatrix}\omega^{[0]}\\ \omega^{[1]}\\ \vdots\\ \omega^{[n]}\end{bmatrix}\]
Here, \(n\) is some chosen order at which to truncate the derivatives. This truncation means that the Taylor expansion of a path \(\tilde{x}\) in (A.2) is rendered an approximation. Having specified a dynamics over \(x\) (and its reformulation in generalized coordinates), we are in a position to specify the _observation model_. In generalized filtering, the generative model of state dynamics is supplemented with an observation model that maps hidden states \(x\) to their sensory consequences \(y\) via some (differentiable) sensory map \(g(x)\) and additive Gaussian smooth fluctuations \(z\):
\[y_{t}=g(x_{t})+z_{t}\] (A.3)
Like the states, we can similarly express observations in generalized coordinates by successively differentiating (A.3) to obtain a similar single expression for the generalized observation equation:
\[y =g(x)+z\] \[y^{\prime} =g_{x}x^{\prime}+z^{\prime}\] \[y^{\prime\prime} =g_{x}x^{\prime\prime}+z^{\prime\prime}\] \[\vdots\] \[\Rightarrow\tilde{y} =\tilde{g}+\tilde{z}\]
where here the \(i^{\text{th}}\) motion of observations \(y^{[i]}\) is not a function of itself but rather that of the motion of the (generalized) hidden states \(x^{[i]}\) and fluctuations \(z^{[i]}\). In other words, the motion of observations tracks the simultaneous motion of the states, subject to any nonlinearities in the sensory map \(g\) and the motion of the noise \(z\). Given Gaussian assumptions on the generalised noises \(\tilde{\omega}\) and \(\tilde{z}\), we can then write down the full hidden state and observation model \(p(\tilde{y},\tilde{x})\) as a joint Gaussian density:
\[D\tilde{x} =\tilde{f}+\tilde{\omega} \tilde{\omega} \sim\mathcal{N}(\tilde{\omega};\mathbf{0},\tilde{\Sigma}^{\omega})\] \[\tilde{y} =\tilde{g}+\tilde{z} \tilde{z} \sim\mathcal{N}(\tilde{z};\mathbf{0},\tilde{\Sigma}^{z})\] \[\implies p(\tilde{y},\tilde{x}) =p(\tilde{y}|\tilde{x})p(D\tilde{x}|\tilde{x})\] \[=\mathcal{N}(\tilde{y};\tilde{g},\tilde{\Sigma}^{z})\mathcal{N}( D\tilde{x};\tilde{f},\tilde{\Sigma}^{\omega})\] (A.4)
This joint Gaussian specification of the generative model enables derivation of efficient, online update rules for the sufficient statistics of approximate posterior beliefs that track the expected value of the generalised hidden state \(\tilde{x}\). This relies on a simple expression for the variational free energy of this Gaussian state-space model; as we will see in the following sections, this not only enables efficient state estimation (a.k.a, updating beliefs about hidden states \(\tilde{x}\)), but also algorithms for inferring generative model parameters.
### State estimation under generalized filtering
Generalized filtering relies on optimizing posterior beliefs in order to minimize _variational free energy_\(F\), an upper bound on the _surprise_ associated with observations \(y\) under some generative model \(m\):
\[F\geq\underbrace{-\ln p(y;m)}_{\text{surprise}}\] (A.5)
where the model \(m\) defines a joint distribution over observations and latent variables \(p(y,\vartheta)\). The latent variables themselves \(\vartheta\) are often split into hidden states \(x\) and parameters \(\theta\). Exact Bayesian inference entails obtaining the posterior distribution over latent variables \(p(\vartheta|y)\), which can be expressed using Bayes rule:
\[p(\vartheta|y) =\frac{p(y,\vartheta)}{p(y)}\] (A.6) \[p(y) \triangleq\int p(y,\vartheta)d\vartheta\] (A.7)
where hereafter we leave out the dependence on the model \(m\).
In order to compute the posterior exactly, one has to compute the marginal probability of observations \(p(y)\), also known as the marginal likelihood or model evidence. Computing the marginal likelihood is often intractable or difficult in practice, motivating the introduction of the variational bound, the free energy \(F\), also known as the (negative) evidence lower-bound or ELBO. This can be shown by writing \(F\) as the Kullback-Leibler divergence between some "variational" distribution \(q(\vartheta;\nu)\) over latent variables with parameters \(\nu\) and the true posterior \(p(\vartheta|y)\):
\[F =\mathbb{E}_{q}\left[\ln q(\vartheta)-\ln p(y,\vartheta)\right]\] \[=D_{KL}\left(q(\vartheta;\nu)||p(\vartheta|y)\right)\underbrace{ -\ln p(y)}_{\text{surprise}}\] (A.8) \[\implies F \geq-\ln p(y)\] (A.9)
The upper bound holds because the Kullback-Leibler divergence is always non-negative \(D_{KL}(p||q)\geq 0\). Intuitively, as the variational distribution \(q(\vartheta;\nu)\) better approximates the true posterior distribution \(p(\vartheta|y)\), where the (in)accuracy of the approximation is measured by the KL divergence, then the tighter the free energy bounds the surprise. This decomposition also makes clear why minimizing \(F\) with respect to variational parameters \(\nu\) is a way to update the variational distribution \(q\) to approximate the true posterior \(p(\vartheta|y)\). The variational distribution is thus often referred to as an approximate posterior, where the exact posterior obtained by applying Bayesian rule as in Equation (A.6) corresponds to the variational posterior that minimises \(F\).
Now we turn to deriving the Laplace-approximation to the variational free energy (VFE) for the Gaussian state-space models used in generalised filtering. The Laplace approximation is an analytically tractable way to approximate the true posterior with a Gaussian distribution, which simplifies inference to an online filtering algorithm that corresponds to minimizing a sum of squared prediction errors.
Recall that our goal is to perform inference on the latent variables \(\vartheta\) by optimizing an approximate posterior distribution \(q(\vartheta;\nu)\). In our case, we let \(\vartheta=\{x,\theta\}\) where \(x\) are hidden states and \(\theta\) encompass other generative model parameters (e.g., hyperparameters of the generative model like \(\tilde{f},\tilde{g},\tilde{\Sigma}^{z},\tilde{\Sigma}^{\omega}\)). For now we focus on inference over hidden states \(x\) and treat parameter inference later. The approximate posterior distribution \(q(x;\nu)\). Under the Laplace approximation we use a Gaussian distribution for the approximate posterior:
\[q(x;\nu)=N(x;\underbrace{\mu,\Sigma^{\nu}}_{\nu})\] (A.10)
where the variational parameters \(\nu\) are comprised of the sufficient statistics of a Gaussian distribution: the mean \(\mu\) and covariance \(\Sigma^{\nu}\). We add the subscript \(\nu\) to the variational variance to distinguish it from generative model covariances, e.g. \(\tilde{\Sigma}^{z},\tilde{\Sigma}^{\omega}\).
We can now arrive at a more specific expression for the variational free energy using the Gaussian form of the variational distribution. We start by decomposing the free energy into the sum of an expected energy term and a (negative) entropy, where the energy is defined as the negative log joint density over states and observations: \(-\ln p(x,y)\) and the negative entropy is that of the variational posterior i.e., \(\mathbb{E}_{q}[\ln q(x;\nu)]\):
\[F=\mathbb{E}_{q}\left[-\ln p(x,y)\right]-\frac{1}{2}\left[\ln|\Sigma|+d\ln 2\pi\right]\] (A.11)
where \(d\) is the dimensionality of \(x\) and the full term on the right follows from the entropy of a multivariate Gaussian: \(\mathrm{H}[\mathcal{N}(x;\mu,\Sigma)]=\frac{1}{2}\left[\ln|\Sigma|+d\ln 2\pi\right]\).
Additional assumptions allow one to further simplify the expected energy term \(\mathbb{E}_{q}\left[-\ln p(x,y)\right]\); namely, if we assume that the posterior is tightly peaked around the mean \(\mu\) and that \(p(x,y)\) is twice-differentiable in \(x\), we can motivate a \(2^{\mathrm{nd}}\)-order Taylor expansion of the expected energy term around its mode, i.e. when \(x=\mu\):
\[\mathbb{E}_{q}\left[-\ln p(x,y)\right] \approx\mathbb{E}_{q}\left[-\ln p(\mu,y)-\nabla_{x}\ln p(x,y) \bigg{|}_{x=\mu}(x-\mu)-\frac{1}{2}(x-\mu)^{\top}\nabla_{x}^{2}\ln p(x,y) \bigg{|}_{x=\mu}(x-\mu)\right]\] \[=-\ln p(\mu,y)-\frac{1}{2}\operatorname{tr}\left(\Sigma\nabla_{x }^{2}\ln p(x,y)\bigg{|}_{x=\mu}\right)\] (A.12)
Combining this approximation of the expected energy with the remaining terms in the variational free energy, we can now write the full expression of the Laplace-approximated free energy \(F_{L}\):
\[F_{L}=-\ln p(\mu,y)-\frac{1}{2}\operatorname{tr}\left(\Sigma\nabla_{x}^{2}\ln p(x,y )\bigg{|}_{x=\mu}\right)-\frac{1}{2}\left(\ln|\Sigma|+d\ln 2\pi\right)\] (A.13)
A useful feature of this expression is that the optimal variational covariance \(\Sigma^{\nu}\) can obtained by setting the derivative of \(F_{L}\) with respect to the covariance \(\Sigma\) equal to \(0\) and solving for \(\Sigma\), i.e. finding the values of the covariance that minimize the \(F_{L}\):
\[\frac{\partial F_{L}}{\partial\Sigma}=0\iff\Sigma^{\nu}=-\left(\nabla_{x}^{2 }\ln p(x,y)\bigg{|}_{x=\mu}\right)^{-1}\] (A.14)
i.e., the optimal variance of the variational distribution is the curvature of the Laplace energy around its mode. Substituting this expression back into the full free energy, we can then write an expression that only depends on the mean vector \(\mu\) of the variational density, since the variatonal variance \(\Sigma^{\nu}\) is now expressed as a function of the mean:
\[F_{L} =-\ln p(\mu,y)+\frac{1}{2}\underbrace{\operatorname{tr}\left( \Sigma^{\nu}(\Sigma^{\nu})^{-1}\right)}_{=d}-\frac{1}{2}\left(\ln|\Sigma^{\nu }|+d\ln 2\pi\right)\] \[=-\ln p(\mu,y)-\frac{1}{2}\left(\ln|\Sigma^{\nu}|+d\ln 2\pi\right)\] (A.15)
This means that the Laplace approximation to the variational free energy is a function of only the variational mean \(\mu\) and sensory observations \(y\), because the variational variance \(\Sigma^{\nu}\) is itself a function of \(\mu\). Belief updating then consists in minimizing the Laplace-approximated free energy \(F_{L}\) with respect to \(\mu\):
\[\dot{\mu}\propto-\nabla_{\mu}F_{L}(\mu,y)\] (A.16)
When the generative model \(p(x,y)\) is Gaussian, then the Laplace-approximated variational free energy is quadratic in \(\mu\) and \(y\), meaning that the updates to \(\mu\) can be written in terms of precision-weighted _prediction errors_, which score the difference between the expected observations (given the current value of \(\mu\)) and the actual observations \(y\). This notion of using prediction errors to estimate hidden quantities is also known as predictive coding [7, 8, 9]. The simple form of the belief updates derives from the fact that the energy term of the Laplace-approximated free energy \(-\ln p(\mu,y)\) can be written as a precision-weighted sum of (squared) prediction errors. To show this, we can consider a simple, static generative model where the prior over hidden states \(p(x)\) is a Gaussian density with mean \(\eta\) and covariance \(\Sigma^{\omega}\), and the observation model \(p(x|y)\) is a Gaussian density with mean \(g(x)\), i.e., some function of the hidden state:
\[y\sim N(g(x),\Sigma^{\bar{z}}),\quad x\sim N(\eta,\Sigma^{\omega}).\] (A.17)
Because the variational mean \(\mu\) only depends on the expected energy term of \(F_{L}\), we leave out the entropy term and can write out \(-\ln p(y,\mu)\) as a sum of precision-weighted prediction errors:
\[-\ln p(\mu,y) =-\ln p(y|\mu)-\ln p(\mu)\] \[=\frac{1}{2}\left[\varepsilon_{z}^{T}\Pi^{z}\varepsilon_{z}+ \varepsilon_{\omega}^{T}\Pi^{\omega}\varepsilon_{\omega}-\ln\left(|\Pi^{z}|| \Pi^{\omega}|\right)\right]\] \[\text{where }\Pi^{z}=(\Sigma^{z})^{-1},\ \ \Pi^{\omega}=(\Sigma^{ \omega})^{-1}\] \[\text{and }\varepsilon_{z}=y-g(\mu),\ \ \varepsilon_{\omega}=\mu-\eta\] (A.18)
We can write out gradients of this quadratic energy function to yield the update equation for the means \(\mu\) as in (A.16), and see that \(\mu\) changes as a precision-weighted sum of'sensory' and'model' prediction errors:
\[\dot{\mu} =-\nabla_{\mu}F_{L}(\mu,y)\] \[=-\nabla_{\mu}\left[\frac{1}{2}\left(\varepsilon_{z}^{T}\Pi^{z} \varepsilon_{z}+\varepsilon_{\omega}^{T}\Pi^{\omega}\varepsilon_{\omega} \right)\right]\] \[=-\left[\frac{\partial g}{\partial\mu}\Pi^{z}\varepsilon_{z}+\Pi^ {\omega}\varepsilon_{\omega}\right]\] (A.19)
Note that the variational means only depend on the terms of \(F_{L}\) containing \(\varepsilon_{z}\) and \(\varepsilon_{\omega}\), so that the update reduces to a gradient descent on a sum of squared prediction errors. This belief update scheme illustrates the key principles of predictive coding under the Laplace approximation: conditional means, denoted as \(\mu\), change as a function of precision-weighted prediction errors. The concept of precision-weighting in belief updating is intuitive: if the generative model attributes higher variance to sensory fluctuations as compared to state variance (i.e., \(\Pi^{z}<\Pi^{\omega}\)), then sensory data is relatively unreliable and consequently makes a smaller impact on posterior beliefs. Therefore, the adjustment to the posterior mean \(\mu\) in (A.19) is primarily influenced by the state prediction error term \(\Pi^{\omega}\varepsilon_{\omega}\) or the prior. Conversely, when sensory information is allocated higher precision (lower variance) relative to prior beliefs (i.e., \(\Pi^{z}>\Pi^{\omega}\)), belief updates will strongly rely on sensory data.
We apply the above steps to derive the Laplace-approximated free energy with a Gaussian posterior \(q(x;\nu)\) to the dynamical generative model in (A.4), which is by construction a joint Gaussian density. Note that we use the tilde notation to now indicate that all variables are vectors of generalised coordinates, e.g., \(\tilde{y},\tilde{x}\), etc. Showing only the \(\mu\)-dependent terms of Laplace energy term \(-\ln p(\tilde{\mu},\tilde{y})\approx\mathbb{E}_{q}\left[-\ln p(\tilde{x}, \tilde{y})\right]\):
\[F_{L} \propto\tilde{\varepsilon}_{z}^{T}\tilde{\Pi}^{z}\tilde{ \varepsilon}_{z}+\tilde{\varepsilon}_{\omega}^{T}\tilde{\Pi}^{\omega}\tilde{ \varepsilon}_{\omega}\] (A.20) \[\tilde{\varepsilon}_{z} \triangleq\tilde{y}-\tilde{y}\] \[\tilde{\varepsilon}_{\omega} \triangleq D\tilde{\mu}-\tilde{f}\]
Here, the so-called 'generalised errors' \(\tilde{\varepsilon}_{z}\) and \(\tilde{\varepsilon}_{\omega}\) encapsulate sensory and state prediction errors across orders of motion. Belief updating is again performed using a gradient descent on free energy, but the dynamic nature of inference necessitates an additional'motion' term:
\[\frac{d\tilde{\mu}}{dt} =D\tilde{\mu}-\nabla_{\tilde{\mu}}F_{L}\] \[=D\tilde{\mu}+\frac{\partial g}{\partial\tilde{\mu}}\tilde{ \xi}_{z}+\frac{\partial f}{\partial\tilde{\mu}}\tilde{\xi}_{\omega}-D^{\top} \tilde{\xi}_{\omega}\] \[\text{where}\;\;\tilde{\xi}_{z} =\tilde{\Pi}^{z}\tilde{\varepsilon}_{z}\] \[\tilde{\xi}_{\omega} =\tilde{\Pi}^{\omega}\tilde{\varepsilon}_{\omega}\] (A.21)
The additional term \(D\tilde{\mu}\) places the gradient descent within the context of the expected movement of the conditional means \(\tilde{\mu}\), and hence of the free energy minimum. This concept has been referred to as 'gradient descent in a moving frame of reference' [1]. This implies that free energy minimization does not occur when the beliefs cease moving, but rather when the belief update rate \(\frac{d\tilde{\mu}}{dt}\) is identical to the beliefs about the motion itself \(D\tilde{\mu}\), in other words when \(\frac{\partial F}{\partial\tilde{\mu}}=0\iff D\tilde{\mu}=\frac{d\tilde{\mu} }{dt}\). This additional temporal correction proves beneficial in a dynamic data assimilation regime, where incoming observations are integrated online with beliefs that are evolving according to their own prior dynamics [1].
### Active inference for continuous control
Active inference casts action or control as issuing from the same process of free energy minimization as used for state estimation; the only difference is that we now have an additional set of variables, actions \(a\), that can be changed to minimize free energy as well. The update equation for actions \(a\) closely resembles that used to update the variational mean \(\mu\), i.e., a gradient descent on the (Laplace-encoded) variational free energy:
\[\frac{da}{dt} =-\frac{\partial F_{L}(\mu,y(a))}{\partial a}\] \[=-\frac{\partial F_{L}}{\partial y(a)}\frac{\partial y(a)}{ \partial a}\] (A.22)
where we have now introduced a dependence between of observations \(y\) on actions \(a\). This allows us to express the free energy gradient with respect to action as the product
of the derivative of the free energy with respect to observations \(\nabla_{y}F_{L}(\mu,y(a))\) and the derivative of the function mapping from actions to observations \(\frac{\partial y(a)}{\partial a}\). The free energy gradient with respect to observations is exactly the sensory prediction error \(\nabla_{y}F_{L}(\mu,y(a))=\xi_{z}=\Pi(y-g(x))\). This assumed dependence of observations on actions underwrites the notion that active inference agents cannot directly measure how their actions affect hidden states, but may only do so via their sensory consequences. This has been speculated to explain the architecture of descending motor pathways in corticospinal systems, where motor commands are 'unpacked' into proprioceptive predictions at the level of spinal circuits and other lower motor nuclei. Action is thus realized by minimizing proprioceptive prediction errors via classical reflex arcs [10]. The reflex arc term \(\frac{\partial y(a)}{\partial a}\) of (A.22) is analogous to a forward model in motor control [11], because it reflects the agent's implicit assumptions about how the agent's own actions lead to their (anticipated) sensory consequences. This sort of update rule leads active inference agents to minimize sensory prediction errors via these 'baked-in' sensorimotor contingencies. In this way active inference has been referred to as 'action by self-fulfilling prophecy' [12]. In other words, the agent generates top-down expectations of 'preferred' sensory inputs, which then generates prediction errors which can then be suppressed through low-level motoric reflexes [10].
### Filtering and control for a self-propelled particle
Having derived a routine for state estimation and action through a generalized gradient flow on the Laplace-approximated variational free energy \(F_{L}\), we can now apply this to the simulation of collective motion. In what follows, we write down a sufficient generative model for a single self-propelled agent and unpack the corresponding free energy gradients ((A.20) and (A.22)) using the structure and parameters of the chosen generative model. In this section we unpack the per-agent generative model of local distances described in the main text and demonstrate how a more parametric, unconstrained version of social forces are reproduced by minimizing free energy with respect to the distance-tracking generative model.
#### a.4.1 A generalised filter for local distances and their time evolution
As described in the main text, each agent represents a an \(L\)-dimensional vector \(\mathbf{x}\) where \(\mathbf{x}=(x_{1},x_{2},...,x_{L})\).1 The agent not only represents the instantaneous value (or 'position') of \(\mathbf{x}\) but also its generalized motion, which we truncate at \(3^{\text{rd}}\) order:
Footnote 1: We use the bold notation \(\mathbf{x}\) to represent a vector-valued variable
\[\dot{\mathbf{x}} =f(\mathbf{x})+\mathbf{\omega}\] \[\dot{\mathbf{x}}^{\prime} =f_{\mathbf{x}}\mathbf{x}^{\prime}+\mathbf{\omega}^{\prime}\] \[\dot{\mathbf{x}}^{\prime\prime} =f_{\mathbf{x}}\mathbf{x}^{\prime\prime}+\mathbf{\omega}^{\prime\prime}\] \[\Rightarrow D\tilde{\mathbf{x}} =\tilde{\mathbf{f}}+\tilde{\mathbf{\omega}}\]
The flow at the first order \(f\) is a linear dynamical system with drift matrix \(A\) and fixed point with value \(\mathbf{\eta}\):
\[f(\mathbf{x})=-\mathbf{A}(\mathbf{x}-\mathbf{\eta})\] (A.23)
The eigenvalues of the \(L\times L\) matrix \(\mathbf{A}\) determine the rate at which the hidden states \(\mathbf{x}\) are assumed to relax to their expected value of \(\mathbf{\eta}\). In general, this matrix can be parameterized arbitrarily to encode different kinds of linear couplings among the different hidden states \(x_{1},x_{2},...,x_{L}\). In the present work we parameterize \(\mathbf{A}\) simply as a diagonal matrix with a single diagonal value \(\alpha>0\), which can also be expressed as an \(\alpha\)-scaled version of the identity matrix \(L\times L\) identity matrix \(I_{L}\):
\[\mathbf{A}=-\alpha I_{L}\] (A.24)
In combination with the amplitude of random fluctuations \(\Sigma^{\omega}\), \(\alpha\) determines how quickly the hidden states relax to their mean value of \(\mathbf{\eta}\).2 The generalised flow function \(\tilde{\mathbf{f}}\) can thus be written as a linear function of the generalised state \(\tilde{\mathbf{x}}\):
Footnote 2: Heuristically, it is an exponential decay rate.
\[\tilde{\mathbf{f}}=\begin{bmatrix}f(\mathbf{x})\\ f_{\mathbf{x}}\mathbf{x}^{\prime}\\ f_{\mathbf{x}}\mathbf{x}^{\prime\prime}\end{bmatrix} =-\begin{bmatrix}\mathbf{A}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{A}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{A}\end{bmatrix}\begin{bmatrix}\mathbf{x}-\mathbf{ \eta}\\ \mathbf{x}^{\prime}\\ \mathbf{x}^{\prime\prime}\end{bmatrix}\] \[=\begin{bmatrix}-\alpha I_{L}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&-\alpha I_{L}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&-\alpha I_{L}\end{bmatrix}\begin{bmatrix}\mathbf{x}- \mathbf{\eta}\\ \mathbf{x}^{\prime}\\ \mathbf{x}^{\prime\prime}\end{bmatrix}=-\alpha\begin{bmatrix}\mathbf{x}-\bm {\eta}\\ \mathbf{x}^{\prime}\\ \mathbf{x}^{\prime\prime}\end{bmatrix}\] (A.25)
where \(\mathbf{0}\) are \(L\times L\) matrices of zeros. We assume a multivariate Gaussian form for the generalized noises \(\tilde{\mathbf{\omega}}\), meaning the density over the generalized motion \(D\tilde{\mathbf{x}}\) is a Gaussian density, which we hereafter refer to as the 'dynamics model' or 'dynamical prior':
\[P(D\tilde{\mathbf{x}}|\tilde{\mathbf{x}})=N(D\tilde{\mathbf{x}};\tilde{ \mathbf{f}},\tilde{\Sigma}^{\mathbf{\omega}})\] (A.26)
Consistent with the block diagonal form of the generalised flow function \(\tilde{\mathbf{f}}\), we also assume the covariance of the generalized noises \(\tilde{\Sigma}^{\mathbf{\omega}}\) factorizes into a Kronecker product of'spatial' and 'temporal' covariance matrices, i.e.,
\[\tilde{\Sigma}^{\mathbf{\omega}}=\Sigma^{\mathbf{\omega}}\otimes\tilde{ \Sigma}^{\omega}\] (A.27)
where the spatial covariance \(\Sigma^{\mathbf{\omega}}\) (note the bold superscript \(\mathbf{\omega}\)) represents covariance between \(L\) noise processes at the zero-th order \(\mathbf{\omega}^{[0]}\), i.e., \(\Sigma^{\mathbf{\omega}}=\mathbb{E}[\mathbf{\omega}^{[0]}\otimes\mathbf{\omega}^{[0]}]\), and \(\tilde{\Sigma}^{\omega}\) encodes
covariance between different derivatives of the first order noise, i.e., \(\forall m,n:\left(\tilde{\Sigma}^{\omega}\right)_{nm}=\mathbb{E}[\omega^{[n]} \cdot\omega^{[m]}]\). The entries of this covariance matrix can be written in terms of the derivatives of the autocorrelation function of the random fluctuations evaluated at lag \(0\), \(\rho(0)\):
\[\rho(h) \triangleq(\Sigma^{\boldsymbol{\omega}})^{-1}\mathbb{E}[\omega^ {[0]}(\tau)\cdot\omega^{[0]}(\tau+h)]\] \[\Rightarrow\tilde{\Sigma}^{\omega} =\begin{bmatrix}1&0&\ddot{\rho}(0)\\ 0&-\ddot{\rho}(0)&0\\ \ddot{\rho}(0)&0&\ddot{\ddot{\rho}}(0)\\ &&&\ddots\end{bmatrix}\] (A.28)
The checkerboard structure in the matrix reflects the fact that fluctuations at the first order are orthogonal to their motion (first derivative), but anti-correlated with their \(2^{\text{nd}}\), \(4^{\text{th}}\),..., etc. derivatives. A derivation of the temporal covariance matrix from the autocorrelation function of the first-order fluctuations can be found in Appendix A.5.3 of [13]. In the generative models of our agents, we assume a Gaussian autocorrelation function with "smoothness" parameter \(\lambda_{\omega}\), which yields a simple parameterization of \(\tilde{\Sigma}^{\omega}\):
\[\rho(h) =e^{-\frac{h}{2\lambda_{\omega}}^{2}}\] (A.29) \[\Rightarrow\tilde{\Sigma}^{\omega} =\begin{bmatrix}1&0&-\frac{1}{2\lambda_{\omega}^{2}}&\cdots\\ 0&\frac{1}{2\lambda_{\omega}^{2}}&0&\\ -\frac{1}{2\lambda_{\omega}^{2}}&0&\frac{3}{4\lambda_{\omega}^{4}}&\\ \vdots&&\ddots\end{bmatrix}\] (A.30)
A higher value of \(\lambda_{\omega}\) dampens the variance of the generalised fluctuations at higher orders of differentiation. The correspondence of increasing \(\lambda_{\omega}\) to an increasingly-autocorrelated process at the first order becomes intuitive once we consider the case of standard white noise, i.e., the derivative of the Wiener process, whose higher orders of motion have infinite variance (the state of the process at a given time changes infinitely quickly). This ability to handle differentiable noise goes beyond the usual Markovian assumptions made in standard state space models (e.g., Kalman-Bucy filters), which assume that the driving noise is white.
We parameterize the \(L\times L\) spatial covariance \(\Sigma^{\boldsymbol{\omega}}\) through its precision matrix \(\Pi^{\boldsymbol{\omega}}\), as a diagonal matrix whose entries are given by a single precision (inverse variance) \(\Gamma_{\omega}\):
\[\Sigma^{\boldsymbol{\omega}}=(\Pi^{\boldsymbol{\omega}})^{-1}=\begin{bmatrix} \Gamma_{\omega}&0&0&\ldots\\ 0&\Gamma_{\omega}&0&\\ 0&0&\Gamma_{\omega}&\\ \vdots&&&\ddots\end{bmatrix}^{-1}\] (A.31)
The observation likelihood describes sensory observations \(\mathbf{y}=\{y_{1},y_{2},...,y_{L}\}\) as noise-perturbed copies of the hidden states \(\mathbf{x}\). We truncate generalized observations at second order, i.e., agents can sense the first order hidden state \(\mathbf{x}\) and its motion \(\mathbf{x}^{\prime}\):
\[\mathbf{y} =\mathbf{x}+\mathbf{z}\] \[\mathbf{y}^{\prime} =\mathbf{x}^{\prime}+\mathbf{z}^{\prime}\] (A.32)
This can be equivalently expressed as a linear function \(\tilde{\mathbf{g}}\) of the full generalised state \(\tilde{\mathbf{x}}=\{\mathbf{x},\mathbf{x}^{\prime},\mathbf{x}^{\prime\prime}\}\), where \(\tilde{\mathbf{g}}\) represents multiplication with a non-invertible matrix that discards acceleration information \(\mathbf{x}^{\prime\prime}\):
\[\tilde{\mathbf{y}} =\tilde{\mathbf{g}}+\tilde{\mathbf{z}}\] \[\begin{bmatrix}\mathbf{y}\\ \mathbf{y}^{\prime}\end{bmatrix} =\begin{bmatrix}I_{L}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&I_{L}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{x}^{\prime}\\ \mathbf{x}^{\prime\prime}\end{bmatrix}+\begin{bmatrix}\mathbf{z}\\ \mathbf{z}^{\prime}\end{bmatrix}\] (A.33)
We leverage the same assumptions about the sensory noises \(\tilde{\mathbf{z}}\) as we did for the state noises \(\tilde{\omega}\) to end up with the following multivariate Gaussian form for the observation model:
\[p(\tilde{\mathbf{y}}|\tilde{\mathbf{x}})=N(\tilde{\mathbf{y}};\tilde{\mathbf{ g}},\tilde{\Sigma}^{\mathbf{z}})\] (A.34)
We parameterize the likelihood model's sensory noises \(\tilde{\mathbf{z}}\) identically to the state noises \(\tilde{\boldsymbol{\omega}}\), namely using a spatial precision parameter \(\Gamma_{z}\) and temporal smoothness parameter \(\lambda_{z}\).
Having specified the dynamics and observation models in terms of Gaussian distributions, we can write out the full generative model as a joint Gaussian density over (generalized) hidden states and observations. We can furthermore define an approximate posterior over the hidden states \(\tilde{\mathbf{x}}\) that has a multivariate Gaussian form \(Q(\tilde{\mathbf{x}})=N(\tilde{\mathbf{x}};\tilde{\boldsymbol{\mu}};\Sigma^{ \nu})\), which can be summarized entirely in terms of its posterior mean vector \(\tilde{\boldsymbol{\mu}}\), due to the fact that under the Laplace approximation the variational covariance depends directly on the mean. From here, we can define the Laplace-approximated variational free energy for this generative model as proportional to a sum of squared prediction errors:
\[p(\tilde{\mathbf{y}},\tilde{\mathbf{x}}) =p(\tilde{\mathbf{y}}|\tilde{\mathbf{x}})p(D\tilde{\mathbf{x}}| \tilde{\mathbf{x}})\] \[=N(\tilde{\mathbf{y}};\tilde{\mathbf{g}},\tilde{\Sigma}^{ \mathbf{z}})N(D\tilde{\mathbf{x}};\tilde{f},\tilde{\Sigma}^{\boldsymbol{ \omega}})\] (A.35) \[F_{L} =\frac{1}{2}\left[\tilde{\boldsymbol{\varepsilon}}_{z}^{\top} \tilde{\Pi}^{\mathbf{z}}\tilde{\boldsymbol{\varepsilon}}_{z}+\tilde{ \boldsymbol{\varepsilon}}_{\omega}^{\top}\tilde{\Pi}^{\boldsymbol{\omega}} \tilde{\boldsymbol{\varepsilon}}_{\omega}-\ln\left(|\tilde{\Pi}^{\mathbf{z}}| |\tilde{\Pi}^{\boldsymbol{\omega}}||\Pi^{\nu}|\right)+3L\ln 2\pi\right]\] where \(\Pi^{\nu}\triangleq(\Sigma^{\nu})^{-1}\) \[\tilde{\boldsymbol{\varepsilon}}_{z} =\tilde{\mathbf{y}}-\tilde{\mathbf{g}}(\tilde{\boldsymbol{\mu}} )=\begin{bmatrix}\mathbf{y}-\boldsymbol{\mu}\\ \mathbf{y}^{\prime}-\boldsymbol{\mu}^{\prime}\end{bmatrix},\;\;\tilde{ \boldsymbol{\varepsilon}}_{\omega}=D\tilde{\boldsymbol{\mu}}-\tilde{\mathbf{ f}}(\tilde{\boldsymbol{\mu}})=\begin{bmatrix}\boldsymbol{\mu}^{\prime}+\alpha( \boldsymbol{\mu}-\boldsymbol{\eta})\\ \boldsymbol{\mu}^{\prime\prime}+\alpha\boldsymbol{\mu}^{\prime}\\ \alpha\boldsymbol{\mu}^{\prime\prime}\end{bmatrix}\] (A.36)
where the sensory prediction errors \(\tilde{\mathbf{\varepsilon}}_{z}\) score the difference between the generalized observations \(\mathbf{y},\mathbf{y}^{\prime}\) and their expected values \(\mathbf{\mu},\mathbf{\mu}^{\prime}\), and the model or process prediction errors \(\tilde{\mathbf{\varepsilon}}_{\omega}\) score the difference between the motion of the generalized means \(D\tilde{\mathbf{\mu}}\) and their expected motion \(\tilde{\mathbf{f}}(\tilde{\mathbf{\mu}})\), which has been expanded above using the linear form of the flow function detailed in (A.25). Note that here, due to the Laplace approximation, the generative model's expectation functions \(\tilde{\mathbf{g}},\tilde{\mathbf{f}}\) are evaluated at the variational mean \(\tilde{\mathbf{\mu}}\), rendering the variational beliefs a moving point-estimate of the hidden states \(\tilde{\mathbf{x}}\).
Filtering consists of updating \(\tilde{\mathbf{\mu}}\) as a generalized gradient flow on this energy functional \(F_{L}\) as in (A.21). To be explicit, below we expand these free energy gradients using the particular forms of \(\tilde{\mathbf{g}},\tilde{\mathbf{f}}\) used by our self-propelled particle agent:
\[\frac{d\tilde{\mathbf{\mu}}}{dt} =D\tilde{\mathbf{\mu}}-\nabla_{\tilde{\mathbf{\mu}}}F_{L}\] \[=D\tilde{\mathbf{\mu}}+\nabla_{\tilde{\mathbf{\mu}}}\tilde{\mathbf{g}} \cdot\tilde{\mathbf{\xi}}_{z}+\nabla_{\tilde{\mathbf{\mu}}}\tilde{\mathbf{f}}\cdot \tilde{\mathbf{\xi}}_{\omega}-D^{\top}\tilde{\mathbf{\xi}}_{\omega}\] \[\text{where }\,\tilde{\mathbf{\xi}}_{z} =\tilde{\Pi}^{\mathbf{\alpha}}\begin{bmatrix}\mathbf{y}-\mathbf{\mu}\\ \mathbf{y}^{\prime}-\mathbf{\mu}^{\prime}\end{bmatrix}\] \[\tilde{\mathbf{\xi}}_{\omega} =\tilde{\Pi}^{\mathbf{\alpha}}\begin{bmatrix}\mathbf{\mu}^{\prime}+ \alpha(\mathbf{\mu}-\mathbf{\eta})\\ \mathbf{\mu}^{\prime\prime}+\alpha\mathbf{\mu}^{\prime}\\ \alpha\mathbf{\mu}^{\prime\prime}\end{bmatrix}\] \[\nabla_{\tilde{\mathbf{\mu}}}\tilde{\mathbf{g}} =\begin{bmatrix}I_{L}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&I_{L}&\mathbf{0}\end{bmatrix}^{\top},\,\,\,\nabla_{\tilde{\mathbf{\mu}}} \tilde{\mathbf{f}}=\begin{bmatrix}-\alpha I_{L}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&-\alpha I_{L}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&-\alpha I_{L}\end{bmatrix}^{\top}\] (A.37)
This sort of filtering scheme means that the agent's beliefs \(\tilde{\mathbf{\mu}}\) will evolve as a moving average of incoming sensory data \(\tilde{\mathbf{y}}\) subject to a dynamical bias or "drag", which is a consequence of the latent belief that hidden states \(\mathbf{x}\) continuously relax towards a fixed point at \(\mathbf{\eta}\). Specifically, the beliefs are constantly pulled closer to the data in order to minimize sensory prediction errors \(\tilde{\mathbf{\xi}}_{z}\); however, this process itself incurs state prediction errors \(\tilde{\mathbf{\xi}}_{\omega}\) that will pull the beliefs back towards the fixed point. This constant tug of war between sensory and process prediction errors can be shifted disproportionately in one direction by adjusting the relative precisions of the likelihood vs. dynamical models, respectively. If the process precision \(\tilde{\Pi}^{\mathbf{\omega}}\) is high relative to the observation precision \(\tilde{\Pi}^{\mathbf{\mathbf{z}}}\), then the beliefs will tend to their expected fixed point of \(\mathbf{\eta}\). A similar enhancement of prior bias can be achieved by increasing the drift rate \(\alpha\) of the dynamics model, which increases the force driving \(\mathbf{\mu}\) towards \(\mathbf{\eta}\) -- this was the approach taken in [14], for example.
Note that when numerically integrating the differential equation in (A.37) with a forwards Euler scheme, one uses a finite number of iterations to update the variational means \(\tilde{\mathbf{\mu}}\), which we term \(n_{\text{Inferlter}}\), and a step-size \(\kappa_{\mu}\) which scales the size of the increment to \(\tilde{\mathbf{\mu}}\)[12]. In all simulations shown here, we set \(n_{\text{Inferlter}}=1,\kappa_{\mu}=0.1\) (see Table E.1 for details).
#### a.4.2 Closing the loop with observations and action
In order to interpret the random variables of the generative model as representing behaviorally-relevant features of an agent's world, we now turn to specifying the _generative process_, i.e., the actual physics of the world that our self-propelled particle agents will inhabit. In this section we detail how the observations \(\tilde{\mathbf{y}}\) for a single agent are generated from the positions and velocities of other active inference agents, and how actions can be generated through _active inference_, which in this contexts means changing continuous control variables using a gradient descent on the same free energy used to derive the belief update equations of the previous section.
We now shift our perspective to that of a single agent, hereafter referred to as the _focal individual_ or _focal agent_, and specify how its sensory data \(\tilde{\mathbf{y}}\) are generated. We start by describing univariate hidden states and corresponding observations, where the true hidden variable is an average nearest-neighbor distance \(x_{h}\). We add the \(h\) subscript to distinguish these'real' variables (hidden states, observations, noise terms) from their representations in the generative model (e.g., \(\tilde{x}\), \(\tilde{y}\)).
We indicate the focal individual with index \(i\); so the agent \(i\)-relative hidden state \(x_{h,i}\) denotes the average nearest-neighbor distance from the perspective of agent \(i\). This average distance \(x_{h,i}\) is calculated from the \(K\) neighbors that form the interaction set \(N_{in}\) of the \(i^{\text{th}}\) focal individual. How to define the interaction set \(N_{in}\) is a choice to make in each simulation, but for the case of recapitulating classical, distance-dependent social forces models, we define \(N_{in}\) as those neighbors that are within a fixed distance \(R_{0}\) of the focal individual's position:
\[x_{h,i} \triangleq\frac{1}{K}\sum_{j\in N_{in}}\|\Delta\mathbf{r}_{ij}\|\] \[\text{where}\;\;N_{in} \triangleq\{j\neq i\,:\,\|\Delta\mathbf{r}_{ij}\|\leq R_{0}\}\] (A.38) \[K \triangleq|N_{in}|\] \[\Delta\mathbf{r}_{ij} \triangleq\mathbf{r}_{j}-\mathbf{r}_{i}\] (A.39)
An additional filter on \(N_{in}\) that is common to self-propelled particle models, is to only include neighbors that subtend some angular extent (also known as a 'vision cone' or 'visual field') relative to the focal agent's velocity vector \(\mathbf{v}_{i}\). This is the approach taken in [15], for instance, and in the simulations examined in the main text we do the same.
The vector \(\mathbf{r}_{i}\) denotes the 2-D coordinate of the focal agent, and \(\mathbf{r}_{j}\) is that of neighbor \(j\). \(\mathbf{r}_{ij}\) thus represents the relative displacement vector of neighbour \(j\), from the perspective of the focal agent \(i\).
We also define the first temporal derivative of the local average distance \(x^{\prime}_{h,i}\):
\[\tilde{x}_{h,i} \triangleq(x_{h,i},x^{\prime}_{h,i})\] \[x^{\prime}_{h,i} \triangleq\frac{dx_{h,i}}{dt}=\nabla_{\mathbf{r}_{i}}x_{h,i}\cdot \mathbf{v}_{i}+\sum_{j\in N_{in}}\left(\nabla_{\mathbf{r}_{j}}x_{h,i}\cdot \mathbf{v}_{j}\right)\] (A.40)
where \(\mathbf{v}_{j}\) is the velocity or heading vector of neighbour \(j\). The expression in (A.40) means that we can compute the first derivative or velocity of the distance \(x^{\prime}_{h,i}\) as a function of the positions and velocities of all agents, as opposed to some discrete-time approximation, e.g., \(x^{\prime}_{h,i}\approx\frac{x_{h,i}(t+\Delta t)-x_{h,i}(t)}{\Delta t}\) for some small \(\Delta t\). Note that this expression for \(x^{\prime}_{h,i}\) assumes a local linearization of \(x_{h,i}\) at the radius defined by \(R_{0}\), i.e., this linearization will be a poor predictor of the actual change in the state \(x_{h,i}(t+\Delta t)-x_{h,i}(t)\) when neighbors are instantaneously leaving or entering the interaction set \(N_{in}\). Observations \(\tilde{y}_{h,i}\) are perturbed versions of the hidden states with additive generalised fluctuations \(\tilde{z}_{h,i}\):
\[y_{h,i} =x_{h,i}+z_{h,i}\] \[y^{\prime}_{h,i} =x^{\prime}_{h,i}+z^{\prime}_{h,i}\] \[\text{where}\quad p(\tilde{z}_{h,i}) =N(\tilde{z}_{h,i};\mathbf{0},\tilde{\Sigma}_{z,h})\] (A.41)
In all simulations we parameterize the \(\tilde{z}_{h,i}\) as independent Gaussian variables, i.e.,
\[\tilde{\Sigma}_{z,h}=\begin{bmatrix}\sigma_{z,h}^{2}&0\\ 0&\sigma_{z^{\prime},h}^{2}\end{bmatrix}\] (A.42)
where the two variances \(\sigma_{z,h}^{2}\) and \(\sigma_{z^{\prime},h}^{2}\) can be set independently. The 'perception' step of our active inference process proceeds by providing these observations to the filtering equations in (A.37). The result is that posterior means \(\tilde{\mu}\) appears to track \(\tilde{x}_{h,i}\) over time, while additionally estimating its higher-order motion (acceleration) via \(\mu^{\prime\prime\prime}\).
Finally, we now furnish a scheme for updating actions by mapping the control variables \(a\) and sensorimotor contingency terms of (A.22) to the case of our distance-tracking self-propelled agent.
We let actions be identifiable with the heading vector \(\mathbf{v}_{i}\) of the focal individual, i.e., \(a=\mathbf{v}_{i}\). For the simulations presented in the current paper, we always asserted that this heading have unit magnitude, but in general this constraint is not necessary.
Given this definition of actions, we can unpack the sensorimotor contingency term \(\frac{\partial y(a)}{\partial a}\) that appeared in the active inference control equation of (A.22), now letting \(a=\mathbf{v}\) and turning partial derivatives into Jacobians to account for vectorial nature of actions (being a velocity in 2-D) and observations (being comprised of two generalized coordinates):
\[\frac{d\mathbf{v}_{i}}{dt}=-\nabla_{\mathbf{v}_{i}}\tilde{y}_{h,i}(\mathbf{v} _{i})^{\top}\nabla_{\tilde{y}_{h,i}(\mathbf{v}_{i})}F_{L}\] (A.43)
Note here that observations \(\tilde{y}_{h,i}\) are a function of actions; this is because observations are a linear function of hidden states, which themselves are linear in the velocity vector of the focal individual \(\mathbf{v}_{i}\) via the relation in (A.40). Importantly, however, the distance observation \(y_{h,i}\) does not directly depend on the \(\mathbf{v}_{i}\) -- only the distance velocity \(y^{\prime}_{h,i}\) does. This means the sensorimotor contingency in (A.43) is comprised of non-zero partial derivatives only for \(y^{\prime}_{h,i}\):
\[\nabla_{\mathbf{v}_{i}}\tilde{y}_{h,i}(\mathbf{v}_{i})=\begin{bmatrix}\nabla_{ \mathbf{v}_{i}}y_{h,i}(\mathbf{v}_{i})\\ \nabla_{\mathbf{v}_{i}}y^{\prime}_{h,i}(\mathbf{v}_{i})\end{bmatrix}= \begin{bmatrix}\mathbf{0}\\ \nabla_{\mathbf{r}_{i}}x_{h,i}\end{bmatrix}\] (A.44)
This has an important consequence for action, when we consider the form of the second part of the action update in (A.43), the free energy gradient term \(\nabla_{\tilde{y}_{h,i}}F_{L}\):
\[\nabla_{\tilde{y}_{h,i}}F_{L}=\tilde{\xi}_{z}=\tilde{\Pi}_{z}\tilde{\varepsilon }_{z}=\begin{bmatrix}\Gamma_{z}(y_{h,i}-\mu)\\ 2\Gamma_{z}\lambda_{z}^{2}(y^{\prime}_{h,i}-\mu^{\prime})\end{bmatrix}\] (A.45)
The free energy gradient with respect to observations is simply the generalized (precision-weighted) sensory error \(\tilde{\xi}_{z}\), which we have written in terms of the observations \(\tilde{y}_{h,i}\), posterior beliefs \(\tilde{\mu}\) and precision parameters \(\Gamma_{z},\lambda_{z}\). The sparse form of the sensorimotor contingency in (A.44) means that the \(0^{\text{th}}\)-order prediction error \(\xi_{z}\) will have no effect on behavior and only the velocity prediction errors \(\xi^{\prime}_{z}\) will be relevant for the update to \(\mathbf{v}_{i}\), i.e.,
\[\frac{d\mathbf{v}_{i}}{dt} =-\left(\xi_{z}\underbrace{\nabla_{\mathbf{v}_{i}}y_{h,i}( \mathbf{v}_{i})}_{=\mathbf{0}}+\xi^{\prime}_{z}\nabla_{\mathbf{v}_{i}}y^{ \prime}_{h,i}(\mathbf{v}_{i})\right)\] \[=-\xi^{\prime}_{z}\nabla_{\mathbf{r}_{i}}x_{h,i}\] \[=2\Gamma_{z}\lambda_{z}^{2}(y^{\prime}_{h,i}-\mu^{\prime})\Delta \hat{\mathbf{r}}\] \[\text{where}\,\,\,\Delta\hat{\mathbf{r}} =\frac{1}{K}\sum_{j\in N_{in}}\frac{\Delta\mathbf{r}_{ij}}{\| \Delta\mathbf{r}_{ij}\|}\] (A.46)
Note that, as for the inference update in (A.37), we update \(\mathbf{v}_{i}\) using a fixed number of action iterations \(n_{\text{ActionIter}}\) and step-size \(\kappa_{a}\), where here we set \(n_{\text{ActionIter}}=1,\kappa_{a}=0.1\). This action update equation has a few key implications for the behavior of active inference agents equipped with this type of generative model, and its relationship to 'classical' self-propelled particle models like the Couzin-Aoki model and the Reynolds or BOIDS model [15, 16, 17]. The first is the fact that the sensorimotor contingency is identical to the'social force' vector used to drive interactions in self-propelled particle models \(\Delta\hat{\mathbf{r}}\); this the average of the vectors pointing from each neighbor in the interacting set to the focal agent's position \(\mathbf{r}_{i}\). The sign of the precision-weighted prediction error \(\epsilon^{\prime}_{z}\) determines whether the social force is attractive (pointing towards other agents) or repulsive (pointing away from other agents). Secondly, the fact that actions only depend on velocity observations, rather than state observations, means that agents will adjust their heading according to how the (sensed) distance is instantaneously changing (its velocity), rather than its value. This lends action a predictive, anticipatory power and accounts for why we observe robust polarized motion in the absence of an explicit alignment term like in classic self-propelled particle models [15, 18]. The alignment-like forces emerges from the fact that the velocity vectors of other agents
\(v_{j},j\in N_{in}\) are integrated into the computation of \(y^{\prime}_{h,i}\) via the relation in the second line of (A.40).
One of the defining features of other self-propelled particle models like the Couzin-Aoki model [15, 16] is the presence and priorization of interaction zones. The two main zones used in these models, and which on their own are sufficient for group cohesion, are a narrow repulsion zone defined by some radius \(r_{r}\) and a wider attraction zone with radius \(r_{a}\), where \(r_{a}>r_{r}\). Neighboring agents within the repulsive radius exert repulsive forces on the focal agent, while those beyond the repulsion radius but within the attraction zone exert attractive forces, where the difference between attraction and repulsion is given by the sign of the force vector \(\Delta\hat{\mathbf{r}}\). The active inference model leads to an effective notion of zones, but rather than being explicitly encoded, these zones emerge through the fixed-point attractor \(\eta\) parameterizing the generative model's dynamics model \(\mathbf{f}\). This is made clear when we examine the precision-weighted prediction error \(\xi^{\prime}_{z}\), which itself is a function of velocity observations \(y^{\prime}_{h,i}\) and velocity beliefs \(\mu^{\prime}\). Consider the limiting case of when inference is strongly biased by the dynamics model \(\mathbf{f}\) (i.e., in the case that \(\Gamma_{\omega}>\Gamma_{z}\) or large \(\alpha\)); the generalised beliefs \(\tilde{\mu}\) will be strongly drawn to the setpoint \(\eta\) of the dynamics prior, i.e.,
\[\tilde{\mu}=\begin{bmatrix}\mu\\ \mu^{\prime}\end{bmatrix}\approx\begin{bmatrix}\eta\\ 0\\ 0\end{bmatrix}\]
Under this assumption, the precision-weighted prediction error \(\xi^{\prime}_{z}\) approximates \(2\Gamma_{z}\lambda_{z}^{2}y^{\prime}_{h,i}\), and thus signals whether neighbors are instantaneously approaching or moving away from the focal agent, where \(\xi^{\prime}_{z}<0\) indicates they are approaching and \(\xi^{\prime}_{z}>0\) indicates they are moving away. This in turn determines whether the update to the focal agent's action \(\mathbf{v}_{i}\) is repulsive or attractive, as its sign determines the direction of the social force vector \(\Delta\hat{\mathbf{r}}\). Although the first order distance \(y_{h,i}\) does not directly drive action, it does so indirectly through its effect on inference of \(\mu^{\prime}\). If we consider the case when the sensed distance \(y_{h,i}\) drops below the setpoint \(\eta\), then one can reason through the cascade of prediction errors that ultimately lead to a repulsive force. As a direct consequence of a drop in \(y_{h,i}\) below \(\mu\), sensory prediction errors \(\xi_{z}\) will become negative, whose minimization will require \(\mu\) to move below \(\eta\). This process in turn incurs slower-moving (negative) model prediction errors \(\xi_{\omega}\), whose minimization drives \(\mu\) back to its fixed point of \(\eta\), given the dynamic constraint for the beliefs to relax to their fixed point. In order to accomplish this upward movement of \(\mu\), either the rate of change of \(\mu\) or the sensed distance itself must be positive, i.e., \(\dot{\mu}>0\) or \(y_{h,i}>0\). In the absence of positive \(y_{h,i}\), model prediction errors will drive \(\mu^{\prime}\) (and hence \(\dot{\mu}\)) above \(0\). This temporarily sets a larger radius of repulsion, i.e., a larger range of \(y^{\prime}_{h,i}\) for which \(\xi^{\prime}_{z}\) is negative and for which repulsive forces impact the focal agent's velocity. This causes the agent to move away from its neighbors and thus further increase \(y_{h,i}\), under the assumption that the agent's prediction of the distance dynamics are correlated with the true change in \(x_{h,i}\). Belief updating and action thus work together to accelerate the return of \(\mu\) towards \(\eta\) and \(\tilde{\xi}_{z},\tilde{\xi}_{\omega}\) towards \(0\); for this reason active inference is often described as an account of action and perception driven by'self-fulfilling prophecy' [12].
In order to imbue action with a more direct coupling to the neighbors' distances as is done in the classical self-propelled particle models, rather than the velocity of the distance, one could hand-craft the sensorimotor contingency term \(\nabla_{\mathbf{v}_{i}}\tilde{y}_{h,i}\) to enforce a coupling between \(y_{h,i}\) and \(\mathbf{v}_{i}\). This would render the action rule equivalent to a'soft'-form of PD control [14], where errors on both the first order state \((y_{h,i}-\mu)\approx(y_{h,i}-\eta)\) and its derivative \((y^{\prime}_{h,i}-\mu^{\prime})\approx y^{\prime}_{h,i}\) would drive changes to the velocity.
### Extending to multiple sensory sectors
The results of the previous sections can be straightforwardly extended to the multivariate case as explored in the main text. The focal agent now senses the local distance computed across a set of distinct sensory sectors. For the model explored in the current work, we split up the computation of the local distance variable into a set of \(L\) sensory adjacent sectors that comprise an arc of a given angle, relative to the agent's heading vector \(\mathbf{v}_{i}\). We define the multivariate distance hidden state as follows (dropping the focal agent index \(i\) from the sector-specific hidden states to avoid subscript overload):
\[\mathbf{x}_{h,i} =\begin{bmatrix}x_{h,1}\\ x_{h,2}\\ \vdots\\ x_{h,L}\end{bmatrix}\] (A.47) \[\text{where}\;\;x_{h,l} \triangleq\frac{1}{K_{l}}\sum_{j\in N_{l}}\|\Delta\mathbf{r}_{ ij}\|\]
where \(N_{l}\) is the set of neighbors in the \(l^{\text{th}}\) sensory sector, and \(K_{l}=|N_{l}|\) (c.f., (A.39)). As with the scalar hidden state defined above, we also equip the vector of sector distances \(\mathbf{x}_{h,i}\) with corresponding sector-specific, generalized observations \(\tilde{\mathbf{y}}_{h,i}\), i.e.
\[\mathbf{y}_{h,i} =\mathbf{x}_{h,i}+\mathbf{z}_{h,i}\] (A.48) \[\mathbf{y}^{\prime}_{h,i} =\mathbf{x}^{\prime}_{h,i}+\mathbf{z}^{\prime}_{h,i}\] \[\text{where}\quad p(\tilde{\mathbf{z}}_{h,i}) =N(\tilde{\mathbf{z}}_{h,i};\mathbf{0},\tilde{\Sigma}_{\mathbf{z},h})\] (A.49)
such that the focal individual now observes a vector of local (noise-perturbed) distances and their first orders of motion. Note that the generalized covariance matrix here \(\tilde{\Sigma}_{\mathbf{z},h}\) is now a \(2L\times 2L\) size matrix, that encodes the covariance structure between sector-specific noise and their generalized orders. For all simulations we generated uncorrelated noise across the different sectors, although spatially-smooth noise could be modelled by introducing off diagonal elements in \(\tilde{\Sigma}_{\mathbf{z},h}\), i.e., \(\mathbb{E}[z_{h,l}z_{h,k}]\neq 0\).
The agent's generative model is also extended to the multivariate state-space formulation we began with, using a vector of generalised hidden states \(\tilde{\mathbf{x}}=(\tilde{x}_{1},\tilde{x}_{2},...,\tilde{x}_{L})\) to estimate
the local distance within each sensory sector. Belief-updating consists in updating a vector of generalised means \(\tilde{\mathbf{\mu}}\) through integration of (A.37).
The action update has an identical form as before, except now the sensorimotor contingency term \(\nabla_{\mathbf{v}_{i}}\tilde{\mathbf{y}}_{h,i}(\mathbf{v_{i}})\) is a collection of partial derivative vectors, one for each sensory sector:
\[\frac{d\mathbf{v}_{i}}{dt}=-\nabla_{\mathbf{v}_{i}}\tilde{\mathbf{y}}_{h,i}( \mathbf{v}_{i})^{\top}\nabla_{\tilde{\mathbf{y}}_{h,i}}F_{L}\]
\[\nabla_{\mathbf{v}_{i}}\tilde{\mathbf{y}}_{h,i}(\mathbf{v_{i}})=\begin{bmatrix} \mathbf{0}\\ \vdots\\ \mathbf{0}\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\ \vdots\\ \mathbf{0}\end{bmatrix}\] (A.50)
The last \(L\) rows of this Jacobian matrix encode the gradients of the sector-specific distance velocities \(y^{\prime}_{h,l}\) with respect to the focal agent's action; these partial derivatives are vectors pointing from the average position of the neighbors in sector \(l\) towards the focal individual. When we combine the Jacobian matrix in (A.50) with the sensory prediction error term \(\tilde{\mathbf{y}}_{h,i}\) (i.e., the free energy gradients \(\nabla_{\tilde{\mathbf{y}}_{h,i}}F_{L}\)), we are left with the following update for the velocity:
\[\frac{d\mathbf{v}_{i}}{dt} =\mathbf{\xi}^{\prime}_{z}\cdot\Delta\hat{\mathbf{R}}=\begin{bmatrix} \xi^{\prime}_{z,1}&\xi^{\prime}_{z,2}&\ldots&\xi^{\prime}_{z,L}\end{bmatrix} \cdot\begin{bmatrix}\Delta\hat{\mathbf{r}}_{1}\\ \Delta\hat{\mathbf{r}}_{2}\\ \vdots\\ \Delta\hat{\mathbf{r}}_{L}\end{bmatrix}\] (A.51) \[=\sum_{l=1}^{L}\xi^{\prime}_{z,l}\Delta\hat{\mathbf{r}}_{l}=2 \Gamma_{z}\lambda_{z}^{2}\sum_{l=1}^{L}(y^{\prime}_{h,l}-\mu^{\prime}_{l}) \Delta\hat{\mathbf{r}}_{l}\] \[\text{where}\;\;\Delta\hat{\mathbf{r}}_{l} =\frac{1}{K_{l}}\sum_{j\in N_{l}}\frac{\Delta\mathbf{r}_{ij}}{|| \Delta\mathbf{r}_{ij}||}\]
The action thus becomes a weighted sum of'sector-vectors' \(\Delta\hat{\mathbf{r}}_{l}\), which are vectors pointing from the focal agent's position \(\mathbf{r}_{i}\) towards the average position of the neighbors in \(N_{l}\). The weights that scale each \(\Delta\hat{\mathbf{r}}_{l}\) are the precision-weighted prediction errors associated with velocity observations emanating from the appropriate sector \(\xi^{\prime}_{z,l}\propto(y^{\prime}_{h,l}-\mu^{\prime}_{l})\). The fact we can pull the spatiotemporal precision terms \(2\Gamma_{z}\lambda_{z}^{2}\) outside the sum over sector-vectors, inherits from a between-sector independence assumption, built into the agent's sensory likelihood model \(P(\tilde{\mathbf{y}}|\tilde{\mathbf{x}})\) (see (A.31)). If the generative model allowed for between-sector correlations
(i.e. \(\Sigma^{\mathbf{z}}\) was not diagonal), then the action update would include cross-terms that couple prediction errors from one sector to the sector-vector from another sector.
An active inference agent equipped with such a multivariate representation of the local neighbor-distances thus engages in a sort of 'predictive balancing-act', differentially responding more or less to each part of its sensory field in accordance with how much sensations deviate from their posterior expectations \(\mu^{\prime}_{l}\), where the sign and degree of this deviation is scored by \(\xi^{\prime}_{z,l}\).
## Appendix B Alignment forces from active inference on angles
In previous sections we have shown how repulsive and attractive forces emerge from active inference models in which the agent entertains a latent representation of the average local distance between itself and its neighbors, and how its heading direction couples to (the derivative of) that variable. In this section we derive alignment-based social forces, like those that appear in the Reynolds, Couzin, and Vicsek models [15, 17, 18], as a special case of active inference, where an agent infers the (cosine) angle between its own heading and that of its neighbors, and acts under the prior belief that this angle tends to 0.
As before, we start with a generative model that represents a generalised latent variable \(\tilde{x}_{\phi}\) that evolves in time with Gaussian additive fluctuations \(\tilde{\omega}_{\phi}\). We use the \(\phi\) subscript to distinguish this angle-tracking latent variable from the distance-tracking variable of the previous section. We truncate the generalized representation of this state at second order, i.e. \(\tilde{x}_{\phi}=\{x_{\phi},x^{\prime}_{\phi}\}\), leading to a dynamical equation and corresponding likelihood of the following form:
\[\dot{x}_{\phi} =-\alpha_{\phi}(x_{\phi}-1)+\omega_{\phi}\] \[\dot{x}^{\prime}_{\phi} =-\alpha_{\phi}x^{\prime}_{\phi}+\omega^{\prime}_{\phi}\] \[\implies p(D\tilde{x}_{\phi}|\tilde{x}_{\phi}) =\mathcal{N}(D\tilde{x}_{\phi};\tilde{f}_{\phi},\tilde{\Sigma}_{ \omega_{\phi}})\] (B.52) \[\text{where }\tilde{f}_{\phi} =\begin{bmatrix}-\alpha_{\phi}(x_{\phi}-1)\\ -\alpha_{\phi}x^{\prime}_{\phi}\end{bmatrix},\ \ \tilde{\Sigma}_{\omega_{\phi}}= \begin{bmatrix}\sigma^{2}_{\omega_{\phi}}&0\\ 0&\sigma^{2}_{\omega^{\prime}_{\phi}}\end{bmatrix}\] (B.53)
The observation model describes a mapping from the \(0^{\text{th}}\)-order state to a corresponding observation thereof, perturbed again by Gaussian innovations:
\[y_{\phi} =x_{\phi}+z_{\phi}\] \[\implies p(y_{\phi}|x_{\phi}) =\mathcal{N}(y_{\phi};x_{\phi},\sigma^{2}_{z_{\phi}})\] (B.54)
Following the same steps as we did previously for the multivariate, distance-tracking generative model, we can write down the Laplace-approximated variational free energy of this model as a quadratic function of the observations and generalized means \(\tilde{\mu}_{\phi}\):
\[F_{L} \propto\varepsilon_{z_{\phi}}^{\top}\Pi_{z_{\phi}}\varepsilon_{z_{ \phi}}+\tilde{\varepsilon}_{\omega_{\phi}}^{\top}\tilde{\Pi}_{\omega_{\phi}} \tilde{\varepsilon}_{\omega_{\phi}}\] \[\text{where}\;\;\varepsilon_{z_{\phi}} \triangleq y_{\phi}-\mu_{\phi}\] \[\tilde{\varepsilon}_{\omega_{\phi}} \triangleq D\tilde{\mu}_{\phi}-\tilde{f}_{\phi}\]
The agent performs a gradient descent on \(F_{L}\) to infer the value of \(\tilde{x}_{\phi}\) in light of sensory observations. This inference is encoded by a Gaussian variational posterior with mean \(\tilde{\mu}_{\phi}\). As before, we can tune model parameters such that inference is strongly biased by the dynamics model \(\tilde{f}_{\phi}\), where the zeroth-order of motion \(\mu_{\phi}\approx 1\). The reason we set the set-point at 1 becomes evident when we consider the generation of sensory data and actions.
Assume that the focal agent with index \(i\) observes the local average cosine angle between its own heading vector \(\mathbf{v}_{i}\) and those of its neighbors \(\mathbf{v}_{j},j\in N_{in}\), where neighbors are once again defined by membership in some interaction zone3:
Footnote 3: For notational convenience and because it doesn’t change the derivations, we omit observation noise on \(y_{\phi}\).
\[y_{\phi}=\frac{1}{K}\sum_{j\in N_{in}}\mathbf{v}_{i}^{\top}\mathbf{v}_{j}= \langle\cos(\theta_{ij})_{N_{in}}\] (B.55)
where the equivalence between the dot products and the cosine angle is assured when we assume all \(\mathbf{v}_{k},k\in\{i\}\cup N_{in}\) have unit magnitude. Recall that if two unit-magnitude vectors \(\mathbf{v}_{i},\mathbf{v}_{j}\) are parallel, their dot product (cosine angle) is 1. When we once again assume that agents act by adjusting their heading direction, then the action update given the continuous active inference rule in (A.22) has the following form:
\[\frac{d\mathbf{v}_{i}}{dt} =-\frac{1}{\sigma_{z_{\phi}}^{2}}(y_{\phi}-\mu_{\phi})\hat{ \mathbf{v}}\approx(1-y_{\phi})\hat{\mathbf{v}}\] (B.56) \[\text{where}\;\;\hat{\mathbf{v}} =\frac{1}{K}\sum_{j\in N_{in}}\mathbf{v}_{j}\] (B.57)
The approximation in the first line holds when we assume the sensory variance \(\sigma_{z_{\phi}}^{2}\) is 1 and the dynamics prior (either via increasing \(\alpha\) or decreasing \(\sigma_{\omega_{\phi}}^{2}\)) dominates inference such that \(\mu_{\phi}\approx 1\). In this case, the focal agent \(i\) then updates its velocity using the average neighbor velocity. This is proportional to the alignment force in e.g. [15, 18], except that it is also scaled by how unaligned the focal individual is with its neighbourhood, scored by \(1-y_{\phi}\).
## Appendix C Online parameter estimation
In this section we derive update rules for the generative model parameters using a simple gradient descent scheme on the Laplace-approximated variational free energy. In the active
inference literature this process of updating parameters, as opposed to beliefs about states, is often analogized to online learning or neural plasticity [19, 20].
### Updating sensory smoothness
In this section we derive an update equation for the sensory smoothness parameter \(\lambda_{z}\), which captures the generative model's assumptions about the temporal autocorrelation structure of sensory noise \(z\).
Recall the formulation of state space models in generalized coordinates of motion in Section A.1. In addition to providing a concise description of local paths of the state \(\vec{x}_{t}\) in terms of its higher derivatives \(x^{\prime},x^{\prime\prime},...,x^{[n]}\), stochastic differential equations in generalized coordinates also allow one to express _serial correlations_ in the noises at the first order \(z\), by assuming that it can be differentiated (has non-zero, smooth autocovariance) and represented in terms of hierarchical or generalized noises \(z^{\prime},z^{\prime\prime},z^{\prime\prime\prime},...,z^{[n]}\).
Recall the parameterization of the generalized sensory precision \(\tilde{\Pi}^{z}\) as a factorization into two precision matrices, that respectively represent agent's beliefs about the'spatial' and 'temporal' covariance structure. We parameterize these with the two precision parameters \(\Gamma_{z}\) and \(\lambda_{z}\). \(\Gamma_{z}\) encodes the agent's belief about the overall magnitude of the fluctuations, and \(\lambda_{z}\) encodes beliefs about their their serial correlations in time, assuming a Gaussian form for their autocorrelation:
\[\tilde{\Pi}^{z} =S(\lambda_{z})\otimes\Pi(\Gamma_{z})\] \[\Pi(\Gamma_{z}) =\begin{bmatrix}\Gamma_{11}&&&\\ &\Gamma_{22}&&\\ &&\ddots&\\ &&&\Gamma_{LL}\end{bmatrix}\] \[S(\lambda_{z}) =\begin{bmatrix}1&0&-\frac{1}{2\lambda_{z}^{2}}&\cdots\\ 0&\frac{1}{2\lambda_{z}^{2}}&0&\\ -\frac{1}{2\lambda_{z}^{2}}&0&\frac{3}{4\lambda_{z}^{4}}&\\ \vdots&&&\ddots\end{bmatrix}^{-1}\] (C.58)
We implement a form of behavioral plasticity by allowing agents to update \(\lambda_{z}\) using observations. We accomplish this using a gradient descent on variational free energy:
\[\frac{d\lambda_{z}}{dt}=-\kappa_{\theta}\frac{\partial F}{\partial\lambda_{z}}\] (C.59)
where the 'learning rate' \(\kappa_{\theta}\) is typically set to be at least an order of magnitude lower than the update rate of inference \(\kappa_{\mu}\); in all simulations we use \(\kappa_{\theta}=0.001\) and \(n_{\text{LearnIter}}=1\) iteration. This enforces a separation of timescales that is typical in generalized filtering and state-space models that perform simultaneous state- and parameter-estimation [3, 5, 14].
To compute the gradients of the variational free energy with respect to \(\lambda_{z}\), we can start by expressing those components of the (Laplace-approximated) variational free energy that depend on \(\lambda_{z}\):
\[F(\lambda_{z})=\tilde{\varepsilon}_{\bf z}^{\top}\tilde{\Pi}^{\bf z}\tilde{ \varepsilon}_{\bf z}-\ln\left(\det\tilde{\Pi}^{\bf z}\right)\] (C.60)
where we only have included the terms that depend on the sensory precision \(\tilde{\Pi}^{z}\) due to its dependence on \(\lambda_{z}\). The full gradient is then simply:
\[\frac{\partial F}{\partial\lambda_{z}}=\frac{\tilde{\varepsilon}_{\bf z}^{\top }\tilde{\Pi}^{\bf z}\tilde{\varepsilon}_{\bf z}}{\partial\lambda_{z}}-\frac{ \partial\ln\left(\det\tilde{\Pi}^{\bf z}\right)}{\partial\lambda_{z}}\] (C.61)
Starting with the case of a single sensory sector \(L=1\), then the generalized prediction error \(\tilde{\varepsilon}_{\bf z}\) is a vector of prediction errors, one for each order of motion: \(\tilde{\varepsilon}_{z}=\{\varepsilon_{z},\varepsilon_{z}^{\prime},\varepsilon _{z}^{\prime\prime},...\}\) where a sensory prediction error at a given order of motion is simply: \(\varepsilon_{z}^{[n]}=y^{[n]}-\tilde{g}^{[n]}\), where the \(n\) subscript refers to an order of differentiation. In the case of 3 generalized coordinates for the simple scalar case:
\[\frac{\partial F}{\partial\lambda_{z}} = 4\Gamma_{z}\lambda_{z}(\varepsilon_{z}^{\prime})^{2}+\varepsilon _{z}^{\prime\prime}(8\Gamma_{z}\varepsilon_{z}^{\prime\prime}\lambda_{z}^{3}+ 2\Gamma_{z}\varepsilon_{z}\lambda_{z})+2\Gamma_{z}\lambda_{z}\varepsilon_{z} \varepsilon_{z}^{\prime\prime}-\frac{6}{\lambda_{z}}\] (C.62) \[= 4\Gamma_{z}\lambda_{z}(\varepsilon_{z}\varepsilon_{z}^{\prime \prime}+(\varepsilon_{z}^{\prime})^{2}+2\lambda_{z}^{2}(\varepsilon_{z}^{ \prime\prime})^{2})-\frac{6}{\lambda_{z}}\]
Meaning that the update for the \(\lambda_{z}\) parameter can be simplified to (omitting the learning rate \(\kappa_{\theta}\)):
\[\frac{d\lambda_{z}}{dt}=-4\Gamma_{z}\lambda_{z}(\varepsilon_{z}\varepsilon_{z }^{\prime\prime}+(\varepsilon_{z}^{\prime})^{2}+2\lambda_{z}^{2}(\varepsilon_ {z}^{\prime\prime})^{2})+\frac{6}{\lambda_{z}}\] (C.63)
In the case of the distance-tracking generative model we explore in the main text, we assume that the agents can only observe the \(0^{\rm th}\) (position, \(y\)) and \(1^{\rm st}\) (velocity, \(y^{\prime}\)) orders of motion of the hidden states \(\tilde{x}\). This means there are no longer \(2^{\rm nd}\)-order prediction errors \(\varepsilon_{z}^{\prime\prime}\) and the update becomes even simpler:
\[\frac{d\lambda_{z}}{dt} = -4\Gamma_{z}\lambda_{z}(\varepsilon_{z}^{\prime})^{2}+\frac{6}{ \lambda_{z}}\] (C.64) \[\approx -4\Gamma_{z}\lambda_{z}(y_{h,i}^{\prime})^{2}+\frac{6}{\lambda_{z}}\]
where approximation in the second line results in the case of 'biased' inference, i.e., \(\mu\approx\eta\implies\mu^{\prime}\approx 0\), allowing us to replace the velocity prediction error \(y_{h,i}^{\prime}-\mu^{\prime}\) with \(y_{h,i}^{\prime}\).
Given that spatial and temporal precisions are independent from each other due to the factorization of the generalized precision matrix, and further given the diagonal structure of the spatial precision \(\Pi_{\mathbf{z}}\) (i.e., independence in random fluctuations across sensory sectors), we can write an update for \(\lambda_{z}\) that is a sum of squared prediction errors across sensory sectors:
\[\frac{d\lambda_{z}}{dt}\approx-4\Gamma_{z}\lambda_{z}\sum_{l=1}^{L}(y^{\prime} _{h,l})^{2}-\frac{6L}{\lambda_{z}}\] (C.65)
The quadratic form of this update means that the update to the smoothness parameter decays in proportion with the overall magnitude of the velocity prediction errors, regardless of its sign. This means that if the distance is fluctuating quickly in any direction, then the agent will infer that fluctuations are slightly-less serially-correlated at the \(0^{\text{th}}\) order, reflected by a decrease in \(\lambda_{z}\).
## Appendix D Adding a target representation into the generative model
As described in the main text, it is straightforward to add an additional observation model and dynamics model to an agent's generative model to represent the distance between itself and some abstract spatial target, which in the context of the collective information transfer experiments, we represent with \(\mathbf{T}\):
\[\dot{x}_{\text{target}} =-\alpha_{\text{target}}x_{\text{target}}+\omega_{\text{target}} y_{\text{target}} =x_{\text{target}}+z_{\text{target}}\] \[\dot{x}^{\prime}_{\text{target}} =-\alpha_{\text{target}}x^{\prime}_{\text{target}}+\omega^{ \prime}_{\text{target}} y^{\prime}_{\text{target}} =x^{\prime}_{\text{target}}+z^{\prime}_{\text{target}}\] (D.66)
We truncate the generalized hidden states at third order \(\tilde{x}_{\text{target}}=(x_{\text{target}},x^{\prime}_{\text{target}},x^{ \prime\prime}_{\text{target}})\) and the observations at second order \(\tilde{y}_{\text{target}}=(y_{\text{target}},y^{\prime}_{\text{target}})\). When the agent assumes the generalized noises \(\tilde{\omega}_{\text{target}}\) and \(\tilde{z}_{\text{target}}\) are zero-mean and normally-distributed with covariances \(\tilde{\Sigma}^{\omega_{\text{target}}}\) and \(\tilde{\Sigma}^{z_{\text{target}}}\) and leverage the Laplace approximation exactly as we did in the previous section, then we can supplement the Laplace-approximated free energy in (A.35) with additional terms corresponding to target-related prediction errors:
\[F_{L}\propto\frac{1}{2}\left[\tilde{\mathbf{\varepsilon}}_{z\text{- Soc}}^{\top}\tilde{\mathbf{\varepsilon}}_{z\text{- Soc}}+\tilde{\mathbf{\varepsilon}}_{\omega\text{-Soc}}^{\top}\tilde{\Pi}^{\mathbf{\omega \text{-Soc}}}\tilde{\mathbf{\varepsilon}}_{\omega\text{-Soc}}+\tilde{\mathbf{ \varepsilon}}_{z\text{-Tar}}^{\top}\tilde{\Pi}^{z\text{-Tar}}\tilde{\mathbf{ \varepsilon}}_{z\text{-Tar}}+\tilde{\mathbf{\varepsilon}}_{\omega\text{-Tar}}^{ \top}\tilde{\Pi}^{\omega\text{-Tar}}\tilde{\mathbf{\varepsilon}}_{\omega\text{- Tar}}\right]+C\] (D.67)
Here we use the suffixes "-Soc" or "-Tar" to indicate'social' relevant information (related to the average neighbor distance) and the 'target' prediction errors. \(C\) captures all the additional terms (log determinants of precision matrices, etc.) that are constant with respect to the posterior means \(\tilde{\mathbf{\mu}}=(\tilde{\mathbf{\mu}}_{\text{Social}},\tilde{\mathbf{\mu}}_{\text{ target}})\). Following the same reasoning as used to derive the inference and action rules for the case of the social distance hidden states
and observations(\(\tilde{\mathbf{x}}_{\text{Social}},\tilde{\mathbf{y}}_{\text{Social}}\)), we can do the same to derive active inference rules for the target-relevant hidden states and observations \(\tilde{x}_{\text{target}},\tilde{y}_{\text{target}}\):
\[\frac{d\tilde{\boldsymbol{\mu}}_{\text{Social}}}{dt} =D\tilde{\boldsymbol{\mu}}_{\text{Social}}-\nabla_{\tilde{ \boldsymbol{\mu}}_{\text{Social}}}F_{L}(\tilde{\boldsymbol{\mu}}_{\text{Social} },\tilde{\mathbf{y}}_{\text{Social}})\quad\frac{d\mathbf{v}}{dt}=-\left( \nabla_{\mathbf{v}}F_{L}(\tilde{\boldsymbol{\mu}}_{\text{Social}},\tilde{ \mathbf{y}}_{\text{Social}})+\nabla_{\mathbf{v}}F_{L}(\tilde{\mu}_{\text{Target }},\tilde{y}_{\text{Target}})\right)\] \[\frac{d\tilde{\mu}_{\text{Target}}}{dt} =D\tilde{\mu}_{\text{Target}}-\nabla_{\tilde{\mu}_{\text{Target }}}F_{L}(\tilde{\mu}_{\text{Target}},\tilde{y}_{\text{Target}})\] (D.68)
Where expanding the free energy gradients on the right equation leads to an expression for the action update in terms of a precision-weighted sum of vectors, appearing in (14) in the main text.
## Appendix E Numerical methods
We used a forwards Euler-Maruyama scheme to the integrate a (Ito-style) stochastic differential equation for the positions of all agents over time:
\[d\mathbf{r}_{t}=\mathbf{v}_{t}dt+\sigma_{a}dW_{t}\] (E.69)
where the variance of 'action noise' \(\sigma_{a}^{2}\) was set to \(0.01\) for all experiments unless explicitly stated otherwise. We used a step size of \(\Delta t=0.01s\) in the integration. For the current timestep \(\tau\) in'simulation time', we used a simple forwards Euler scheme to integrate the differential equations used for belief updating (see (A.37)) and action (see (A.43)) for each agent in parallel. We use the positions and heading vectors of all agents from the previous integration timestep (\(\tau-\Delta t\)) to generate the observations for the current timestep.
The collective information transfer experiments were performed using custom Julia code, and all other simulations were implemented in JAX using custom code. To accelerate the parameter scans over \(p_{inf}\), \(\Gamma_{z\text{-Social}}\), and \(\Gamma_{z\text{-Target}}\) to create the results in Figure 3 in the main text, we used the high-performance computing clusters (Cobra and Draco) provided by the Max Planck Computing and Data Facility.
|
2303.17622 | Volatile-to-sulfur Ratios Can Recover a Gas Giant's Accretion History | The newfound ability to detect SO2 in exoplanet atmospheres presents an
opportunity to measure sulfur abundances and so directly test between competing
modes of planet formation. In contrast to carbon and oxygen, whose dominant
molecules are frequently observed, sulfur is much less volatile and resides
almost exclusively in solid form in protoplanetary disks. This dichotomy leads
different models of planet formation to predict different compositions of gas
giant planets. Whereas planetesimal-based models predict roughly stellar C/S
and O/S ratios, pebble accretion models more often predict superstellar ratios.
To explore the detectability of SO2 in transmission spectra and its ability to
diagnose planet formation, we present a grid of atmospheric photochemical
models and corresponding synthetic spectra for WASP-39b (where SO2 has been
detected). Our 3D grid contains 11^3 models (spanning 1--100x the solar
abundance ratio of C, O, and S) for thermal profiles corresponding to the
morning and evening terminators, as well as mean terminator transmission
spectra. Our models show that for a WASP-39b-like O/H and C/H enhancement of
~10x Solar, SO2 can only be seen for C/S and O/S <~1.5, and that WASP-39b's
reported SO2 abundance of 1--10 ppm may be more consistent with planetesimal
accretion than with pebble accretion models (although some pebble models also
manage to predict similarly low ratios). More extreme C/S and O/S ratios may be
detectable in higher-metallicity atmospheres, suggesting that smaller and more
metal-rich gas and ice giants may be particularly interesting targets for
testing planet formation models. Future studies should explore the dependence
of SO2 on a wider array of planetary and stellar parameters, both for the
prototypical SO2 planet WASP-39b, as well as for other hot Jupiters and smaller
gas giants. | Ian J. M. Crossfield | 2023-03-30T18:00:00Z | http://arxiv.org/abs/2303.17622v3 | # Volatile-to-sulfur Ratios Can Recover a Gas Giant's Accretion History
###### Abstract
The newfound ability to detect SO\({}_{2}\) in exoplanet atmospheres presents an opportunity to measure sulfur abundances and so directly test between competing modes of planet formation. In contrast to carbon and oxygen, whose dominant molecules are frequently observed, sulfur is much less volatile and resides almost exclusively in solid form in protoplanetary disks. This dichotomy leads different models of planet formation to predict different compositions of gas giant planets. Whereas planetesimal-based models predict roughly stellar C/S and O/S ratios, pebble accretion models more often predict superstellar ratios. To explore the detectability of SO\({}_{2}\) in transmission spectra and its ability to diagnose planet formation, we present a grid of atmospheric photochemical models and corresponding synthetic spectra for WASP-39b (where SO\({}_{2}\) has been detected). Our 3D grid contains 11\({}^{3}\) models (spanning 1-100\(\times\) the solar abundance ratio of C, O, and S) for thermal profiles corresponding to the morning and evening terminators, as well as mean terminator transmission spectra. Our models show that for a WASP-39b-like O/H and C/H enhancement of \(\sim\)10\(\times\) Solar, SO\({}_{2}\) can only be seen for C/S and O/S \(\lesssim\) 1.5, and that WASP-39b's reported SO\({}_{2}\) abundance of 1-10 ppm may be more consistent with planetesimal accretion than with pebble accretion models (although some pebble models also manage to predict similarly low ratios). More extreme C/S and O/S ratios may be detectable in higher-metallicity atmospheres, suggesting that smaller and more metal-rich gas and ice giants may be particularly interesting targets for testing planet formation models. Future studies should explore the dependence of SO\({}_{2}\) on a wider array of planetary and stellar parameters, both for the prototypical SO\({}_{2}\) planet WASP-39b, as well as for other hot Jupiters and smaller gas giants.
0000-0002-8800-7880]Ian J. M. Crossfield
## 1 Introduction
### Elemental Ratios and Planet Formation
Sulfur's condensation temperature of \(T_{C}\)\(\sim\)660 K is far higher than that of other volatiles commonly observed in exoplanet atmospheres (e.g. C, N, O, which all have \(T_{C}\)\(\lesssim\)180 K; Lodders, 2003; Wood et al., 2019). Exoplanetary C, N, and O abundances have therefore been frequently proposed as probes of whether a given planet formed within or beyond the "snow lines" of various C/N/O-bearing molecules (e.g., Oberg et al., 2011; Ohno and Fortney, 2022).
A longstanding example is a planet's carbon-to-oxygen ratio (C/O; Seager et al., 2005). These two elements, the most common in the Sun after H and He (Lodders, 2003; Asplund et al., 2009), are expected to form many of the dominant molecular species in gas giant atmospheres. CO, H\({}_{2}\)O, CO\({}_{2}\), and CH\({}_{4}\) can all induce prominent spectral features and a planet's C/O should strongly affect the relative abundances of these different molecules (Seager et al., 2005; Madhusudhan, 2012; Heng et al., 2016). C/O was also the first such ratio proposed to hold clues to a planet's formation and evolution (e.g., Oberg et al., 2011), based on the idea that a gas giant's composition should be determined by the location(s) in its natal disk where the planet accretes most of its mass.
However, growing evidence suggests that a planet's formation history cannot be interpreted simply by reading off its C/O ratio. For example, Mordasini et al. (2016) linked a chain of planet formation, disk, and atmospheric models to find that a planet's C/O ratio may
not uniquely correlate with its initial formation location. Similarly, subsequent studies of planet assembly also indicate that the C/O ratio provides, at best, limited constraints on how and where a planet formed and accreted most of its mass (e.g., Turrini et al., 2021; Schneider and Bitsch, 2021; Pacetti et al., 2022; Bitsch et al., 2022).
Other axes beyond C/O may therefore be necessary if we hope to determine how and where a given planet may have formed. Although numerous groups have explored the dependence of atmospheric nitrogen abundance (as parameterized by N/O) on a planet's formation and accretion, (Turrini et al., 2021; Schneider and Bitsch, 2021; Ohno and Fortney, 2022, 2022) measuring a planet's N abundance is only feasible at temperatures cooler than that of most hot Jupiters (\(<\)1000 K).
### SO\({}_{2}\) in Gas Giant Atmospheres
Sulfur's high \(T_{C}\) implies that this species should be entirely in the solid phase beyond \(\sim\)0.3 AU in protoplanetary disks (e.g., Oka et al., 2011), where giant planet formation is thought to occur. At \(T\gtrsim 1000\) K, equilibrium chemistry predicts that most sulfur should reside in H\({}_{2}\)S (Zahnle et al., 2009; Polman et al., 2023; Tsai et al., 2023). However, the interaction of high-energy stellar photons with the planet's atmosphere results in the H\({}_{2}\)S abundance decreasing rapidly at pressures \(\lesssim 1\) mbar (Polman et al., 2023; Tsai et al., 2023). Specifically, some H\({}_{2}\)S is converted to SO\({}_{2}\) via photolysis of H\({}_{2}\)O through the net reaction
\[\mathrm{H_{2}S+2~{}H_{2}O+photon\to SO_{2}+3~{}H_{2}}. \tag{1}\]
This SO\({}_{2}\) resides at pressures of roughly 0.01-10 mbar, where it may be observed via transmission spectroscopy if sufficiently abundant (Polman et al., 2023; Tsai et al., 2023). Because SO\({}_{2}\) contains three heavy atoms, it may also be a useful probe of the average overall level of metal enhancement in a planet's atmosphere (Polman et al., 2023).
Transmission spectroscopy of hot Jupiter WASP-39b through the JWST Early Release Science program (Program 1366; Batalha et al., 2017) revealed the clear signatures of numerous absorbers, including SO\({}_{2}\) (JTEC Team et al., 2023; Rustamkulov et al., 2023; Alderson et al., 2023; Ahrer et al., 2023; Feinstein et al., 2023). Those observations detected excess absorption from 3.95-4.15 \(\mu m\) that was interpreted as roughly 1-10 ppm of SO\({}_{2}\) at mbar pressures (Tsai et al., 2023).
In this paper, we explore how the abundances of S, as well as C and O, determine the atmospheric SO\({}_{2}\) abundance and observable transmission spectra of short-period, irradiated gas giants. Furthermore, we suggest that SO\({}_{2}\) provides a unique opportunity to measure volatile-to-sulfur ratios that could be a key discriminant between competing planet formation theories.
We start by presenting a connection between planetary volatile-to-sulfur ratios and planet formation models in Sec. 2. In Sec. 3 we then present a new grid of photochemical models and associated synthetic transmission spectra which we use to investigate our ability to constrain atmospheric abundances via SO\({}_{2}\). Finally, we conclude in Sec. 4.
## 2 Sulfur's Connection to Planet Formation
As we describe below, the abundance ratios of volatiles to sulfur - e.g., C/S and O/S - may provide a powerful opportunity to distinguish between competing models of planet formation. Modern planet formation simulations frequently track the atmospheric elemental abundances of giant planets with a range of formation locations and migration histories, and so provide hypotheses that can be tested via measurements of atmospheric composition. However, these studies have not yet focused specifically on sulfur.
Sulfur is thought to be carried largely in the volatile phase in the ISM and during the earliest stages of protoplanetary disks, but is quickly reprocessed until \(\gtrsim\)90% of disk sulfur is carried in solid (refractory) species (Kama et al., 2019; Le Gal et al., 2021; Riviere-Marichalar et al., 2022). Its condensation temperature of \(\sim\)660 K (Lodders, 2003) is high enough that sulfur remains in refractory form throughout most of the disk, in contrast to volatile species such as C, N, and O whose dominant carriers trace "snow lines" in the disk at distances of several AU.
Pebble accretion models of planet formation predict that gas giants may become highly enriched in volatile elements as compared to refractories (Schneider and Bitsch, 2021, 2021). This occurs because giant planets induce pressure extrema in the disk that inhibit the inward migration and accretion of solids. Thus in these models less of the always-refractory S may be accreted when compared to volatiles such as C or O (so long as the accreting planet is exterior to the evaporation line of the dominant sulfur carrier). Fig. 1 shows the predicted C/S, O/S, and C/O ratios from the pebble accretion models of Schneider and Bitsch (2021); for ease of display, we present the mean and standard deviation (in log space) of their predicted ratios over all combinations of viscosity parameter \(\alpha\) and refractory grain carbon content. Although their models span a range of compositions, the general trend is that when pebble accretion is the dominant mode for accreting solids, gas giants
may frequently exhibit roughly Solar C/O ratios but much higher C/S and O/S ratios: from 3-20\(\times\) Solar.
Models in which solids are accreted as planetesimals tell a different tale. Fig. 1 shows that the planetesimal-based formation models of Pacetti et al. (2022) predict nearly Solar C/S, O/S, and C/O ratios regardless of initial planet location (again, for clarity we plot only the logarithmic mean and standard deviation of their several models). These models (consistent with and building on the initial models of Turrini et al., 2021) predict higher heavy-element abundances (C/H, etc.) for planets that started to form closer in to the star, similar to the absolute abundance trend predicted by pebble-accretion models (Schneider and Bitsch, 2021). Thus the absolute abundances of heavy elements in a planet's atmosphere may also be a useful proxy for planet formation location.
Here we focus on the C/S and O/S ratios: Fig. 1 shows that these ratios could allow a particularly powerful test of which solid accretion mode dominated a planet's formation history. Whereas both types of models predict that a planet's final atmospheric C/O ratio should be roughly stellar, pebble and planetesimal accretion models can predict volatile-to-refractory (C/S and O/S) ratios that are starkly different at all initial formation locations. In reality planet formation may involve a combination of both planetesimal and pebble accretion, in which case both these processes would shape the observed composition of giant planets (Biazzo et al., 2022). Nonetheless Fig. 1 still suggests that superstellar volatile-to-sulfur ratios may be a compelling signpost of significant pebble accretion.
Refractory elements less volatile than sulfur have already been detected in some ultra-hot planets (e.g., Lothringer et al., 2021), but these elements will condense in most gas giant atmospheres. Since sulfur only condenses at \(\lesssim 660\) K (at 0.1 mbar; Lodders, 2003, though in planetary atmospheres S vapor can exist at lower temperatures) its abundance - and its relation to that of volatile elements - may thus be an especially useful tool for constraining the dominant mechanism of planet formation.
## 3 Modeling
### Modeling Details
Having shown that volatile-to-sulfur abundances may test planet formation theories, we now explore how atmospheric S -- as measured by SO\({}_{2}\) abundances -- can reveal different atmospheric compositions. Although many factors have a strong impact on a planet's SO\({}_{2}\) abundance -- such as a planet's thermal (temperature-pressure) structure, the level of XUV flux received, and sequestration of some elements in aerosols -- we restrict this exploration to chemical composition and leave these other axes to future studies.
We examine a three-dimensional parameter space of atmospheric elemental abundances: 113 combinations of a range of elemental enhancements of carbon, oxygen, and sulfur (from solar to 100\(\times\) solar), using the VULCAN1 photochemical kinetics code (Tsai et al., 2017, 2021). For each combination of C, O, and S enhancement we calculate atmospheric abundance profiles using VULCAN's SNCHO chemical network, which includes 575 chemical and photochemical reactions. We use WASP-39b as our simulated target; based on initial analyses of its spectrum, we hold all abundances other than C, O, and S were held to 10\(\times\) solar (JTEC Team et al., 2023; Rustamkulov et al., 2023; Alderson et al., 2023; Ahrer et al., 2023; Feinstein et al., 2023; Tsai et al., 2023). The star WASP-39's XUV flux remains largely unknown, so we adopt the same stellar spectrum used by Tsai et al. (2023). The planet's thermal structure is also mostly unconstrained, so we compute our grid using two different, pre-determined temperature profiles -- one each for the GCM-derived morning and evening terminator profiles
Figure 1: Predicted atmospheric ratios of C/S, O/S, and C/O based on initial planet location, from the planet formation models of Schneider and Bitsch (2021, SB21; upper, darker curves) and Pacetti et al. (2022, P22; lower, lighter curves). Although individual models span a range of final compositions (only the log-mean and standard deviation of each model set are plotted here), overall the differences suggest that volatile-to-sulfur ratios could provide a strong test as to whether a planet formed mainly via pebble or planetesimal accretion. The error bars at bottom-right and the shaded region show the abundance ratios inferred for WASP-39b from Fig. 4.
presented by Tsai et al. (2023). Finally, we also use the same \(K_{zz}\) profile described by Tsai et al. (2023), scaling with pressure as \(P^{-1/2}\)(following Lindzen, 1981; Moses et al., 2022). The system parameters and abundance values we used in our analysis are listed in Table 1. All the VULCAN outputs are available as machine-readable supplements to this paper2.
Footnote 2: [https://doi.org/10.5281/zenodo.7760360](https://doi.org/10.5281/zenodo.7760360)
We then use the petitRadTrans3 radiative transfer code (Molliere et al., 2019) to calculate synthetic transmission spectra corresponding to each VULCAN run. We convert VULCAN's volume mixing ratios (VMRs) to petitRadTrans' mass mixing ratios. We use petitRadTrans' medium-resolution, correlated-k opacity sources, giving our synthetic spectra a spectral resolution of \(\sim\)1,000 from 1-25 \(\mu m\). The molecules and opacity sources we include are H\({}_{2}\)O, CO, CO\({}_{2}\), SO\({}_{2}\), CH\({}_{4}\), HCN, H\({}_{2}\)S, CH\({}_{3}\), C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{4}\), CN, CH, OH, and SH (Rothman et al., 2010; Polyansky et al., 2018; Yurchenko et al., 2020; Underwood et al., 2016; Yurchenko and Tennyson, 2014; Harris et al., 2006; Barber et al., 2014; Chubb et al., 2018; Azzam et al., 2016; Adam et al., 2019; Chubb et al., 2020; Brooke et al., 2014; Bernath, 2020; Syme and McKemmish, 2020; Masseron et al., 2014; Brooke et al., 2016; Yousefi et al., 2018; Gorman et al., 2019). We have three sets of synthetic spectra: one each for the morning and evening thermal profiles, plus a set of spectra that are the mean of the morning and evening spectra (corresponding to the transmission spectrum that would be observed during transit). All the petitRadTrans synthetic spectra are also available as machine-readable supplements to this paper4.
Footnote 3: [https://petitradtrans.readthedocs.io/](https://petitradtrans.readthedocs.io/)
We note that the abundances of 15 elements (including C and O, but not S) were measured for WASP-39 (Polanski et al., 2022), revealing a composition for which every element is consistent with the solar values at \(<1\sigma\). Similarly, the C/O ratio of \(0.46\pm 0.09\) is consistent with the solar value of 0.55 (assuming the abundances of Lodders, 2020). Therefore, although generally stellar (not solar) abundances are the appropriate referent for atmospheric modeling, we elect to use abundance levels scaled from Solar values given WASP-39's chemical similarity to the Sun.
### Differences From Previous Studies
Our modeling effort expands on previous exploration of SO\({}_{2}\) in several ways. The first such study presented a comprehensive examination of SO\({}_{2}\) abundance in hot Jupiter atmospheres (Polman et al., 2023). Their study also used the VULCAN code to cover metallicities of 1-20\(\times\) Solar, three values of \(K_{zz}\) (constant with altitude, but spanning three orders of magnitude), C/O ratios from 0.25-0.9, several different stellar spectra, and three planetary temperatures (spanning 400 K). More recently, Tsai et al. (2023) demonstrated that SO\({}_{2}\) causes the 4.2 \(\mu\)m absorption feature seen in JWST spectroscopy of WASP-39b (Rustamkulov et al., 2023; Ahrer et al., 2023; Alderson et al., 2023). That investigation used four photochemistry codes (including VULCAN) to span three
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multicolumn{1}{c}{ Name} & \multicolumn{1}{c}{Units} & \multicolumn{1}{c}{Value} & \multicolumn{1}{c}{Source} \\ \hline \multicolumn{4}{c}{_System parameters:_} \\ \(R_{*}\) & \(R_{\odot}\) & 0.932 & Carter \& May, et al., in prep. \\ \(R_{p}\) & \(R_{J}\) & 1.279 & Carter \& May, et al., in prep. \\ \(M_{P}\) & \(M_{J}\) & 0.281 & Carter \& May, et al., in prep. \\ \(gp\) & m s\({}^{-2}\) & 4.26 & Carter \& May, et al., in prep. \\ \(a\) & AU & 0.04828 & Carter \& May, et al., in prep. \\ \multicolumn{4}{c}{_Modeling parameters:_} \\ He/H & (solar VMR) & \(8.38\times 10^{-2}\) & Lodders (2020) \\ C/H & (solar VMR) & \(2.95\times 10^{-4}\) & Lodders (2020) \\ O/H & (solar VMR) & \(5.37\times 10^{-4}\) & Lodders (2020) \\ S/H & (solar VMR) & \(1.41\times 10^{-5}\) & Lodders (2020) \\ \(P\) & bar & 10–10\({}^{-9}\) & \\ \(P_{0}\) & bar & 0.01 & \\ \(z\) & deg & 83 & \\ C & (\(\times\) solar) & 1, 1.8, 3, 5.6, 7.5, 10, 13, 18, 30, 56, 100 \\ O & (\(\times\) solar) & 1, 1.8, 3, 5.6, 7.5, 10, 13, 18, 30, 56, 100 \\ S & (\(\times\) solar) & 1, 1.8, 3, 5.6, 7.5, 10, 13, 18, 30, 56, 100 \\ \hline \end{tabular}
\end{table}
Table 1Model Parameters:
Figure 2: In the main panel, the dashed line shows the SO\({}_{2}\) VMR, averaged from 0.01–10 mbar, as all elemental abundances are increased in lockstep; the gray region shows the approximate SO\({}_{2}\) abundance reported for WASP-39b. The inset shows the full vertical SO\({}_{2}\) profiles.
Figure 4: Abundance of SO\({}_{2}\) (averaged from 0.01–10 mbar) as the atmospheric ratios of C/O (_top_), C/S (_middle_), and O/S (_bottom_) are varied. The gray regions show the approximate SO\({}_{2}\) abundance reported for WASP-39b, which correspond here to C/S and O/S \(\lesssim\)1.5.
Figure 3: Abundance of SO\({}_{2}\) (averaged from 0.01–10 mbar) as combinations of two elemental abundances are varied: C vs. O (_top_), C vs. S (_middle_), and O vs. S (_bottom_). The points indicate the locations of the model grid points, which we linearly interpolate between. In each panel, all other elements are held fixed at 10\(\times\) Solar abundance.
metallicilities (from 5 to 20\(\times\) Solar), three \(K_{zz}\) profiles (spanning two orders of magnitude), three C/O ratios (from 0.25 to 0.75), three stellar spectra (spanning two orders of magnitude in irradiation), and a range of temperatures (from 600 to 2000 K).
Our analysis builds upon both these works by (i) examining a more densely sampled and fully two-dimensional grid of C and O abundances, (ii) adding a third dimension by exploring a wide range of S abundances, and (iii) extending the analysis up to significantly higher atmospheric metallicities.
### Discussion and Interpretation
Fig. 2 shows how the SO\({}_{2}\) abundance increases as all three elements (C, O, and S) are increased in lockstep. Whereas the abundance of the triatomic CO\({}_{2}\) (produced via equilibrium chemical processes) increases quadratically with metallicity (Zahnle et al., 2009), the more complicated formation pathways of SO\({}_{2}\) result in a more complicated metallicity dependence: much steeper than CO\({}_{2}\) at low metallicity, and shallower at high metallicity. This suggests that if high-metallicity (\(>100\times\) Solar) ice giants also form photochemical SO\({}_{2}\), its VMR is unlikely to be \(\gtrsim\)200 ppm (as found by Tsai et al., 2021).
Fig. 3 shows three slices through our 3D abundance grid, depicting the SO\({}_{2}\) volume mixing ratio (VMR, averaged from 0.01-10 mbar) versus C, O, and S abundances. Fig. 4 shows 1D slices as plotted against the abundance ratios C/O, C/S, and O/S.
Fig. 4 reveals that the SO\({}_{2}\) abundances decreases as the atmospheric C/O ratio increases (while the S abundance is held constant). Consistent with the results of Polman et al. (2023), the dependence on C/O is similar regardless of whether we increase C or decrease O (though with a slightly steeper dependence when O is varied). In either case, when C/O increases beyond the Solar value the SO\({}_{2}\) abundance rapidly drops below detectable levels. The detection of SO\({}_{2}\) is therefore a strong sign of a C/O ratio \(\lesssim\) the Solar value.
More excitingly, Fig. 4 shows how measurements of the SO\({}_{2}\) abundance can distinguish between a variety of volatile-to-sulfur ratios. JWST transit spectroscopy of WASP-39b reveal it to have an atmospheric metallicity \(\sim\)10\(\times\) Solar, a C/O ratio \(\lesssim\) the Solar value, and a SO\({}_{2}\) VMR of 1-10 ppm (JTEC Team et al., 2023; Rustamkulov et al., 2023; Alderson et al., 2023; Ahrer et al., 2023; Feinstein et al., 2023; Tsai et al., 2023). Assuming an overall metallicity of 10\(\times\) Solar, the bottom two panels of Fig. 4 show that the SO\({}_{2}\) measurement constrains both C/S and O/S to \(\lesssim\)1.5\(\times\) Solar. Reference to Fig. 1 demonstrates that such values are rather more consistent with planetesimal accretion models (Turrini et al., 2021; Pacetti et al., 2022) than with pebble accretion (Schneider and Bitsch, 2021). The only pebble formation models of Schneider and Bitsch (2021) that can
Figure 5: Synthetic spectra showing the effect of varying the S/H ratio from 1–100\(\times\) Solar while keeping O/H and C/H at 10\(\times\) Solar, as inferred for WASP-39b (equivalent to varying C/S and O/S from 10–0.1\(\times\) Solar). At 10\(\times\) Solar metallicity SO\({}_{2}\) is detectable up to volatile-to-sulfur ratios of \(\lesssim\)3\(\times\) Solar. The shaded rectangles show where absorption is dominated by SO\({}_{2}\) (green) and by H\({}_{2}\)S (grey).
Figure 6: Synthetic spectra showing the effect of varying C/H and O/H from 10–100\(\times\) Solar while keeping the C/S and O/S ratios at 10\(\times\) Solar, as might be expected from pebble accretion (equivalent to varying S/H from 1–10\(\times\) Solar). At 10\(\times\) Solar C/S and O/S, SO\({}_{2}\) is detectable down to a volatile enhancement level of \(\sim\)18\(\times\) Solar. The shaded rectangles show where absorption is dominated by SO\({}_{2}\) (green) and by H\({}_{2}\)S (grey). Note that for volatile enrichment \(\gtrsim\)100\(\times\) Solar the amplitude of spectral features begins to decrease as the mean molecular weight begins to increase.
approximately reproduce these ratios assume an initial formation location of 3 AU and \(\alpha\leq 5\times 10^{-4}\).
We also show a few representative examples of our synthetic transmission spectra. In Fig. 5 we see the effect on WASP-39b's transmission spectra of varying the S/H ratio from 1-100\(\times\) Solar, while keeping O/H=C/H=10\(\times\) Solar (as inferred for WASP-39b); this is equivalent to varying C/S and O/S from 10\(\times\) down to 0.1\(\times\) Solar. The figure shows that at WASP-39b-like volatile enrichment levels, SO\({}_{2}\) has a significant impact on a planet's transmission spectrum for S/H\(\gtrsim 3\times\) Solar (C/S or O/S \(\lesssim\)3\(\times\) Solar).
Tsai et al. (2023) found that varying their \(K_{zz}\) profile (nominally spanning \(5\times 10^{7}\) to \(10^{11}\) cm\({}^{2}\) s\({}^{-1}\)) by \(\pm\) an order of magnitude had only a minor impact, consistent with the results of Hobbs et al. (2021) from isothermal models with \(K_{zz}\) spanning \(10^{6}\)-\(10^{12}\) cm\({}^{2}\) s\({}^{-1}\). In contrast, other studies report that increasing a constant-with-altitude \(K_{zz}\) to \(10^{11}\) cm\({}^{2}\) s\({}^{-1}\) sharply decreases the amount of observable SO\({}_{2}\)(Tsai et al., 2021; Polman et al., 2023). Although \(K_{zz}\) is a challenging quantity to empirically constrain, it would at least be useful to understand how it quantitatively impacts the measurement of SO\({}_{2}\) in planetary atmospheres.
Finally, future studies might also test the impact of self-consistently modeling the thermal and chemical profiles in the atmospheres, thereby accounting for thermal back-reaction of photochemical species such as SO\({}_{2}\) on the planet's vertical temperature structure. Similarly, the interplay of global circulation and atmospheric chemistry may reveal that predictions made from 1D models (as in this work) - or even from post-processed chemistry-free global circulation models - may lead to inaccurate interpretations of exoplanet measurements (Lee et al., 2023).
Dedicated to E\({}^{3}\). We heartily thank S.-M. Tsai for help with VULCAN, for general discussions of photochemistry, and for useful comments on an early draft of this paper. We thank B. Bitsch for several useful discussions that improved the quality of this paper, and we thank D. Turrini for clarifying several points regarding planet formation.
|
2309.02512 | Reciprocity via Reciprocants | The determinant of a skew-symmetric matrix has a canonical square root given
by the Pfaffian. Similarly, the resultant of two reciprocal polynomials of even
degree has a canonical square root given by their reciprocant. Computing the
reciprocant of two cyclotomic polynomials yields a short and elegant proof of
the Law of Quadratic Reciprocity. | Matthew Baker | 2023-09-05T18:01:13Z | http://arxiv.org/abs/2309.02512v2 | # Reciprocity via Reciprocants
###### Abstract.
The determinant of a skew-symmetric matrix has a canonical square root given by the Pfaffian. Similarly, the resultant of two reciprocal polynomials of even degree has a canonical square root given by their _reciprocal_. Computing the reciprocal of two cyclotomic polynomials yields a short and elegant proof of the Law of Quadratic Reciprocity.
We thank Antoine Chambert-Loir for pointing us to Merindol's paper [14]. Thanks also to Darij Grinberg, Franz Lemmermeyer, and Evan O'Dorney for helpful feedback on an earlier version of this paper. The author was supported by NSF grant DMS-2154224 and a Simons Fellowship in Mathematics.
## 1. Introduction
Let \(p\) be a prime number and let \(a\) be an integer not divisible by \(p\). The _Legendre symbol_\(\left(\frac{a}{p}\right)\) is defined by \(\left(\frac{a}{p}\right)=1\) if \(a\) is a square modulo \(p\) and \(\left(\frac{a}{p}\right)=-1\) otherwise.
According to _Euler's criterion_, \(a^{(p-1)/2}\equiv 1\pmod{p}\) if \(\left(\frac{a}{p}\right)=1\) and \(a^{(p-1)/2}\equiv-1\pmod{p}\) if \(\left(\frac{a}{p}\right)=-1\).
The Law of Quadratic Reciprocity, first proved by Gauss, asserts that there is an unexpected relationship between \(\left(\frac{p}{q}\right)\) and \(\left(\frac{a}{p}\right)\) when \(p,q\) are distinct odd primes, and a supplement to the law asserts that \(\left(\frac{2}{p}\right)\) depends only on \(p\) modulo \(8\).
**Theorem 1.1** (Law of Quadratic Reciprocity).:
* _If_ \(p\) _and_ \(q\) _are distinct odd primes then_ \(\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{\frac{p-1}{2}\frac{ \alpha-1}{2}}\)_._
* _If_ \(p\) _is an odd prime then_ \(\left(\frac{2}{p}\right)=(-1)^{\frac{p^{2}-1}{8}}\)_._
There are currently more than \(300\) known proofs of the Law of Quadratic Reciprocity [10]. In this paper we will present an elegant proof that deserves to be better known. The basic approach, via the identity
\[\mathrm{Res}(g,f)=(-1)^{\deg(f)\cdot\deg(g)}\mathrm{Res}(f,g) \tag{1}\]
for resultants, appears to have been independently discovered on at least two occasions [14, 9], see Section 5 below for a discussion of related work.
Our exposition is somewhat novel, in that a central role is played by an expression that we dub the _reciprocal_.1 The resultant of two reciprocal2 polynomials \(f\) and \(g\) of even degree is always a square, and the reciprocal of \(f\) and \(g\) furnishes
a canonical square root. If \(p\) and \(q\) are distinct primes, the resultant of the cyclotomic polynomials \(\Phi_{p}(x)\) and \(\Phi_{q}(x)\) is always equal to \(1\), but their reciprocal \(\operatorname{Rec}(\Phi_{p}(x),\Phi_{q}(x))\) turns out to be the Legendre symbol \(\left(\frac{q}{p}\right)\). By symmetry, we have \(\operatorname{Rec}(\Phi_{q}(x),\Phi_{p}(x))=\left(\frac{p}{q}\right)\), and part (a) of the Law of Quadratic Reciprocity is then a consequence of (1).
We also provide a proof via reciprocants of the supplementary law for \(\left(\frac{2}{p}\right)\).
Our proof of the resultant identity \(\operatorname{Res}(\Phi_{p}(x),\Phi_{q}(x))=1\) is original, to the best of our knowledge. It is in some ways more elementary than the other proofs we have seen of this formula.
Throughout the article, we strive to keep the exposition as elementary as possible, with the goal of making the paper understandable by a reader who has taken basic undergraduate courses in number theory, abstract algebra, and linear algebra. In order to make the paper as self-contained as possible, we provide two appendices, one on resultants and one on the trace polynomial (which is used to define the reciprocal).
## 2. Resultants and Reciprocants
### Resultants
The resultant of two monic3 polynomials \(f,g\in R[x]\) over an integral domain \(R\) satisfies numerous useful identities, including the following (see Appendix A for details):
Footnote 3: We restrict ourselves to resultants of _monic_ polynomials over an integral domain here, as (a) it’s the only case we need and (b) the identities (RES1)-(RES4) look cleaner in the monic case.
1. If \(f(x)=(x-\alpha_{1})\cdots(x-\alpha_{m})\) with all \(\alpha_{i}\) in \(R\), then \(\operatorname{Res}(f,g)=\prod_{i}g(\alpha_{i})\).
2. \(\operatorname{Res}(g,f)=(-1)^{\deg(f)\cdot\deg(g)}\operatorname{Res}(f,g)\).
3. Suppose \(\phi:R\to R^{\prime}\) is a ring homomorphism. Then4 Footnote 4: Here \(\phi(f)\in R^{\prime}[x]\) denotes the image of \(f\in R[x]\) under the homomorphism \(R[x]\to R^{\prime}[x]\) induced by \(\phi\), and similarly for \(\phi(g)\).
\[\phi(\operatorname{Res}(f,g))=\operatorname{Res}(\phi(f),\phi(g)).\]
4. If \(g(x)=f(x)\cdot q(x)+r(x)\) with \(f,g,r\in R[x]\) monic and \(q\in R[x]\) arbitrary, then \[\operatorname{Res}(f,g)=\operatorname{Res}(f,r).\]
### Reciprocal polynomials and their traces
A polynomial \(g(x)=a_{0}+a_{1}x+\cdots+a_{n}x^{n}\in R[x]\) with coefficients in a ring5\(R\) is called _reciprocal_ if \(a_{n}\neq 0\) and \(a_{k}=a_{n-k}\) for all \(k=0,1,\ldots,n\). Equivalently, \(g\) is reciprocal if and only if \(g(x)=x^{n}g(\frac{1}{x})\).
Footnote 5: All rings in this paper will be nonzero commutative rings with identity.
If \(g\in R[x]\) is reciprocal of even degree \(2m\), there is a unique polynomial \(g^{\#}(x)\in R[x]\) of degree \(m\) such that
\[g(x)=x^{m}g^{\#}(x+\frac{1}{x}). \tag{2}\]
We call \(g^{\#}(x)\) the _trace polynomial_ of \(g\) (see Appendix B for details). Note that if \(g(x)\) is monic, then \(g^{\#}(x)\) is monic as well.
The following lemma will be proved in Appendix B:
**Lemma 2.1**.: _If \(g(x)=\prod_{i=1}^{m}(x-\alpha_{i})(x-\alpha_{i}^{-1})\) for some units \(\alpha_{1},\ldots,\alpha_{m}\in R^{\times}\), then \(g\) is reciprocal and_
\[g^{\#}(x)=\prod_{i=1}^{m}\left(x-(\alpha_{i}+\alpha_{i}^{-1})\right). \tag{3}\]
_Remark 2.2_.: Conversely, it follows from (2) that if \(K\) is a field, \(g\in K[x]\) is reciprocal of even degree \(2m\), and \(L\) is a splitting field for \(g\) over \(K\), there exist \(\alpha_{1},\ldots,\alpha_{m}\in L^{\times}\) such that \(g(x)=\prod_{i=1}^{m}(x-\alpha_{i})(x-\alpha_{i}^{-1})\).
### Reciprocants
Over an integral domain, the reciprocal is a canonical square root of the resultant of two reciprocal polynomials. More precisely:
**Proposition 2.3**.: _If \(R\) is an integral domain and \(f,g\in R[x]\) are monic reciprocal polynomials of even degree, then_
\[\operatorname{Res}(f,g)=\operatorname{Rec}(f,g)^{2},\]
_where \(\operatorname{Rec}(f,g):=\operatorname{Res}(f^{\#},g^{\#})\in R\) is the reciprocal of \(f\) and \(g\)._
Proof.: Let \(K\) be the fraction field of \(R\) and let \(L\) be a splitting field for \(f\) over \(K\). By Remark 2.2, we can write \(f(x)=\prod_{i=1}^{m}(x-\alpha_{i})(x-\alpha_{i}^{-1})\) with \(\alpha_{i}\in L\) for all \(i\). In what follows, will apply (RES3) to the natural injective map \(\phi:R\to L\).
Let \(a_{i}=\alpha_{i}+\alpha_{i}^{-1}\) for \(i=1,\ldots,m\). We have:
\[\operatorname{Res}(f^{\#},g^{\#})^{2} =\prod_{i}g^{\#}(a_{i})\cdot\prod_{i}g^{\#}(a_{i})\qquad\text{( by (RES1), (RES3), and (3))}\] \[=\prod_{i}\alpha_{i}^{-m}g(\alpha_{i})\cdot\prod_{i}\alpha_{i}^{ m}g(\alpha_{i}^{-1})\qquad\text{(by (\ref{eq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Reqeq:Req:Req:Reqeq:Req:Req:Reqeq:Reqeq:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Reqeq:Req:Req:Reqeq:Req:Req:Req:Reqeq:Reqeq:Req:Req:Req:Req:Reqeq:Req:Reqeq:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Reqeq:Req:Req:Req:Reqeq:Req:Req:Reqeq:Req:Req:Reqeq:Req:Req:Req:Reqeq:Req:Req:Reqeq:Req:Req:Req:Reqeq:Req:Req:Req:Req:Req:Req:Reqeq:Req:Reqeq:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req:Req
**Proposition 3.1**.: _If \(m,n\) are relatively prime positive integers, \(\operatorname{Res}(g_{m},g_{n})=1\)._
Proof.: If \(m=n=1\) then \(\operatorname{Res}(g_{m},g_{n})=\operatorname{Res}(1,1)=1\). We may therefore suppose without loss of generality that \(n>m\). Note that since \(\gcd(m,n)=1\), at least one of \(m\) and \(n\) is odd.
By the division algorithm, we can write \(n=mq+r\) with \(q,r\) integers such that \(q\geq 0\) and \(0\leq r<q\). Since at least one of \(m\) and \(n\) is odd, the same is true for \(m\) and \(r\).
Working in the quotient ring \(\mathbb{Z}[x]/(x^{m}-1)\), we have
\[x^{n}-1 \equiv(x^{m})^{q}\cdot x^{r}-1\] \[\equiv 1^{q}\cdot x^{r}-1\] \[\equiv x^{r}-1\pmod{x^{m}-1}.\]
In other words, there is a polynomial \(h(x)\in\mathbb{Z}[x]\) such that
\[x^{n}-1=(x^{m}-1)h(x)+x^{r}-1.\]
Dividing both sides by \(x-1\) gives
\[g_{n}(x)=g_{m}(x)h(x)+g_{r}(x).\]
By (RES4) and (RES2), we have
\[\operatorname{Res}(g_{m},g_{n})=\operatorname{Res}(g_{m},g_{r})=\operatorname{ Res}(g_{r},g_{m}). \tag{6}\]
Since \(\gcd(m,n)=1\), it follows from (6) and the Euclidean algorithm that there is an integer \(k\geq 1\) such that
\[\operatorname{Res}(g_{m},g_{n})=\operatorname{Res}(g_{k},g_{1})=\operatorname{ Res}(g_{k},1)=1.\]
_Remark 3.2_.: Conversely, if \(\gcd(m,n)=d>1\) then (RES1) and (RES3) (applied to the natural injection \(\phi:\mathbb{Z}\hookrightarrow\mathbb{C}\)) imply that \(\operatorname{Res}(g_{m},g_{n})=0\), since a primitive \(d^{\text{th}}\) root of unity in \(\mathbb{C}\) is a common root of \(g_{m}\) and \(g_{n}\).
_Remark 3.3_.: Here is an alternate proof of Proposition 3.1 which is arguably more conceptual, but somewhat less elementary. First, observe that if \(K\) is any field and \(\alpha\in K\) satisfies both \(\alpha^{m}=1\) and \(\alpha^{n}=1\), with \(\gcd(m,n)=1\), then necessarily \(\alpha=1\). Let \(p\) be a prime number, let \(\mathbf{F}_{p}\) be the finite field of order \(p\), and let \(\phi:\mathbb{Z}\to\mathbf{F}_{p}\) be the natural homomorphism. Applying (RES3) to \(\phi\) implies, together with (RES1) and the above observation with \(K=\mathbf{F}_{p}\), that \(\operatorname{Res}(g_{m},g_{n})\not\equiv 0\pmod{p}\). Since this holds for all prime numbers \(p\), we must have \(\operatorname{Res}(g_{m},g_{n})=\pm 1\). By Proposition 2.3, we must in fact have \(\operatorname{Res}(g_{m},g_{n})=1\).
Assume from now on that \(n\) is odd. Since \(g_{n}\) is a reciprocal polynomial of even degree, it follows from (2) that
\[g_{n}^{\#}(2)=g_{n}(1)=n. \tag{7}\]
Furthermore, for any ring \(R\), if \(g(x)=(x-1)^{2m}\in R[x]\) then, by Lemma 2.1,
\[g^{\#}(x)=(x-2)^{m}. \tag{8}\]
_Remark 3.4_.: By Remark B.3, we have \(g_{1}^{\#}(x)=1\) and \(g_{3}^{\#}(x)=x+1\), and
\[g_{n}^{\#}(x)=xg_{n-2}^{\#}(x)-g_{n-4}^{\#}(x) \tag{9}\]
for all odd integers \(n\geq 5\). This implies that the polynomials \(g_{n}^{\#}\) are related to the classical _Lucas polynomials_\(L_{n}(x)\), defined for \(n\geq 0\) by \(L_{0}(x)=2,L_{1}(x)=x\), and \(L_{n}(x)=xL_{n-1}(x)+L_{n-2}(x)\), as follows. For \(n\geq 1\) odd, define \(H_{n}(x)\) by \(L_{n}(x)=xH_{n}(x^{2})\). Then \(g_{n}^{\#}(x)=H_{n}(x-2)\).
_Proof of the Law of Quadratic Reciprocity_. Let \(p,q\) be distinct odd primes.
Since \(\operatorname{Res}(g_{p},g_{q})=1\) by Proposition 3.1, it follows from Proposition 2.3 that \(\operatorname{Rec}(g_{p},g_{q})\in\{\pm 1\}\). We compute the following congruences modulo \(p\):
\[\operatorname{Rec}(g_{p},g_{q}) \equiv\operatorname{Rec}((x-1)^{p-1},g_{q})\qquad\text{(by (\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq
**Proposition 4.2**.: _If \(n\) is an odd positive integer, then \(\operatorname{Rec}(\Phi_{4},g_{n})\) is equal to 1 if \(n\) is 1 or 3 (mod 8) and \(-1\) if \(n\) is 5 or 7 (mod 8)._
Proof.: We have
\[\operatorname{Rec}(\Phi_{4},g_{n})=\operatorname{Res}(x,g_{n}^{\#})=g_{n}^{\# }(0),\]
so it suffices to evaluate \(g_{n}^{\#}(0)\).
This is a straightforward but tedious calculation given (2), which implies that
\[g_{n}^{\#}(0)=g_{n}(i)i^{-\frac{n-1}{2}}=\frac{i^{n}-1}{i-1}i^{-\frac{n-1}{2}},\]
where \(i^{2}=-1\in\mathbb{C}\).
Alternatively, recall from Remark 3.4 that for \(n\geq 5\) odd we have
\[g_{n}^{\#}(x)=xg_{n-2}^{\#}(x)-g_{n-4}^{\#}(x).\]
From this, a simple inductive argument shows that \(g_{n}^{\#}(0)=1\) if \(n\) is 1 or 3 (mod 8) and \(-1\) if \(n\) is 5 or 7 (mod 8).
Proof of Theorem 4.1.: Let \(p\) be an odd prime. We compute:
\[\operatorname{Rec}(\Phi_{4},g_{p}) \equiv\operatorname{Rec}(\Phi_{4},(x-1)^{p-1})\qquad\text{(by (\ref{eq:p-1}) and Proposition \ref{eq:p-1})}\] \[\equiv\operatorname{Res}(x,(x-2)^{\frac{p-1}{2}})\qquad\text{(by (\ref{eq:p-1}) and the definition of the reciprocal)}\] \[=(-2)^{\frac{p-1}{2}}\qquad\text{(by (RES1))}\] \[\equiv\left(\frac{-2}{p}\right)\quad\text{(mod $p$)}\qquad\text{(by Euler's criterion)}.\]
Since \(\operatorname{Rec}(\Phi_{4},g_{p})\) and \(\left(\frac{-2}{p}\right)\) both belong to \(\{\pm 1\}\), we have \(\operatorname{Rec}(\Phi_{4},g_{p})=\left(\frac{-2}{p}\right)\), which implies the desired result via Proposition 4.2.
## 5. Related work
The proof of Quadratic Reciprocity given here is closely related to several existing arguments. The earliest reference we're aware of for a proof of quadratic reciprocity based on resultants of cyclotomic polynomials is J.-Y. Merindol's paper [14], which was published in an obscure French higher education journal called L'Ouvert. A similar proof appears to have been independently discovered by Hambleton and Scharaschkin in [9]. We learned of the basic argument behind these papers from Antoine Chambert-Loir's blog post [4].
The main new ingredient in the present paper is a systematic use of Proposition 2.3 and the quantity we've dubbed the reciprocal. As far as we know, our arguments proving the supplemental law (Theorem 4.1) are also new.
Our treatment of resultants was inspired by a paper of Barnett [3]. Our proof of Proposition 3.1 makes use of the Euclidean algorithm and property (RES4) of resultants; this approach is also used, for example, in [7].
Although we have not seen Proposition 2.3 explicitly stated in a published paper, it is mentioned without proof in a Math Overflow post by Denis Serre [15]. The main ingredients in the proof of Proposition 2.3 are also contained in the proof of [13, Theorem 3.4].
The first published work we're aware of that computes the resultant of two cyclotomic polynomials is F. E. Diederichsen's paper [6]. Diederichsen's results
were extended, and his proofs simplified, in Apostol's paper [1]. Some other papers computing resultants of Fibonacci-Lucas type polynomials include [14, 9, 13, 7].
As noted in [9, Section 3], the key step underlying our proof of Quadratic Reciprocity, which is identifying the Legendre symbol with a resultant, is closely related to one of Eisenstein's classical proofs [11, Chapter 8.1]. There are also close connections to the more recent proof of Swan [16].
The proof given in this paper is also closely related to the proof in the author's blog post [2].
A resultant-based approach to quadratic reciprocity in the function field case is given in [5]. See Section 3.4 of _loc. cit._ for remarks about other proofs of the Law of Quadratic Reciprocity which ultimately boil down (either explicitly or in disguise) to property (RES2) of resultants.
## Appendix A Resultants
Let \(R\) be a ring and let \(f,g\in R[x]\) be monic polynomials. Inspired by an observation of Barnett [3], we define the _resultant_ of \(f\) and \(g\) to be
\[\operatorname{Res}(f,g):=\det\left(g(C_{f})\right)\in R,\]
where \(C_{f}\) is the _companion matrix_ of \(f(x)=a_{0}+a_{1}x+\cdots+a_{m-1}x^{m-1}+x^{m}\):
\[C_{f}:=\begin{pmatrix}0&0&0&\cdots&0&-a_{0}\\ 1&0&0&\cdots&0&-a_{1}\\ 0&1&0&\cdots&0&-a_{2}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&-a_{m-1}\end{pmatrix}\]
We assume for the rest of this section that \(R\) is an integral domain with fraction field \(K\). We will use the following two well-known facts from linear algebra:
1. The characteristic polynomial of \(C_{f}\) over \(K\) is \(f\) (cf. [12, Lemma 8.4]).
2. If an \(m\times m\) matrix \(A\) over \(K\) has characteristic polynomial \(f\), and if \(f\) factors over some extension field \(L\) of \(K\) as \(f(x)=(x-\lambda_{1})\cdots(x-\lambda_{m})\), then the characteristic polynomial of \(g(A)\) is \((x-g(\lambda_{1}))\cdots(x-g(\lambda_{m}))\). (**Proof:** By [12, Theorem 14.17], \(A\) is similar to an upper triangular matrix \(B\) with \(\lambda_{1},\ldots,\lambda_{m}\) on the diagonal. The diagonal entries of \(g(B)\) are \(g(\lambda_{1}),\ldots,g(\lambda_{m})\), and by [12, Theorem 8.12] the characteristic polynomial of \(p(A)\) is equal to that of \(p(B)\).)
Assume \(f\) splits into linear factors over \(R\) as \(f(x)=(x-\alpha_{1})\cdots(x-\alpha_{m})\). Then by (LA1) and (LA2), the characteristic polynomial of \(g(C_{f})\) over \(K\) is
\[(x-g(\alpha_{1}))\cdots(x-g(\alpha_{m})).\]
It follows that the determinant of \(g(C_{f})\) is \(\prod_{i=1}^{m}g(\alpha_{i})\), which proves (RES1). Moreover, if \(g(x)=(x-\beta_{1})\cdots(x-\beta_{n})\) with all \(\beta_{j}\in R\) then
\[\operatorname{Res}(f,g)=\prod_{i,j}(\alpha_{i}-\beta_{j})\in R. \tag{12}\]
If we view the coefficients of \(f\) and \(g\) as indeterminates, the expression \(\det\left(g(C_{f})\right)\) is a polynomial of degree \(m+n\) with integer coefficients in these variables. In other words, there is a multivariate polynomial \(S_{m,n}\in\mathbb{Z}[x_{0},x_{1},\ldots,x_{m-1},y_{0},y_{1},\ldots,y_{n-1}]\)
such that for every ring \(R\) and every pair of monic polynomials \(f(x)=a_{0}+a_{1}x+\cdots+a_{m-1}x^{m-1}+x^{m}\) and \(g(x)=b_{0}+b_{1}x+\cdots+b_{n-1}x^{n-1}+x^{n}\) in \(R[x]\),
\[\operatorname{Res}(f,g)=S_{m,n}(a_{0},a_{1},\ldots,a_{m-1},b_{0},b_{1},\ldots,b _{n-1}).\]
The 'functoriality' relation (RES3) follows easily from this observation.
By (RES3) and the fact that \(R\) is an integral domain, we may replace \(R\) by a splitting field \(L\) for \(fg\) over the fraction field \(K\) of \(R\). The identity (RES2) then follows immediately from (12).
In the same way, we can reduce the proof of (RES4) to the case where \(f(x)=(x-\alpha_{1})\cdots(x-\alpha_{m})\) with all \(\alpha_{i}\in R\). Using (RES1), we compute:
\[\operatorname{Res}(f(x),r(x)) =\prod_{i=1}^{m}r(\alpha_{i})\] \[=\prod_{i=1}^{m}\left(g(\alpha_{i})-f(\alpha_{i})q(\alpha_{i})\right)\] \[=\prod_{i=1}^{m}g(\alpha_{i})\] \[=\operatorname{Res}(f(x),g(x)),\]
which proves (RES4).
## Appendix B The trace polynomial
Our primary goal in this Appendix is to prove:
**Proposition B.1**.: _Suppose \(R\) is a ring and \(g\in R[x]\) is a reciprocal polynomial of even degree \(2m\). Then there is a unique polynomial \(h(x)\in R[x]\) of degree \(m\) such that \(g(x)=x^{m}h(x+\frac{1}{x})\)._
The following proof was suggested by Darij Grinberg.
Proof.: We first prove the existence of \(h(x)\). This will be done by induction on \(m\). The base case \(m=0\) is clear. For the induction step, let \(g(x)=a_{0}+a_{1}x+...+a_{2m}x^{2m}\) be a reciprocal polynomial of degree \(2m\); in particular, \(a_{2m}=a_{0}\). Thus \(\tilde{g}(x):=\left(g(x)-a_{0}(1+x^{2})^{m}\right)/x\) is a reciprocal polynomial of degree \(2(m-1)\). By the inductive hypothesis, \(\tilde{g}(x)=x^{m-1}\tilde{h}(x+1/x)\) for some polynomial \(\tilde{h}(x)\) of degree \(m-1\). Setting \(h(x)=a_{0}x^{m}+\tilde{h}(x)\) yields \(g(x)=x^{m}h(x+1/x)\), as desired. This establishes the existence of \(h\).
The uniqueness of \(h\) follows by reversing the existence argument. More formally, we again proceed by induction on \(m\). The base case \(m=0\) is obvious. For the induction step, note that the equation \(g(x)=x^{m}h(x+1/x)\) implies that the \(x^{m}\)-coefficient of \(h(x)\) must be \(a_{0}\). Let \(\tilde{h}(x)=h(x)-a_{0}x^{m}\), which has degree \(m-1\), and let \(\tilde{g}(x)=\left(g(x)-a_{0}(1+x^{2})^{m}\right)/x\), which is reciprocal of degree \(2(m-1)\). Then \(\tilde{g}(x)=x^{m-1}\tilde{h}(x+1/x)\), and by the inductive hypothesis \(\tilde{h}(x)\) is uniquely determined by \(\tilde{g}(x)\). It follows that \(h\) is uniquely determined by \(g\).
Following the terminology of [8, SS2.1], we define the _trace polynomial_\(g^{\#}\) of \(g\) to be the polynomial \(h\) appearing in Proposition B.1.
_Remark B.2_.: The following alternative proof of the existence portion of Proposition B.1 was suggested by Franz Lemmermeyer, and provides an explicit recursion which will be useful in the next remark.
Write \(g(x)=a_{0}+a_{1}x+\cdots+a_{2m}x^{2m}\) with \(a_{i}=a_{2m-i}\) for all \(0\leq i\leq m\) and \(h(x)=b_{0}+b_{1}x+\cdots+b_{m}x^{m}\). We wish to prove that we can uniquely solve for the coefficients of \(h\) in terms of the coefficients of \(g\).
In the Laurent polynomial ring \(R[x,\frac{1}{x}]\), we have the identity
\[x^{-m}g(x)=a_{0}(x^{m}+x^{-m})+a_{1}(x^{m-1}+x^{-(m-1)})+\cdots+a_{m-1}(x+x^{- 1})+a_{m},\]
so it suffices to prove the result for the special Laurent polynomials \(f_{n}(x):=x^{n}+x^{-n}\) for all \(n\geq 0\). In other words, we want to prove that for each \(n\geq 0\), there is a polynomial \(h_{n}(x)\in R[x]\) of degree \(n\) such that \(f_{n}(x)=h_{n}(x+x^{-1})\).
We prove existence of the polynomials \(h_{n}(x)\) by induction on \(n\). The result is trivial for \(n=0,1\), so we may assume that \(n\geq 2\) and that the result is true for polynomials of degree at most \(n-1\). A simple calculation gives
\[f_{n}(x)=(x+x^{-1})f_{n-1}(x)-f_{n-2}.\]
Therefore, if we set \(h_{0}(x)=2\), \(h_{1}(x)=x\), and
\[h_{n}(x)=xh_{n-1}(x)-h_{n-2}(x), \tag{13}\]
we will have the desired identity \(f_{n}(x)=h_{n}(x+x^{-1})\).
_Remark B.3_.: For \(n\geq 0\), define \(g_{2n+1}(x)=\sum_{k=0}^{2n}x^{k}\) as in (4).
Then with \(f_{k}(x)\) and \(h_{k}(x)\) as in Remark B.2, for \(n\geq 1\) we have \(x^{-n}g_{2n+1}=1+\sum_{k=1}^{n}f_{k}(x)\), and thus \(g_{2n+1}^{\#}(x)=1+\sum_{k=1}^{n}h_{k}(x)\).
Since \(g_{1}(x)=1\) and \(g_{3}(x)=1+x+x^{2}\), we have \(g_{1}^{\#}(x)=1\) and \(g_{3}^{\#}(x)=x+1\). Moreover, since \(h_{k}(x)=xh_{k-1}(x)-h_{k-2}(x)\) for \(k\geq 2\), it follows from (13) that for \(n\geq 2\),
\[xg_{2n-1}^{\#}(x)-g_{2n-3}^{\#}(x)=x+\sum_{k=1}^{n-1}\left(xh_{k}(x)-h_{k-1}( x)\right)-1+h_{0}=1+x+\sum_{k=2}^{n}h_{k}(x)=g_{2n+1}^{\#}.\]
In other words, for all odd integers \(n\geq 5\) we have
\[g_{n}^{\#}(x)=xg_{n-2}^{\#}(x)-g_{n-4}^{\#}(x). \tag{14}\]
Proof of Lemma 2.1.: To see that \(g(x)=\prod_{i=1}^{m}(x-\alpha_{i})(x-\alpha_{i}^{-1})\) is reciprocal, we compute:
\[x^{2m}g(\frac{1}{x}) =x^{2m}\prod(\frac{1}{x}-\alpha_{i})(\frac{1}{x}-\frac{1}{\alpha_ {i}})\] \[=\prod(1-\alpha_{i}x)(1-\frac{1}{\alpha_{i}}x)\] \[=(-1)^{m}\prod\alpha_{i}(x-\frac{1}{\alpha_{i}})\cdot(-1)^{m} \prod\frac{1}{\alpha_{i}}(x-\alpha_{i})\] \[=\prod(x-\frac{1}{\alpha_{i}})(x-\alpha_{i})\] \[=g(x).\]
To prove (3), the case \(m=1\) can be handled by a simple computation: setting \(\alpha=\alpha_{1}\) and \(a=\alpha+\alpha^{-1}\), we have \(g(x)=(x-\alpha)(x-\alpha^{-1})=x^{2}-ax+1=x(x+\frac{1}{x}-a)\)
and thus \(g^{\#}(x)=x-a\). The general case follows immediately from the special case \(m=1\): if \(a_{j}=\alpha_{j}+\alpha_{j}^{-1}\) then \(g^{\#}(x)=\prod_{j=1}^{m}(x-a_{j})\).
As mentioned in the text, we define the _reciprocal_\(\operatorname{Rec}(f,g)\) of two reciprocal polynomials of even degree to be \(\operatorname{Res}(f^{\#},g^{\#})\).
Proof of Proposition 2.4.: It suffices, by (RES2), to prove the following statement: if \(g_{1},g_{2},h\in\mathbb{Z}[x]\) are reciprocal of even degree and \(n\) is a positive integer such that \(g_{1}\equiv g_{2}\pmod{n}\), then \(\operatorname{Rec}(g_{1},h)\equiv\operatorname{Rec}(g_{2},h)\pmod{n}\).
By Proposition B.1, we have \(g_{1}^{\#}(x)\equiv g_{2}^{\#}(x)\pmod{n}\). Applying (RES3) to the natural ring homomorphism \(\phi:\mathbb{Z}\to\mathbb{Z}/n\mathbb{Z}\) shows that \(\operatorname{Res}(g_{1}^{\#},h^{\#})\equiv\operatorname{Res}(g_{2}^{\#},h^{ \#})\pmod{n}\) as desired.
|
2302.10264 | Integrated waveguide-based acousto-optic modulation with near-unity
conversion efficiency | Acousto-optic modulation in piezoelectric materials offers the efficient
method to bridge electrical and optical signals. It is widely used to control
optical frequencies and intensities in modern optical systems including
Q-switch lasers, ion traps, and optical tweezers. It is also critical for
emerging applications such as quantum photonics and non-reciprocal optics.
Acousto-optic devices have recently been demonstrated with promising
performance on integrated platforms. However, the conversion efficiency of
optical signals remains low in these integrated devices. This is attributed to
the significant challenge in realizing large mode overlap, long interaction
length, and high power robustness at the same time. Here, we develop
acousto-optic devices with gallium nitride on sapphire substrate. The unique
capability to confine both optical and acoustic fields in sub-wavelength scales
without suspended structures allows efficient acousto-optic interactions over
long distances under high driving power. This leads to the near-unity optical
conversion efficiency with integrated acousto-optic modulators. With the
unidirectional phase matching, we also demonstrate the non-reciprocal
propagation of optical fields with isolation ratio above 10 dB. This work
provides a robust and efficient acousto-optic platform, opening new
opportunities for optical signal processing, quantum transduction, and
non-magnetic optical isolation. | Liang Zhang, Chaohan Cui, Pao-Kang Chen, Linran Fan | 2023-02-20T19:54:14Z | http://arxiv.org/abs/2302.10264v3 | # Integrated waveguide-based acousto-optic modulation with near-unity conversion efficiency
###### Abstract
Acousto-optic modulation in piezoelectric materials offers the efficient method to bridge electrical and optical signals. It is widely used to control optical frequencies and intensities in modern optical systems including \(Q\)-switch lasers, ion traps, and optical tweezers. It is also critical for emerging applications such as quantum photonics and non-reciprocal optics. Acousto-optic devices have recently been demonstrated with promising performance on integrated platforms. However, the conversion efficiency of optical signals remains low in these integrated devices. This is attributed to the significant challenge in realizing large mode overlap, long interaction length, and high power robustness at the same time. Here, we develop acousto-optic devices with gallium nitride on sapphire substrate. The unique capability to confine both optical and acoustic fields in sub-wavelength scales without suspended structures allows efficient acousto-optic interactions over long distances under high driving power. This leads to the near-unity optical conversion efficiency with integrated acousto-optic modulators. With the unidirectional phase matching, we also demonstrate the non-reciprocal propagation of optical fields with isolation ratio above 10 dB. This work provides a robust and efficient acousto-optic platform, opening new opportunities for optical signal processing, quantum transduction, and non-magnetic optical isolation.
## 1 Introduction
Large-scale integration and device minimization are powerful methods to improve system functionality and efficiency. This is witnessed by the recent development of nonlinear optics in integrated photonic circuits [1, 2, 3, 4]. Compared with optical fields, acoustic fields have much lower propagation speed and stronger coupling with electric fields, thus providing complementary benefits for signal processing [5]. Therefore, tailored interactions between optical and acoustic fields in hybrid photonic-phononic circuits attract significant attentions recently with potential applications ranging from quantum transduction [6, 7, 8] and comb generation [9, 10] to photonic machine learning [11].
Acousto-optic modulation (AOM) plays a critical role in such hybrid circuits for signal conversion among different degrees of freedom. Intensive efforts have been devoted to the development of integrated acousto-optic modulators based on different piezoelectric materials including lithium niobate [9, 12, 13], aluminum nitride [14, 15, 16], gallium arsenide [17], and indium phosphire [18]. Efficient AOM requires the simultaneous confinement of optical and acoustic fields in sub-wavelength structures [13]. While optical confinement can be readily realized using materials with higher refractive index for waveguides, acoustic confinement is challenging as integrated photonic materials typically have acoustic velocities higher than substrates [19]. The lack of acoustic confinement leads to small coupling strengths between optical and acoustic fields due to the small mode overlapping. While the simultaneous confinement of optical and acoustic fields can be realized in suspended structures, the interaction length and power handling capability are limited due to the mechanical fragility [13]. The complex fabrication process also causes high acoustic propagation losses, which further deceases the interaction length. As a
result, high optical conversion efficiency is still out of reach for integrated acousto-optic devices.
In this article, we overcome these challenges to realize near-unity conversion efficiencies with integrated acousto-optic modulators. This is achieved by developing the gallium nitride (GaN) on sapphire platform. The refractive index of GaN is significantly larger than sapphire [20]. More importantly, velocities of both transverse and longitudinal acoustic waves in GaN are remarkably lower than sapphire [19]. Therefore, we can realize sub-wavelength confinement of both optical and acoustic fields in GaN waveguides on sapphire substrates without suspended structures. Strong acousto-optic coupling can be realized over long interaction lengths under high driving power with minimal propagation losses, leading to the near-unity optical conversion efficiency.
## 2 Results
The integrated acousto-optic modulator is schematically depicted in Fig. 1a. Acoustic fields are launched by interdigital transducers (IDT) through the piezoelectric effect, and focused into the waveguide. Optical fields are launched through a separate waveguide, and transferred into the same waveguide with acoustic fields through a directional coupler. For co-propagating acoustic and optical fields, AOM can occur as both Stokes and anti-Stokes processes. Input optical fields at angular frequency \(\omega_{0}\) are scattered by acoustic fields at angular frequency \(\Omega\) to generate the output optical field at \(\omega_{1}=\omega_{0}-\Omega\) and \(\omega_{1}=\omega_{0}+\Omega\) in the Stokes and anti-Stokes processes respectively. The corresponding phase matching conditions are \(\beta_{0}-q=\beta_{1}\) and \(\beta_{0}+q=\beta_{1}\) with \(q\), \(\beta_{0}\), and \(\beta_{1}\) the acoustic, optical input, and optical output wavevectors respectively (Fig. 1b). We can switch between Stokes and anti-Stokes processes by interchanging the optical input and output modes [15]. Here, we use the fundamental and first-order transverse-electric (TE\({}_{0}\) and
Figure 1: GaN-on-sapphire platform for acousto-optic modulation. **a.** Schematic of the integrated acouto-optic modulator. Waveguides are aligned along the GaN [11\(\bar{2}\)0] direction. **b.** Phase matching condition for co-propagating acoustic field (green), input TE\({}_{0}\) (blue), and output TE\({}_{1}\) (red) optical modes. **c.** Simulated electric field profiles of input TE\({}_{0}\) and output TE\({}_{1}\) optical modes along the \(x\) direction. **d.** Calculated acousto-optic coupling coefficient between TE\({}_{0}\) and TE\({}_{1}\) optical modes as a function of waveguide width and acoustic frequency. **e.** Simulated displacement profile of the fundamental Rayleigh (R0) acoustic mode.
TE\({}_{1}\)) modes for the input and output optical fields respectively (Fig. 1c). As the TE\({}_{0}\) mode has a larger wave-vector (\(\beta_{0}>\beta_{1}\)), the Stokes process dominates the acousto-optic modulation in our device. The conversion between TE\({}_{0}\) and TE\({}_{1}\) optical modes can be mediated by different acoustic modes. We perform numerical simulations to calculate the acousto-optic coupling coefficient [21]. Multiple acoustic modes can mediate efficient acousto-optic coupling between TE\({}_{0}\) and TE\({}_{1}\) modes as shown in Fig. 1d. We choose the fundamental Rayleigh (R0) acoustic mode, which shows the largest coupling strength. Moreover, with the significant out-of-plane displacement, the fundamental Rayleigh mode can be efficiently excited by IDTs on GaN (0001) plane (Fig. 1e) [22]. Due to the strong sub-wavelength confinement, the acousto-optic modulation process shows significant geometric dispersion. Therefore, the acoustic frequency can be tuned by the waveguide width. The waveguide width of our device is designed to be 1 \(\mu\)m. Therefore, the phase matching condition can be satisfied near the acoustic frequency \(\Omega=2\pi\times 1\) GHz.
The device is fabricated with 1-\(\mu\)m thick GaN template wafers grown on sapphire substrates using metal-organic chemical vapor deposition (Fig. 2a). Acousto-optic devices are patterned by the electron-beam lithography (EBL) using FOX-16 resist. After developing in TMAH, we etch the GaN layer with reactive ion etching using Cl\({}_{2}\)/BC\({}_{3}\)/Ar gases. IDTs are defined with EBL in ploymethyl methacrylate (PMMA) resist, followed by Ti/Al/Au deposition and lift-off in acetone. The total waveguide length is \(L=3\) mm to ensure efficient acousto-optic interaction. IDTs consist of a 5-nm titanium bottom layer, 100-nm aluminium middle layer, and 10-nm gold top layer. The IDT period is designed as 4.9 \(\mu\)m to match the acoustic frequency around \(\Omega=2\pi\times 1\) GHz (Supplementary Section 1). The IDT aperture and electrode width are 100 \(\mu\)m and 1.22 \(\mu\)m respectively (Fig. 2b and c). The directional coupler consists of two parallel 800 nm wide waveguides with 400 nm gap (Fig. 2d). As acoustic and optical fields have different coupling strengths between the two waveguides, the same directional coupler structure can be designed to show different functions for acoustic and optical fields. Here, we set the directional coupler length at 250 \(\mu\)m, in which case acoustic fields remain in the same waveguide and optical fields are completely transferred into the other waveguide (Supplementary Section 2).
We first use the vector network analyzer (VNA) to measure the acoustic transmission between
Figure 2: Integrated acousto-optic modulators. **a.** Optical image of the fabricated device. **b-d.** Scanning electron microscopy (SEM) images of the IDT (**b**), IDT electrodes (**c**), and directional coupler (**d**). **e.** Acoustic transmission of the directional coupler in the bar (blue) and cross (red) waveguides. **f.** Optical transmission of the directional coupler in the bar (blue) and cross (red) waveguides.
IDT pairs directly connected by straight GaN waveguides with different lengths. The dependence of the acoustic transmission on the device length allows us to estimate the propagation loss of the fundamental Rayleigh mode, which is \(\alpha_{b}=0.85\) dB/mm (Supplementary Section 1). By extrapolating the acoustic transmission to zero device length, we can obtain the IDT power efficiency \(\eta^{2}=-44\) dB. With 50 periods, IDTs show a center frequency of 0.99 GHz and 3-dB bandwidth of 12 MHz. To verify the directional coupler performance, we fabricated test devices with four input/output ports connected all with IDTs for acoustic field characterization or all with optical couplers for optical field characterization (Fig. 2e and f). We only observe strong acoustic signal in the bar configuration of the directional coupler (Fig. 2e). The oscillation in the acoustic transmission spectrum is caused by the reflection between input and output IDTs. Extinction ratio above 10 dB between the bar and cross waveguides can be achieved. For optical fields, we only observe strong output in the cross configuration with average extinction ratio above 14 dB (Fig. 2f). This shows that optical fields and acoustic fields can be efficiently combined by the directional coupler.
The modulation response is characterized by the measurement setup illustrated in Fig. 3a. CW light generated by a tunable semiconductor laser is amplified by the erbium-doped fiber amplifier (EDFA) and delivered to the device with a lensed fiber. RF signals from the vector network analyzer (VNA) is loaded into IDTs after amplification to excite acoustic fields. The optical transmission is monitored by a slow high-sensitivity photodetector (DC PD). The output optical field is also measured by a fast photodetector (AC PD), whose electric signal is sent back to VNA. The representative RF modulation spectrum is shown in Fig. 3b. The input optical wavelength is set at \(\lambda=\)1551.7 nm. Strong modulation can be clearly observed. The acoustic modulation 3-dB bandwidth is measured as \(\Delta=0.57\) MHz. With the modulation bandwidth and acoustic propagation loss, we can calculate the acoustic group velocity \(v_{\text{g}}=2\pi\Delta/\alpha_{b}\approx 4400\) m/s, which agrees with the simulated value (see Supplementary Section 1) [14]. We further measure the RF modulation response at different optical wavelengths (Fig. 3c). The acoustic frequency with maximum modulation efficiency shifts with the optical wavelength, showing the change of the phase-matching condition due to dispersion. Figure 3d shows the peak modulation response at different optical wavelengths with the fixed acoustic frequency \(\Omega/2\pi\)= 0.998 GHz. The optical bandwidth of the modulation process is measured as 3.5 nm, which agrees the theoretical value \(\delta\lambda=2.78\lambda^{2}/(\pi L\delta n_{\text{g}})\approx 3.2\) nm with \(\delta n_{\text{g}}=0.22\) the group refractive index difference between TE\({}_{0}\) and TE\({}_{1}\) modes [15].
To measure the optical conversion efficiency, we use a free-space cavity with 1-MHz linewidth as the optical filter (Fig. 3a). A piezo-transducer is attached to the free-space cavity mirror to modify the resonant wavelengths. The driving voltage of the piezo-transducer is continuously swept. Therefore, we can separate optical powers from the residual input and converted output optical fields in the time domain (Fig. 3e). Without acoustic fields, we only observe the TE\({}_{0}\) input optical field. With acoustic fields, the amplitude of the TE\({}_{0}\) input optical field decreases. More importantly, we can observe the emergence of the converted TE\({}_{1}\) output optical field. The converted TE\({}_{1}\) output optical field only shows up on one side of the residual TE\({}_{0}\) input optical field, proving the single-sideband nature of acousto-optic modulation. By increasing the RF drive amplitude, we can observe the sinusoidal oscillation of the optical power between TE\({}_{0}\) and TE\({}_{1}\) modes (Fig. 3f). With driving power \(P=560\) mW, the TE\({}_{0}\) power is close to zero with the TE\({}_{1}\) mode reaching the maximum amplitude. Therefore, the on-chip conversion efficiency (\(\theta\)) close to unity can be achieved at driving power \(P=560\) mW (Fig. 3g). This further allows us to estimate the acousto-optic coupling coefficient (Supplementary Section 3) [13]
\[\frac{g}{\sqrt{\hbar\Omega}}=\frac{\pi\alpha_{b}}{4\eta\sqrt{P}(1-e^{-\alpha_ {b}L/2})}\approx 255\text{ mm}^{-1}\text{W}^{-1/2} \tag{1}\]
which is more than two orders of magnitude higher than acousto-optic devices without simultaneous sub-wavelength acoustic and optical confinement (Table 1).
Figure 3: Acousto-optic modulation performance. **a.** Measurement setup. EDFA: Erbium-Doped Fiber Amplifier. FPC: Fiber Polarization Controller. VNA: Vector Network Analyszer. RF AMP: RF Amplifier. HV DC AMP: High voltage DC Amplifier. AWG: Arbitrary Waveform Generator. OSC: Oscilloscope. PD: Photo-Detector. **b.** Acoustic modulation spectrum with optical input wavelength 1551.7 nm. **c.** Acoustic modulation spectrum with different optical input wavelengths. **d.** Optical modulation spectrum with acoustic frequency \(\Omega/2\pi=0.998\) GHz. **e.** Optical signals filtered by the free-space cavity. **f.** Power of TE\({}_{0}\) and TE\({}_{1}\) modes with different RF driving amplitudes. **g.** Optical conversion efficiency with different RF driving amplitudes.
Due to difficulties in integrating high-quality magneto-optical materials, it is challenging to build on-chip optical isolators and circulators. With the direction-dependent phase-matching condition, acousto-optic modulation provides the non-magnetic method to realize on-chip optical isolation [13, 15, 23]. We verify the non-reciprocal behavior of our acousto-optic device by measuring the optical transmission in both forward and backward directions (Fig. 4a and b). The RF driving signal has fixed frequency \(\Omega/2\pi=0.998\) GHz and power \(P=560\) mW. In the forward direction, phase-matching condition is satisfied for the Stokes process, where the input TE\({}_{0}\) mode at frequency \(\omega_{0}\) is converted into the output TE\({}_{1}\) mode at frequency \(\omega_{0}-\Omega\) (Fig. 4a) [13]. No anti-Stokes signal at frequency \(\omega_{0}+\Omega\) is observed. In the backward direction, no mode conversion should happen ideally and all optical power should remain in the TE\({}_{0}\) mode at frequency \(\omega_{0}\). However, with the finite interaction length, we observe the anti-Stokes process in the backward direction (Fig. 4b). The anti-Stokes process in the backward direction has much lower efficiency due to phase mismatch. Therefore, the input TE\({}_{0}\) mode at frequency \(\omega_{0}\) is only partially converted into the output TE\({}_{1}\) mode at frequency \(\omega_{0}+\Omega\) even at the maximum modulation optical wavelength \(\lambda=1551.7\) nm (Fig. 4c). If we compare the TE\({}_{0}\) power in forward and backward directions, we can clearly see that isolation ratio above 10 dB has been achieved (Fig. 4d).
## 3 Discussion
To benchmark the performance of our device, a summary of recent works on integrated acousto-optic modulators is presented in Table 1. The GaN-on-sapphire platform in this work is the only one that is capable of confining both optical and acoustic fields in sub-wavelength scales without using suspended structures. Therefore, the interaction length has been significantly improved while maintaining the high acousto-optic coupling coefficient. In addition, the GaN-on-sapphire platform also has excellent power-handling capability. No performance degradation is observed under high driving powers (Supplementary Section 4). Moreover, the GaN-on-sapphire platform
Figure 4: Nonreciprocal propagation. **a.** Power spectrum of the input TE\({}_{0}\) mode and output Stokes and anti-Stokes TE\({}_{1}\) modes in the forward direction. **b.** Power spectrum of the input TE\({}_{0}\) mode and output Stokes and anti-Stokes TE\({}_{1}\) modes in the backward direction. **c.** Optical signals filtered by the free-space cavity in forward and backward directions. **d.** Power ratio between forward and back directions.
shows remarkably lower acoustic propagation loss. The excellent performance in all critical merits leads to the demonstration of the near-unity optical conversion efficiency. The robustness of unsuspended structures also allows the incorporation of integrated acousto-optic modulators into large-scale photonic-phononic circuits with complex functionalities. The elimination of supporting tethers and membranes to anchor suspended structures further enables the flexible design of circuit patterns.
The acoustic driving efficiency can be further improved. Propagation loss below 0.05 dB/mm has been demonstrated with GaN-on-sapphire acoustic waveguides [19]. Therefore, the acousto-optic interaction length can be extended by more than 10 times. This will lead to the reduction of the driving power by more than two orders of magnitude, below 10 mW with the current IDT design. The IDT efficiency can be further improved using uni-directional IDT designs with more periods and better electrical impedance matching [24, 25]. The heterogeneous structure of aluminum nitride (AlN) and GaN, which is widely used for power electronics [26], can further increase the driving efficiency by leveraging the larger piezoelectric coefficient in AlN [27]. As a result, we expect the driving power to achieve complete optical conversion can be decreased to the micro-watt level.
## 4 Conclusion
In conclusion, we have developed the GaN-on-sapphire platform for acousto-optic devices. We achieve high acousto-optic coupling strength, long interaction length, low-loss acoustic propagation, and large power handling capability at the same time. This leads to the first demonstration of near-unity conversion efficiency with integrated acousto-optic modulators. This work will enable the exploration of hybrid photonic-phoninc circuits at large scale for advanced signal processing, with important applications in microwave photonics and quantum transduction.
**Funding.** This material is based upon work supported by the Office of the Under Secretary of Defense for Research and Engineering under DEPSCoR program award number FA9550-21-1-0225 managed by Army Research Office, and NSF Grant No. ITE-2134830.
**Disclosures.** The Authors declare no competing financial or non-financial interests
**Data availability.** The data that support the findings of this study are available from the corresponding author upon reasonable request
**Supplemental document.** See Supplement 1 for supporting content.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Work & Year & Platform & \(\Omega/2\pi\) & \(\alpha_{b}\) & \(L\) & \(g/\sqrt{\hbar\Omega}\) & \(\theta\) & Suspended & Optical & Acoustic \\ & & & (GHz) & (dB/mm) & (mm) & (mm\({}^{-1}\)W\({}^{-1/2}\)) & (\%) & & confined & confined \\ \hline \hline Liu [14] & 2019 & AlN & 16.4 & — & 0.5 & 0.041 & 2.5e-4 & Y & Y & Y \\ \hline Kittlaus [15] & 2020 & Si/AlN & 3.11 & — & 0.96 & 4.5 & 13.5 & N & Y & N \\ \hline Christopher [12] & 2020 & LiNbO\({}_{3}\) & 0.67 & — & 0.6 & 1.66 & 0.9 & N & Y & N \\ \hline Shao [9] & 2020 & LiNbO\({}_{3}\) & 2.89 & — & 0.1 & 0.004 & 3.5 & N & N & N \\ \hline Ahmed [28] & 2021 & LiNbO\({}_{3}\) & 1.16 & 4 & 0.45 & 0.417 & 1 & Y & Y & N \\ \hline Christopher [13] & 2021 & LiNbO\({}_{3}\) & 0.44 & 11.7 & 0.25 & 377 & 18 & Y & Y & Y \\ \hline Wan [29] & 2022 & LiNbO\({}_{3}\) & 0.84 & — & 0.12 & 0.035 & 3.2e-3 & N & Y & N \\ \hline This work & 2023 & GaN & 0.99 & 0.85 & 3 & 255 & \(\sim\)100 & N & Y & Y \\ \hline \hline \end{tabular}
\end{table}
Table 1: Integrated acousto-optic modulators |
2303.12982 | Fault Prognosis of Turbofan Engines: Eventual Failure Prediction and
Remaining Useful Life Estimation | In the era of industrial big data, prognostics and health management is
essential to improve the prediction of future failures to minimize inventory,
maintenance, and human costs. Used for the 2021 PHM Data Challenge, the new
Commercial Modular Aero-Propulsion System Simulation dataset from NASA is an
open-source benchmark containing simulated turbofan engine units flown under
realistic flight conditions. Deep learning approaches implemented previously
for this application attempt to predict the remaining useful life of the engine
units, but have not utilized labeled failure mode information, impeding
practical usage and explainability. To address these limitations, a new
prognostics approach is formulated with a customized loss function to
simultaneously predict the current health state, the eventual failing
component(s), and the remaining useful life. The proposed method incorporates
principal component analysis to orthogonalize statistical time-domain features,
which are inputs into supervised regressors such as random forests, extreme
random forests, XGBoost, and artificial neural networks. The highest performing
algorithm, ANN-Flux, achieves AUROC and AUPR scores exceeding 0.95 for each
classification. In addition, ANN-Flux reduces the remaining useful life RMSE by
38% for the same test split of the dataset compared to past work, with
significantly less computational cost. | Joseph Cohen, Xun Huan, Jun Ni | 2023-03-23T01:19:41Z | http://arxiv.org/abs/2303.12982v1 | Fault Prognosis of Turbofan Engines: Eventual Failure Prediction and Remaining Useful Life Estimation
###### Abstract
In the era of industrial big data, prognostics and health management is essential to improve the prediction of future failures to minimize inventory, maintenance, and human costs. Used for the 2021 PHM Data Challenge, the new Commercial Modular Aero-Propulsion System Simulation dataset from NASA is an open-source benchmark containing simulated turbofan engine units flown under realistic flight conditions. Deep learning approaches implemented previously for this application attempt to predict the remaining useful life of the engine units, but have not utilized labeled failure mode information, impeding practical usage and explainability. To address these limitations, a new prognostics approach is formulated with a customized loss function to simultaneously predict the current health state, the eventual failing component(s), and the remaining useful life. The proposed method incorporates principal component analysis to orthogonalize statistical time-domain features, which are inputs into supervised regressors such as random forests, extreme random forests, XGBoost, and artificial neural networks. The highest performing algorithm, ANN-Flux, achieves AUROC and AUPR scores exceeding 0.95 for each classification. In addition, ANN-Flux reduces the remaining useful life RMSE by 38% for the same test split of the dataset compared to past work, with significantly less computational cost.
## 1 Introduction
The field of prognostics and health management (PHM) has attracted recent research attention for large-scale, high-dimensional, and dynamic engineering systems. Typically performed on the component level, the goal of intelligent prognostic approaches is to predict the progression of degradation in advance to facilitate swift and responsible decision-making before catastrophic failure (Lee et al., 2014; Tsui et al., 2015). Typical PHM applications include data-driven fault diagnosis and prognosis of bearing failures (Shao et al., 2018) and gearbox failures (Li et al., 2016) utilizing vibration, current, and/or acoustic emission signals. As described by Liao and Kottig in (Liao and Kottig, 2014), PHM approaches can be separated into a few categories: physics-based, expert knowledge-based, or data-driven, with significant potential for hybridization. PHM is essential for reliable operation of safety-critical systems such as nuclear power plants, which have devastating consequences should catastrophic failures occur and are often difficult to predict due to the lack of historical, labeled failure data (Coble et al., 2015).
In light of the need for active PHM research, the PHM Society hosts annual data challenges open to the public to enable benchmarking for relevant industrial applications. This paper will focus on the 2021 PHM Data Challenge, which centered on accurately estimating the remaining useful life (RUL) for a small fleet of turbofan engines (M. A. Chao et al., 2021). The new Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) dataset, openly available in the NASA Prognostics Center of Excellence (PCoE) Data Set Repository (M. A. Chao et al., 2021), consists of synthetic run-to-failure trajectories operating under realistic flight conditions. The flight trajectories are generated using the C-MAPSS dynamical model from NASA Ames Research Center in collaboration with ETH Zurich and PARC (M. Chao et al., 2021). The turbofan engines experience 7 possible failure modes that involve efficiency and/or flow failures of 5 rotating subcomponents: fan, low-pressure compressor (LPC), high-pressure compressor (HPC), low-pressure turbine (LPT), and high-pressure turbine (HPT). A schematic representation of a turbofan engine unit is shown in Figure 1.
The winning approaches of the 2021 PHM Data Challenge utilized deep learning techniques for RUL estimates. In (Lovberg, 2021), Lovberg proposed a neural network-based normalization procedure to effectively denoise the sensor measurements with respect to the dynamic flight conditions. After normalization, the input trajectories are passed into a deep convolutional neural network (CNN) with dilated convolutions in an approach that allows for variable input sequence lengths. Lovberg relied on the provided health state label to sample degraded sequences in their RUL prediction model. DeVol _et al._ proposed in (DeVol et al., 2021) the integration of inception modules within a CNN architecture to handle the variable trajectory lengths. DeVol _et al._ reported RUL prediction results using NASA's training-testing split in the N-CMAPSS dataset, streamlining reproducibility and benchmarking for this challenge problem. Finally, Solis-Martin _et al._ approached this problem by stacking two CNNs in sequence: an encoder model first used for dimensionality reduction and feature extraction, and a secondary model used for RUL prediction (Solis-Martin et al., 2021). Solis-Martin _et al._ used Bayesian hyperparameter optimization to tune their models and noted that their prediction results could be improved by reducing overfitting.
These approaches benefit from the strengths of deep learning; namely, they allow for feature representations to be learned via CNN rather than manually defined. Clever variations of CNNs, such as Lovberg's approach implementing dilated convolutions (Lovberg, 2021) and DeVol _et al._'s usage of inception modules (DeVol et al., 2021), have allowed for accurate RUL estimation given varying flight trajectories and input lengths. However, there are key limitations to these approaches that inhibit their potential for practical use outside of the 2021 PHM Data Challenge. By focusing solely on RUL prediction, prior methods do not provide a holistic prognosis that predicts the eventual failing component(s). DeVol _et al._ mentioned that the resulting RUL predictions lack explainability, and that future work should utilize the labeled failure modes and components provided in the N-CMAPSS dataset to provide a more complete prognosis for turbofan engines (DeVol et al., 2021).
Our work significantly expands upon past efforts by broadening the research scope to encompass eventual failure prediction. Being able to accurately predict and isolate the reason for failure has important implications on maintenance decision-making, equipping operators with the capability to dispatch the appropriate experts and resources in a timely manner. Such predictive maintenance strategies can enable intelligent inventory optimization (Bousdekis et al., 2017) and reduce reactive maintenance costs, which may account for up to 40% of the overall budget in large industries (Bagavathiappan et al., 2013). To maximize applicability for a real-world scenario, we aim to simultaneously learn to predict three meaningful indicators: 1) the current health state; 2) the eventual failing component(s); and 3) the RUL until catastrophic failure. To the authors' knowledge, this is the first attempt at a unified model to effectively accomplish fault detection, isolation, and RUL estimation for the N-CMAPSS benchmark dataset. We accomplish these goals by first simplifying the feature extraction process to enable comparisons amongst state-of-the-art machine learning regressors. Then, we derive and optimize a specialized loss function that balances classification and regression objectives. We also compare the performance of state-of-the-art machine learning regressors and important pre-processing steps such as orthogonalization via principal components analysis (PCA). Our main contributions for this research effort are summarized as follows:
* Reformulating and expanding upon the 2021 PHM Data Challenge to include health state detection and forecasting eventual failures;
* Deriving a customized loss function to simultaneously optimize classification and regression PHM objectives;
* Accurately predicting health state, eventual failures, and RUL, with state-of-the-art regression approaches benchmarked with prior work.
In the following sections of the paper, we will detail our proposed methodology, our reproducible results, and provide comparisons to previous work.
## 2 Methods
First, we will describe the N-CMAPSS dataset in detail, introducing the input variables and dataset composition in detail. With our expanded goal to predict the current health state and eventual failing components in addition to RUL, our proposed methodology encompasses both classification and regression objectives. Our method is summarized in three steps: 1) feature extraction; 2) feature standardization and orthogonalization via PCA; and 3) training a supervised
Figure 1: Turbofan engine schematic, courtesy of NASA Prognostics Center of Excellence (M. A. Chao et al., 2021)
machine learning model to obtain the final predictions. Figure 2 illustrates the proposed methodology.
### Dataset Description
The N-CMAPSS dataset consists of 8 provided subsets and contains 90 engine units in total. In our research, we combine flow and efficiency failures into one general failure category for each mechanical component. Table 1 provides a summary of the failure modes present in each subset. In the dataset, engine units have a lifetime rated typically between 60 and 100 cycles, with the overall objective being to estimate the RUL until catastrophic failure. Each flight cycle is of variable length and is characterized by 18 time series signals: 4 flight data descriptors \(W=\{W_{1},W_{2},W_{3},W_{4}\}\) summarizing the dynamic operating conditions, and 14 real-time sensor measurements \(X_{s}=\{X_{s_{1}},X_{s_{2}},...,X_{s_{14}}\}\). In addition to the time series signals, each cycle also includes auxiliary variables \(A=\{A_{1},A_{2},A_{3},A_{4}\}\) useful for understanding the context of a flight cycle: the unit number, cycle number, a categorical flight class variable \(F_{c}\) representing the length of the flight (set to 1 for short flights, 2 for medium flights, and 3 for long flights), as well as a binary health state variable \(h_{s}\) (set to 1 for healthy status and 0 for unhealthy status). We note that the simulated engines are flown past unhealthy operation until end of life (i.e., catastrophic failure). Table 2 provides a summary of the variables provided in the dataset.
In all, there are a total of 6825 flight cycles in the dataset, with engines averaging approximately 75 cycles per unit.
### Feature Extraction
Feature selection and extraction are necessary to reduce the input dimensionality of the dataset. Although there are only 90 turbofan engine units in the N-CMAPSS dataset as per Table 1, the dataset contains over 63 million timestamps and requires reduction for subsequent data processing. As in previous work, we aim to make predictions on a per-cycle basis (DeVol et al., 2021).
In this study, we extract cycle-wide statistical time domain features to summarize the distribution for each time series. These features include mean, standard deviation, and the five-number summary (minimum, first quartile, median, third quartile, and maximum) for all signals. We also extract features that are held constant per cycle such as the time duration of the cycle, the current cycle number, and the flight class \(F_{c}\). In general terms, this feature selection and extraction method is applied for \(n\) training cycles, with \(\mathbf{x}_{j}\in\mathbb{R}^{n}\) representing the vector of samples for the \(j^{\text{th}}\) feature. Finally, the feature vectors are concatenated into a single data matrix containing all \(p\) features, \(\mathbf{X}\in\mathbb{R}^{n\times p}\).
### Feature Standardization and PCA Orthogonalization
Features extracted from the time series signals may be of different scales and units. As a result, standardization helps ensure that predictions are not influenced by these differences. First, we apply a min-max normalization scheme across all features to map all features in the bounded range [0, 1] as shown in Eq. (1):
\[\overline{\mathbf{x}}_{j}=\frac{\mathbf{x}_{j}-\min\bigl{(}\mathbf{x}_{j}\bigr{)}}{\max \bigl{(}\mathbf{x}_{j}\bigr{)}-\min\bigl{(}\mathbf{x}_{j}\bigr{)}} \tag{1}\]
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Subset** & **Units** & **Failure** & **Fan** & **LPC** & **HPC** & **HPT** & **LPT** \\
**Name** & **Units** & **Mode** & **Fail** & **Fail** & **Fail** & **Fail** & **Fail** \\ \hline DS01 & 10 & 1 & No & No & No & Yes & No \\ DS03 & 15 & 2 & No & No & No & Yes & Yes \\ DS04 & 10 & 3 & Yes & No & No & No & No \\ DS05 & 10 & 4 & No & No & Yes & No & No \\ DS06 & 10 & 5 & No & Yes & Yes & No & No \\ DS07 & 10 & 6 & No & No & No & No & Yes \\ DS08a & 15 & 7 & Yes & Yes & Yes & Yes & Yes \\ DS08c & 15 & 7 & Yes & Yes & Yes & Yes & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 1: Failure mode description from the N-CMAPSS dataset (M. A. Chao et al., 2021)
Figure 2: Proposed methodology flow for failure prediction
Once again, we concatenate the feature vectors into a normalized data matrix, \(\mathbf{\overline{X}\in\mathbb{R}^{n\times p}}\). After obtaining the normalized data matrix, PCA orthogonalization is recommended as a multivariate preprocessing step to obtain a set of uncorrelated variables. PCA is typically used to achieve dimensionality reduction by retaining the most important principal components (PCs) such that the explained variance is maximized (Jollife and Cadima, 2016). However, we have found that in practice, there is utility to keeping all PCs to improve training results. This is potentially because the features extracted are significantly correlated, and therefore simply using PCA for its orthogonalization benefits may improve the performance of gradient descent-based optimization methods employed in training. In Section 3, we will compare results with and without PCA orthogonalization for all models. PCA can be formulated as a linear transformation using the eigendecomposition of the sample correlation matrix \(\mathbf{Q}\in\mathbb{R}^{p\times p}\) of the features from \(\mathbf{\overline{X}}\), as shown in Eqs. (2)-(3):
\[\mathbf{Q}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{\mathrm{T}}=\begin{bmatrix}\mathbf{v}_{1},\mathbf{v} _{2},...,\mathbf{v}_{p}\end{bmatrix}\begin{bmatrix}\lambda_{1}&...&0\\ \vdots&\vdots&\vdots\\ 0&...&\lambda_{p}\end{bmatrix}\begin{bmatrix}\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v} _{p}\end{bmatrix}^{\mathrm{T}} \tag{2}\]
\[\mathbf{\overline{X}}=\mathbf{\overline{X}}\mathbf{V}=\begin{bmatrix}\mathbf{\overline{x}}_{1 },\mathbf{\overline{x}}_{2},...,\mathbf{\overline{x}}_{p}\end{bmatrix}\begin{bmatrix} \mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{p}\end{bmatrix} \tag{3}\]
where \(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{p}\) are the principal components with corresponding eigenvalues \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p}\geq 0\). The resulting matrix \(\mathbf{\overline{X}\in\mathbb{R}^{n\times p}}\) is the newly orthogonalized training dataset scored along the PC axes.
### Output Labeling Scheme
While the 2021 PHM Data Challenge formulation provides \(h_{s}\) and failure mode information as possible inputs for a RUL prediction model, we aim to instead predict them as outputs encoded as binary variables. As mentioned previously, these additional outputs will provide a more comprehensive prognosis of the degraded turbofan engine unit. With these new outputs, we require a labeling scheme for training a model. For learning the current cycle health state \(h_{s}\), we borrow the labels provided in the N-CMAPSS dataset, i.e., a label of "1" for healthy operation and "0" for unhealthy operation. We introduce a vector of possible eventual failures \(\mathbf{y}_{EF}=\begin{bmatrix}\mathbf{y}_{Fan},\mathbf{y}_{LPC},\mathbf{y}_{HPC},\mathbf{y}_{HPT}, \mathbf{y}_{LPT}\end{bmatrix}^{\mathrm{T}}\), in which each variable \(\mathbf{y}_{comp}\in\mathbf{y}_{EF}\) is binary, with the positive label indicating eventual failure as specified in Table 1. For example, for the DS06 subset in which the LPC and HPC components eventually fail, \(\mathbf{y}_{EF}=\begin{bmatrix}0,1,1,0,0\end{bmatrix}^{\mathrm{T}}\). Lastly, for the RUL training label, we follow the N-CMAPSS convention, which provides \(RUL\in\mathbb{Z}^{*}\) as calculated by subtracting the current cycle number from the total lifetime of the engine unit, i.e., \(RUL=t_{EOL}-A_{2}\). With these definitions, we can prepare the ground truth vector of labels \(\mathbf{y}=[h_{s},\mathbf{y}_{EF}^{\mathrm{T}},RUL]^{\mathrm{T}}\) paired with the features of each cycle.
### Training Loss Function and Model Evaluation
Handling classification and regression objectives simultaneously provides additional complexity for training a predictive machine learning model. We propose optimizing a customized loss function that explicitly weighs both objectives. First, we base the RUL loss contribution from
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Variable** & **Symbol** & **Description** & **Units** \\ \hline \(A_{1}\) & unit & Unit number & - \\ \(A_{2}\) & cycle & Flight cycle & - \\ \(A_{3}\) & \(F_{c}\) & Flight class & - \\ \(A_{4}\) & \(h_{s}\) & Health state & - \\ \(W_{1}\) & alt & Altitude & ft \\ \(W_{2}\) & Mach & Flight Mach & - \\ \(W_{3}\) & TRA & Throttle- & \% \\ & & resolver angle & \% \\ \(W_{4}\) & T2 & Total temp. at fan inlet & \({}^{\mathrm{o}}\)R \\ \(X_{5_{1}}\) & Wf & Fuel flow & pps \\ \(X_{2}\) & Nf & Physical fan & rpm \\ & & speed & \\ \(X_{3_{3}}\) & Nc & Physical core & rpm \\ & & speed & \\ \(X_{4}\) & T24 & Total temp. at LPC outlet & \({}^{\mathrm{o}}\)R \\ \(X_{5_{5}}\) & T30 & Total temp. at HPC outlet & \({}^{\mathrm{o}}\)R \\ \(X_{5_{6}}\) & T48 & Total temp. at HP outlet & \({}^{\mathrm{o}}\)R \\ \(X_{5_{7}}\) & T50 & Total temp. at LPT outlet & \({}^{\mathrm{o}}\)R \\ \(X_{8}\) & P15 & Total pressure & \\ & & in bypass-duct & psia \\ \(X_{5_{9}}\) & P2 & Total pressure & psia \\ & & at fan inlet & psia \\ \(X_{5_{10}}\) & P21 & Total pressure & \\ & & at fan outlet & psia \\ \(X_{5_{11}}\) & P24 & Total pressure & \\ & & at LPC outlet & psia \\ \(X_{5_{12}}\) & Ps30 & Static pressure & \\ & & at HPC outlet & psia \\ \(X_{5_{13}}\) & P40 & Total pressure & \\ & & at burner outlet & psia \\ \(X_{5_{14}}\) & P50 & Total pressure & \\ & & at LPT outlet & psia \\ \hline \hline \end{tabular}
\end{table}
Table 2: Auxiliary, flight descriptors, and sensor measurement variables used in N-CMAPSS dataset (M. A. Chao et al., 2021)
NASA's scoring criteria (M. A. Chao et al., 2021), which penalizes overestimation of RUL to favor conservative predictions and is defined in Eqs. (4)-(6):
\[s_{c}(RUL,\overline{RUL}\ )=\frac{1}{n}\sum_{i=1}^{n}\exp\bigl{(}\alpha\big{|}RUL_{i}- \overline{RUL}_{i}\big{|}\bigr{)}-1 \tag{4}\]
\[RMSE(RUL,\overline{RUL})=\left(\frac{1}{n}\sum_{i=1}^{n}\bigl{(}RUL_{i}- \overline{RUL}_{i}\bigr{)}^{2}\right)^{1/2} \tag{5}\]
\[NASA(RUL,\overline{RUL})=0.5\ RMSE+0.5\ s_{c} \tag{6}\]
in which \(\alpha\) is the overestimation penalty equal to \(1/13\) if the RUL is underestimated (i.e., \(\overline{RUL}_{i}<RUL_{i}\)) and equal to \(1/10\) for overestimations. Note that we can substitute these values and rewrite Eq. 4 as follows using an indicator function, which will allow for easier implementation with automatic differentiation coding environments:
\[u=\left(\frac{1}{13}+\frac{3}{130}\mathbf{1}_{RUL_{i}>RuU_{i}}\right)\left| RUL_{i}-\overline{RUL}_{i}\right| \tag{7}\]
\[s_{c}\bigl{(}RUL,\overline{RUL}\bigr{)}=\frac{1}{n}\sum_{i=1}^{n}\exp(u)-1 \tag{8}\]
This alternative formulation essentially "upgrades" the \(\alpha\) penalty from a base value of \(1/13\) to \(1/10\) when RUL is overestimated. However, the loss function in Eq. 6 only considers RUL errors. We propose utilizing the binary cross-entropy loss function \(BCE(\mathbf{y},\widehat{\mathbf{y}})\) for \(q\) classification outputs. Finally, we obtain the overall loss function by weighing the terms introduced above:
\[L(\mathbf{y},\widehat{\mathbf{y}})=NASA(RUL,\overline{RUL}\ )+\gamma BCE(\mathbf{y}, \widehat{\mathbf{y}}) \tag{9}\]
where \(\gamma\) is a tunable weight attached to the classification loss term. Because the NASA score is based on RUL regression, we expect this term will dominate the overall loss value due to the greater magnitude compared to the BCE. For this application, we set \(\gamma=10\) to balance the regression and classification objectives.
Additionally, because benchmarking comparisons between multiple machine learning regressors have not yet been provided in previous work on the N-CMAPSS dataset (DeVol et al., 2021; Lovberg, 2021; Solis-Martin et al., 2021), we aim to compare the performance of four state-of-the-art machine learning methods: random forests (RFs) (Genuer et al., 2017), extreme random forests (ERFs) (also known as extra trees) (Maier et al., 2015), XGBoost (XGB) (Chen & Guestrin, 2016) and artificial neural networks (ANNs) (Mahamad et al., 2010). Specifically, we will compare the performance of the tree-based ensemble regressors trained to minimize mean squared error (MSE) to an ANN that minimizes our proposed loss function in Eq. 9. For completeness, we will also compare these results to an ANN minimizing MSE. To evaluate the quality of classification predictions, we will use the area under receiver operating characteristics (AUROC) and precision-recall curves (AUPR) metrics to evaluate performance at all possible thresholds. Meanwhile, root-mean-square error (RMSE), the NASA scoring function detailed in Eq. 6, mean absolute error (MAE), and MAE normalized as a percentage of the unit's lifetime will be reported for judging regression quality for RUL predictions.
## 3 Results
For reproducibility, results will be reported using the built-in N-CMAPSS dataset split, as in (DeVol et al., 2021). This dataset split is notable for having a testing set with entire engine units that are unseen in the training set, making the benchmark problem more realistic and challenging. The split follows an approximately 60%-40% training-testing ratio, with 4089 cycles in the training set and 2736 cycles in the test set. Following the feature extraction method detailed in Section 2.2, we extract 129 features. Using the min-max standardization and PCA orthogonalization methods from the popular scikit-learn package (Pedregosa et al., 2012), we normalize the training set and apply these learned transformations to the testing set.
To prepare the regressors minimizing the MSE loss function, it is necessary to scale the labels such that the RUL regression error does not dominate the MSE calculation. This is done by simply multiplying the binary encoded labels in \(\mathbf{y}\) by 100, thereby putting the binary labels in the same magnitudes as the RUL labels. After training the models, the AUROC and AUPR metrics are then computed using the resulting classification predictions on the test set in \(\widehat{\mathbf{y}}\) to serve as robust indicators of performance averaged across all possible thresholds. Table 3 shows the classification and regression results for RF, ERF, XGB, an ANN also trained on MSE (ANN-MSE), and an ANN trained on the custom loss function derived in Section 2.5 (ANN-Flux). Results with and without the PCA orthogonalization step are included, demonstrating the impact of the preprocessing procedure on minimally tuned models. The RF and ERF regressors, implemented using scikit-learn, each contain 100 base estimators. The XGBoost method also uses 100 estimators, with the learning rate \(\eta\) set to 0.3 and the max depth of a tree set to 6. Both ANNs share the same shallow architecture of two hidden layers with 64 and 32 neurons each with RELU activations, employing the ADAM optimizer and trained for 5000 epochs. All methods are implemented using the Julia 1.7.3 programming language (Bezanson et al., 2014) and both ANNs are designed using the Flux deep learning backend, which allows for auto-differentiation of custom loss functions (Innes, 2018). The results in Table 3 may be further improved with hyperparameter optimization techniques and merely illustrate the potential for the simultaneous prediction of eventual failures alongside RUL.
Benchmarked on a single machine running with an Intel Core i7-10750H CPU @ 2.60 GHz processor with 32 GB RAM, it takes approximately just 60 seconds in total to train and evaluate the RF, ERF, and XGB methods. On the same processor, the Flux models take approximately 250 seconds to train with a NVIDIA GeForce RTX 2070 Super GPU. The feature extraction is the longest step in terms of runtime, taking ~450 seconds to load the dataset and extract all 129 features for the training and testing sets.
The ANN-Flux method with the PCA preprocessing step accurately predicts the current health state and the eventual failure component(s) with AUROC and AUPR scores exceeding 0.95 for each output. This is especially notable considering the significant overlap of the failing components depending on the failure modes (see Table 1). In addition, the RUL prediction also outperforms the other techniques tested. The parity plot in Figure 2(a) visualizes the ANN-Flux RUL predictions in the testing set versus the ground truth labels. In addition, producing a figure similar to (DeVol et al., 2021), Figure 2(b) also illustrates the ANN-Flux RUL predictions with the ground truth RUL labels sorted from least-to-greatest. Compared to previously published efforts on the N-CMAPSS benchmark on the NASA split, the RUL prediction is significantly improved. Our RMSE of 7.75 and NASA score (smaller is better) of 4.34 compare favorably with the RMSE of 12.50 and NASA score of 7.50 in previously published work--a 38% and 42% reduction, respectively (DeVol et al., 2021).
We note that these RUL predictions are directly output from the ANN-Flux model and further considerations may improve their quality and usefulness in practice. For example, despite the asymmetric NASA scoring function favoring conservative underestimates, the average prediction error still slightly overestimates the ground truth by 0.65 cycles. This contrasts with the training prediction error, which on average underestimates the ground truth by 0.50 cycles. Further adjustments on the overestimation penalty \(\alpha\) and the classification loss weight \(\gamma\) may skew the prediction error
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c c|c c} \hline \multirow{2}{*}{**Output**} & \multicolumn{2}{c|}{**Validation**} & \multirow{2}{*}{**RF**} & \multicolumn{1}{c|}{\multirow{2}{*}{**+ PCA**}} & \multirow{2}{*}{**ERF**} & \multicolumn{1}{c|}{\multirow{2}{*}{**+ PCA**}} & \multirow{2}{*}{**XGB**} & \multicolumn{1}{c|}{\multirow{2}{*}{**+ PCA**}} & \multicolumn{1}{c|}{\multirow{2}{*}{**ANN- MSE PCA**}} & \multicolumn{1}{c|}{\multirow{2}{*}{**+ Flux PCA**}} & \multicolumn{1}{c}{\multirow{2}{*}{**ANN- Flux PCA**}} & \multicolumn{1}{c}{\multirow{2}{*}{**+ PCA PCA**}} \\ & \multicolumn{1}{c|}{**Metric**} & & & & & & & & & & \\ \hline Health & AUROC & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 \\ State & AUPR & 0.99 & 0.98 & 0.99 & 0.99 & 0.99 & 0.98 & 0.99 & 0.99 & 0.99 & 0.99 \\ \hline Fan & AUROC & 0.81 & 0.92 & 0.79 & 0.91 & 0.88 & 0.94 & 0.82 & 0.97 & 0.87 & 0.99 \\ Failure & AUPR & 0.66 & 0.88 & 0.62 & 0.87 & 0.79 & 0.91 & 0.74 & 0.92 & 0.82 & 0.99 \\ \hline LPC & AUROC & 0.81 & 0.91 & 0.80 & 0.91 & 0.90 & 0.93 & 0.75 & 0.96 & 0.82 & 0.97 \\ Failure & AUPR & 0.67 & 0.83 & 0.66 & 0.81 & 0.84 & 0.89 & 0.61 & 0.93 & 0.70 & 0.95 \\ \hline HPC & AUROC & 0.84 & 0.93 & 0.84 & 0.93 & 0.94 & 0.96 & 0.73 & 0.99 & 0.84 & 0.99 \\ Failure & AUPR & 0.81 & 0.91 & 0.81 & 0.90 & 0.94 & 0.95 & 0.70 & 0.99 & 0.82 & 0.99 \\ \hline HPT & AUROC & 0.82 & 0.89 & 0.81 & 0.88 & 0.89 & 0.90 & 0.91 & 0.96 & 0.91 & 0.97 \\ Failure & AUPR & 0.79 & 0.87 & 0.79 & 0.85 & 0.88 & 0.90 & 0.92 & 0.94 & 0.92 & 0.97 \\ \hline LPT & AUROC & 0.80 & 0.88 & 0.80 & 0.88 & 0.87 & 0.91 & 0.83 & 0.90 & 0.83 & **0.95** \\ Failure & AUPR & 0.78 & 0.86 & 0.79 & 0.85 & 0.87 & 0.90 & 0.85 & 0.89 & 0.85 & **0.95** \\ \hline & RMSE & 10.14 & 10.72 & 10.19 & 10.16 & 9.64 & 10.13 & 9.23 & 9.51 & 8.27 & **7.75** \\ RUL & NASA & 5.81 & 6.20 & 5.85 & 5.84 & 5.50 & 5.82 & 5.25 & 6.02 & 4.61 & **4.34** \\ MAE (cycles) & 7.94 & 8.59 & 8.02 & 8.10 & 7.52 & 7.77 & 7.23 & 7.29 & 6.14 & **5.87** \\ MAE (\%) & 10.72 & 11.51 & 10.89 & 10.85 & 10.12 & 10.45 & 9.66 & 9.53 & 8.07 & **7.72** \\ \hline \end{tabular}
\end{table}
Table 3: Classification and regression results for N-CMAPSS dataset, with the proposed PCA-orthogonalized ANN-Flux method generally outperforming the other benchmark methods
towards underestimation. In addition, we have not instituted hard constraints to guarantee nonnegative RUL values. Postprocessing transformations such as the ReLU function can be implemented in the future to rectify the outputs such that all resulting RUL predictions are nonnegative.
Notably, the PCA orthogonalization pre-processing step has a profound impact on classification performance for the eventual failure of the mechanical components. This is especially true for ANN-MSE, which has AUROC and AUPR scores increasing by at least 0.20 for compressor failures when PCA orthogonalization is performed prior to classification. These findings are consistent among all attempted machine learning methods. However, PCA orthogonalization did not appear to improve the regression performance in the same way; 3 out of the 5 attempted methods had increased RMSE when inputs were orthogonalized. This suggests that using PCA to orthogonalize these extracted features is especially useful for binary classification predictions, but may not always lead to better results for minimizing RUL error.
Having additional classification outputs enables explainable analysis of RUL predictions along various slices of the dataset. For example, Figure 4 illustrates the RUL prediction errors for unhealthy versus healthy cycles. Intuitively, the interquartile range for unhealthy operating cycles is substantially narrower, indicating that RUL predictions on average improve throughout the life of the engine unit.
It is also useful to determine whether there are certain components with higher variance in RUL prediction errors; by observing the RUL prediction errors on a per-component basis, operators can glean more information and make targeted decisions based on their confidence of the prognosis. Similar to Figure 4, Figure 5 plots the RUL prediction error spread of the test set for each of the labeled eventual mechanical component failures. Figure 5 demonstrates that the RUL prediction errors have a median centered near 0 for each mechanical component and there is no significant component-based bias identified. Relatively, the compressor failures have a tighter concentration around 0 and the turbine failures are more negatively skewed, indicating more underestimates, but we note that it is difficult to draw definitive conclusions due to overlapping failures.
## 4 Discussion
Our findings have broad economic implications beyond engine prognostics, as a similar approach could potentially be applied for other PHM applications. Our approach is enabled by expanding the formulation of the 2021 PHM Data Challenge to include these objectives, taking full advantage of the provided labels in the N-CMAPSS dataset. Previous research on this dataset also utilized the labeled health state as an input to improve RUL predictions [9]; we have relaxed assumptions by instead learning the health state as an output.
Figure 4: Box-and-whisker plot for RUL error for healthy vs unhealthy operation cycles
Figure 3: **a)** Parity plot comparing actual and predicted RUL values for ANN-Flux predictions on the N-CMAPSS testing engine unit set; **b)** ANN-Flux predictions scatter with ground truth labels sorted from least-to-greatest
The computation effort of our approach compared to past work is also noteworthy. Table 4 compares the number of trainable parameters in ANN-Flux compared to the 2021 PHM Data Challenge winners' reported number of parameters. L\(\ddot{\text{o}}\)vberg does not report the total number of trainable parameters, but utilizes a four-layer ANN for normalization as a pre-processing step before a 10-layer CNN with dilated convolutions made for RUL predictions [9]. ANN-Flux is remarkably simple, with number of parameters approximately two orders of magnitude fewer compared to the deep CNNs of prior work. In a realistic scenario with larger datasets, smaller networks are less expensive to run in real-time, streamlining inferencing efforts. Although our method requires hand-selected features prior to training an ANN, the extracted features are simple statistical features and do not require significant domain expertise. Perhaps surprisingly, predicting RUL in addition to the eventual failure component(s) and current health state does not appear to negatively impact the prediction error, as our approach results in a RMSE reduction of 38% for the same split of the dataset [10].
To the authors' knowledge, our work is also the first to compare multiple state-of-the-art regression approaches for predicting component failures and RUL estimation for the N-CMAPSS benchmark. We also provide comparisons with and without PCA orthogonalization and for multiple loss functions, with tangible improvements for both RUL and binary classifications with these computational approaches. We hope that our contributions encourage future benchmarking efforts on the N-CMAPSS dataset and for PHM research.
Despite these advancements, important limitations remain that require addressing in future work. Firstly, while RUL prediction is improved over past work, the prediction errors still have a large variance. Integration with physics is suggested in the future to improve the confidence of RUL predictions. In addition, failure data are difficult to obtain in practice, and as a result, industrial datasets are often imbalanced (Santos et al., 2018), threatening the utility of fully supervised learning techniques. As a result, more research is required in semi-supervised and unsupervised methods to at least lighten the supervision requirement for AI algorithms to provide accurate prognoses. In addition, while PCA orthogonalization vastly improved the component failure predictions, the derived PC variables lack physical meaning, hindering the explainability of the input features. This step makes the current formulation incompatible with explainable AI (XAI) methods such as SHAP, which attempt to explain black-box model predictions in terms of additive marginal contributions of features (Senoner et al., 2021). While being able to accurately isolate eventual failures on a component-level provides inherent explainability compared to previous efforts, we leave XAI integration for future work.
## 5 Conclusion
Our work as benchmarked on the N-CMAPSS dataset uniquely demonstrates the potential for an approach that simultaneously detects the current health state, predicts which component(s) will fail, and then estimates the number of cycles until failure. In essence, this integrates the important disciplines of anomaly detection and fault diagnosis--conventionally requiring multiple models--in one prognostic model that makes accurate predictions, even for presently healthy units. Our main contributions and findings for this research effort are restated as follows:
* Reformulated and expanded the scope of the 2021 PHM Data Challenge to include health state detection and eventual failure prognosis;
* Customized loss function derived to simultaneously balance classification and regression objectives;
* Accurately predicted health state and eventual failures, with AUROC and AUPR exceeding 0.95 for each classification prediction accomplished with the ANN-Flux methodology;
* RUL RMSE reduced by 38% for the same dataset split and with less computational effort required for training compared to prior work.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **L\(\ddot{\text{o}}\)vberg** & **DeVol** _et al._ **(2021)** & \begin{tabular}{c} **Solis-** \\ **Martin** _et al._ **(2021)** \\ \end{tabular} &
\begin{tabular}{c} **ANN-** \\ **Flux** \\ \end{tabular} \\ \hline \# Trainable & & & & \\ Parameters & N/A & 1,030,000 & 4,089,465 & 10,631 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Number of trainable parameters compared with 2021 PHM Data Challenge winners
Figure 5: Box-and-whisker plots for RUL prediction errors for each eventual failing component
The authors hope that these contributions will help bolster PHM research and Industry 4.0 efforts to improve safety, lower costs, and enhance decision-making in the age of Big Data.
## 6 Data Availability
We plan on making all code for this paper fully available on GitHub for maximum transparency and encourage reproducibility to further N-CMAPSS as a benchmark for PHM research. The N-CMAPSS dataset is publicly available for download in NASA's Prognostics Center of Excellence Data Repository: [https://www.nasa.gov/content/prognostics-center-of-excellence-data-set-repository](https://www.nasa.gov/content/prognostics-center-of-excellence-data-set-repository).
## Acknowledgement
The authors would like to thank the S. M. Wu Manufacturing Research Center as well as the Uncertainty Quantification & Scientific Machine Learning Group at the University of Michigan for invaluable feedback and support.
|
2310.02471 | Flat hypercomplex nilmanifolds are H-solvable | We say that a hypercomplex nilpotent Lie algebra is $\mathbb{H}$-solvable if
there exists a sequence of $\mathbb{H}$-invariant subalgebras $\mathfrak{g}_1^{
\mathbb{H}}\supset\mathfrak{g}_2^{
\mathbb{H}}\supset\cdots\supset\mathfrak{g}_{k-1}^{
\mathbb{H}}\supset\mathfrak{g}_k^{ \mathbb{H}}=0,$ such that $[\mathfrak{g}_i^{
\mathbb{H}},\mathfrak{g}_i^{ \mathbb{H}}]\subset\mathfrak{g}^{
\mathbb{H}}_{i+1}.$ Let $N=\Gamma\backslash G$ be a hypercomplex nilmanifold
with flat Obata connection and $\mathfrak{g}=Lie(G)$. We prove that the Lie
algebra $\mathfrak{g}$ is $ \mathbb{H}$-solvable. | Yulia Gorginyan | 2023-10-03T22:38:55Z | http://arxiv.org/abs/2310.02471v1 | # Flat hypercomplex nilmanifolds are \(\mathbb{H}\)-solvable
###### Abstract
We say that a hypercomplex nilpotent Lie algebra is \(\mathbb{H}\)**-solvable** if there exists a sequence of \(\mathbb{H}\)-invariant subalgebras
\[\mathfrak{g}_{1}^{\mathbb{H}}\supset\mathfrak{g}_{2}^{\mathbb{H}}\supset\cdots \supset\mathfrak{g}_{k-1}^{\mathbb{H}}\supset\mathfrak{g}_{k}^{\mathbb{H}}=0,\]
such that \([\mathfrak{g}_{i}^{\mathbb{H}},\mathfrak{g}_{i}^{\mathbb{H}}]\subset\mathfrak{ g}_{i+1}^{\mathbb{H}}.\) Let \(N=\Gamma\backslash G\) be a hypercomplex nilmanifold with flat Obata connection and \(\mathfrak{g}=\operatorname{Lie}(G)\). We prove that the Lie algebra \(\mathfrak{g}\) is \(\mathbb{H}\)-solvable.
###### Contents
* 1 Introduction
* 1.1 Affine manifolds
* 1.2 Hypercomplex affine nilmanifolds
* 1.3 \(\mathbb{H}\)-solvable Lie algebras and algebraic holonomy
* 2 Preliminaries: Nilpotent Lie groups and algebras
* 3 Algebraic holonomy group
* 3.1 Left equivariant vector bundles
* 3.2 Algebraic holonomy group
* 3.3 Maltsev completion
* 3.4 Unipotent holonomy group and \(\mathbb{H}\)-solvability
* 4
## 1 Introduction
### Affine manifolds
A manifold \(X\) together with a torsion-free flat connection \(\nabla\) is called **an affine manifold**. Equivalently, an affine manifold \(X\) is a manifold with an atlas such that all translation maps between charts are in \(\operatorname{Aff}(\mathbb{R}^{n})\) ([FGH], [Sh]).
Recall that a group of affine transformations \(\operatorname{Aff}(\mathbb{R}^{n})\) is a semidirect product \(\operatorname{GL}_{n}(\mathbb{R})\ltimes\mathbb{R}^{n}\). If \(f\in\operatorname{Aff}(\mathbb{R}^{n})\) then for any \(x\in\mathbb{R}^{n}\) there exist a linear part \(A\in\operatorname{GL}(\mathbb{R}^{n})\) and a translation part \(t\in\mathbb{R}^{n}\) such that \(f(x)=Ax+t\). **The linearization map** is the natural homomorphism \(l:\operatorname{Aff}(\mathbb{R}^{n})\longrightarrow\operatorname{GL}( \mathbb{R}^{n})\) given by the formula \(l(f)=A\).
An affine manifold \(X\) is called **complete** if its universal cover is affine equivalent to \(\mathbb{R}^{n}\), i.e. \(X=\mathbb{R}^{n}/\Gamma\), where \(\Gamma\subset\operatorname{Aff}(\mathbb{R}^{n})\) is a discrete subgroup of the affine transformation group.
There are three famous open questions related to affine geometry. In the historical context, the first was the conjecture of Chern:
**Conjecture 1.1**: (Chern) The Euler characteristic of a compact affine manifold vanishes [Gol].
Kostant and Sullivan proved Chern's conjecture for the compact complete affine manifolds [KS]. Also, it was proven by Bruno Klingler [Klin] in the case when a compact affine manifold admits a parallel volume form. As far as we know, the general case of Chern's conjecture remains open.
The conjecture of Markus links the existence of a parallel volume form with the completeness of a compact affine manifold.
**Conjecture 1.2**: (Markus, 1962)[Mar] A compact affine manifold \(X\) admits parallel volume form if and only if the manifold \(X\) is complete.
Recall that a subgroup \(\Gamma\) of an affine group \(\operatorname{Aff}(\mathbb{R}^{n})\) is called **crystallographic**[A] if its action on \(\mathbb{R}^{n}\) is properly discontinuous, free, and cocompact.
Recall that a group \(G\) is said to have a certain property \(P\)**virtually** if \(G\) contains a subgroup \(H\) of finite index which has the property \(P\).
**Theorem 1.3:** (Bieberbach, 1911) [Bieb] Every discrete subgroup \(\Gamma\) of an isometry group \(\operatorname{Isom}(\mathbb{R}^{n})\) is virtually abelian. Every crystallographic subgroup of \(\operatorname{Isom}(\mathbb{R}^{n})\) is virtually a translation group. For a given \(n\) there exists only a finite number of such groups \(\Gamma\subset\operatorname{Isom}(\mathbb{R}^{n})\) up to a conjugation.
One of the ways to generalize the theorem of Bieberbach is to consider an affine transformation group \(\operatorname{Aff}(\mathbb{R}^{n})\) instead of \(\operatorname{Isom}(\mathbb{R}^{n})\).
**Conjecture 1.4:** (Auslander, 1964)[Aus] Every crystallographic subgroup of an affine group is virtually solvable, i.e. contains a solvable subgroup of finite index.
Auslander's conjecture was proven by D. Fried and W. Goldman [FG] in the case \(n=3\) and the result was refined up to dimension \(n=6\) by H. Abels, G.A. Margulis, and G.A. Soifer [AMS]. Also, Auslander's conjecture is true in a case when the monodromy group preserves a metric of signature \((1,n)\)[GK] and \((2,n)\)[AMS2].
### Hypercomplex affine nilmanifolds
Recall that **a nilmanifold**\(N\) is a compact quotient of a connected simply connected nilpotent Lie group \(G\) by a lattice subgroup \(\Gamma\), which acts on the group \(G\) from the left. We denote the quotient as \(N=\Gamma\backslash G\). It is called **an affine nilmanifold** if \(G\) has a left-invariant affine structure, i.e. the flat torsion-free connection \(\nabla\) such that \(L_{g}^{*}(\nabla)=\nabla\), where \(L_{g}:G\mathop{\longrightarrow}G\) is the left-translation.
Let \(\pi:\widetilde{X}\mathop{\longrightarrow}X\) be the universal cover of a manifold \(X\) and \(\pi_{1}(X)\) the fundamental group. An affine immersion \(D:\widetilde{X}\mathop{\longrightarrow}\mathbb{R}^{n}\) is called **the developing map**. By a well-known theorem ([OV], [Gol1]) \(D\) exists and is uniquely defined up to an affine automorphism.
**Definition 1.5: The affine holonomy representation** is a unique homomorphism \(h:\pi_{1}(X)\mathop{\longrightarrow}\operatorname{Aff}(\mathbb{R}^{m})\) which satisfies \(D\circ\gamma=h(\gamma)\circ D\) for every \(\gamma\in\pi_{1}(X)\). **The affine holonomy group**\(\mathscr{H}:=\operatorname{Im}(h)\) is the image of the homomorphism \(h\) in \(\operatorname{Aff}(\mathbb{R}^{n})\). **The linear holonomy group** is the image of the affine holonomy group under the linearization map, \(\mathscr{L}:=l(\mathscr{H})\subset\operatorname{GL}(\mathbb{R}^{n})\).
**Definition 1.6:** A closed Lie subgroup \(G\subset\operatorname{GL}(V)\) is called **a linear algebraic group** if \(G\) is given by a system of polynomial equations.
**Definition 1.7:** A representation of a linear algebraic group is called **unipotent** if, in a certain basis, its image is the group of strictly upper-triangular matrices.
Let \(X\) be a compact affine manifold whose linear holonomy representation is unipotent. Then \(X\) admits a parallel volume form. The converse was proven by Goldman, Fried, and Hirsch, [10, Theorem A]:
**Theorem 1.8:** Let \(X\) be a compact affine manifold with a parallel volume form. Assume that its affine holonomy group is nilpotent. Then the linear holonomy representation is unipotent.
We will use the following reformulation of Theorem 1.8:
**Theorem 1.9:** Let \(X\) be a compact affine manifold with a parallel volume form. Assume that its fundamental group is nilpotent. Then the monodromy representation is unipotent.
We are interested in affine nilmanifolds which also possess a hypercomplex structure. First, recall a definition of a complex nilmanifold.
**Definition 1.10:** Let \(G\) be a Lie group equipped with a left-invariant complex structure. **A complex nilmanifold** is a pair \((N=\Gamma\backslash G,I)\), where \(N\) is a nilmanifold and the complex structure \(I\) is obtained from the corresponding left-invariant complex structure on \(G\).
An almost hypercomplex manifold \(X\) is a smooth manifold equipped with three endomorphisms \(I,J\) and \(K\) of the tangent bundle satisfying the quaternionic relations \(I^{2}=J^{2}=K^{2}=-\operatorname{Id}\) and \(IJ=K\). When almost complex structures are integrable, the quadruple \((X,I,J,K)\) is called **a hypercomplex manifold.**
**Definition 1.11:** Let \(G\) be a nilpotent Lie group with a left-invariant hypercomplex structure \(I,J,K\) and \(\Gamma\subset G\) a cocompact lattice. Then \((N=\Gamma\backslash G,I,J,K)\) is called **a hypercomplex nilmanifold**.
M. Obata showed [11] that on a hypercomplex manifold \(M\), there exists a unique torsion-free connection \(\nabla\) preserving the complex structures: \(\nabla I=\nabla J=\nabla K=0\). It is called **the Obata connection**.
It could be written in the following form
\[\nabla_{X}Y=\frac{1}{2}([X,Y]+I[IX,Y]-J[X,JY]+K[IX,JY]), \tag{1.1}\]
for any \(X,Y\in TM\)[Sol].
The existence of a parallel volume form on a hypercomplex nilmanifold is guaranteed by the following theorem:
**Theorem 1.12**:: [BDV, Theorem 3.2] Let \(N=\Gamma\backslash G\) be a hypercomplex nilmanifold, \(n=\dim_{\mathbb{C}}G\). Then \(G\) admits a left-invariant, non-zero, holomorphic section \(\Omega\) of the canonical bundle \(\Lambda^{n,0}G\). Moreover, \(\nabla\Omega=0\), where \(\nabla\) is **the Obata connection**.
### \(\mathbb{H}\)-solvable Lie algebras and algebraic holonomy
An operator \(I\) on a real Lie algebra \(\mathfrak{g}\) is called **a complex structure operator** if \(I^{2}=-\operatorname{Id}\) and the \(\sqrt{-1}\)-eigenspace \(\mathfrak{g}^{1,0}\) is a Lie subalgebra in the complexification \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\otimes\mathbb{C}\).
**Remark 1.13**:: Take an endomorphism \(I\in\operatorname{End}\mathfrak{g}\), \(I^{2}=-\operatorname{Id}\), and extend it to the left-invariant almost complex structure on the Lie group \(G\). Then \(I\) is complex if and only if \([\mathfrak{g}^{1,0},\mathfrak{g}^{1,0}]\subset\mathfrak{g}^{1,0}\). In other words, this definition is compatible with Definition 1.10.
**A hypercomplex structure** on a Lie algebra \(\mathfrak{g}\) is a triple of complex structure operators \(I,J\) and \(K\) on \(\mathfrak{g}\) satisfying the quaternionic relations.
**Definition 1.14**:: Let \(\mathfrak{g}\) be a nilpotent hypercomplex Lie algebra. Define inductively \(\mathbb{H}\)-invariant Lie subalgebras:
\[\mathfrak{g}_{i}^{\mathbb{H}}:=\mathbb{H}^{\dagger}[\mathfrak{g}_{i-1}^{ \mathbb{H}},\mathfrak{g}_{i-1}^{\mathbb{H}}],\]
where \(\mathfrak{g}_{0}^{\mathbb{H}}=\mathfrak{g}\) and
\[\mathfrak{g}_{1}^{\mathbb{H}}:=\mathbb{H}^{\dagger}[\mathfrak{g},\mathfrak{g }]=[\mathfrak{g},\mathfrak{g}]+I[\mathfrak{g},\mathfrak{g}]+J[\mathfrak{g}, \mathfrak{g}]+K[\mathfrak{g},\mathfrak{g}].\]
It is natural to study if \(\mathfrak{g}_{1}^{\mathbb{H}}\) is a proper subalgebra in \(\mathfrak{g}\).
A hypercomplex nilpotent Lie algebra \(\mathfrak{g}\) is called \(\mathbb{H}\)**-solvable** if the following sequence terminates for some \(k\in\mathbb{Z}_{>0}\):
\[\mathfrak{g}_{1}^{\mathbb{H}}\supset\mathfrak{g}_{2}^{\mathbb{H}}\supset\cdots \supset\mathfrak{g}_{k-1}^{\mathbb{H}}\supset\mathfrak{g}_{k}^{\mathbb{H}}=0. \tag{1.2}\]
**Conjecture 1.15:** Let \((\mathfrak{g},I,J,K)\) be a nilpotent hypercomplex Lie algebra. Then it is \(\mathbb{H}\)-solvable.
Let \((N=\Gamma\backslash G,I,J,K)\) be a hypercomplex nilmanifold. Consider a complex nilmanifold \((N,L)\) with a general complex structure \(L=aI+bJ+cK\), where \((a,b,c)\in S^{2}\), obtained from the hypercomplex structure. It is natural to try to describe complex submanifolds in \((N,L)\). We solved this problem in the case of complex curves under the additional assumption on the Lie algebra \(\mathfrak{g}=\mathrm{Lie}(G)\) [Gor]. Precisely, if the corresponding Lie algebra \(\mathfrak{g}\) is \(\mathbb{H}\)-solvable, then there are no complex curves in a complex nilmanifold \((N,L)\) for the general complex structure \(L\).
In this work, we show that in the case when a hypercomplex nilmanifold \(\Gamma\backslash G\) admits the flat Obata connection, then \(\mathfrak{g}=\mathrm{Lie}(G)\) is \(\mathbb{H}\)-solvable. The main argument we use relies on the notion of _an algebraic holonomy group_, which we define below (Definition 3.19).
**Example 1.16:** It was shown in [DF] that all 8-dimensional hypercomplex nilpotent Lie algebras are Obata-flat. However, in the same paper, I. Dotti and A. Fano presented an example of the 3-step nilpotent hypercomplex Lie algebra of dimension 12 which has non-zero curvature of the Obata connection.
Let \(\mathfrak{g}\) be a nilpotent Lie algebra over the field \(\mathbb{R}\). **A rational structure** in \(\mathfrak{g}\) is a subalgebra \(\mathfrak{g}_{\mathbb{Q}}\subset\mathfrak{g}\) defined over the rational numbers such that \(\mathfrak{g}_{\mathbb{Q}}\otimes_{\mathbb{Q}}\mathbb{R}=\mathfrak{g}\). Assume that a Lie algebra \(\mathfrak{g}\) admits a rational structure \(\mathfrak{g}_{\mathbb{Q}}\). By [CG] it happens if and only if there exists a nilmanifold \(\Gamma\backslash G\) such that \(\mathfrak{g}_{\mathbb{Q}}:=\mathrm{span}_{\mathbb{Q}}\langle\log\Gamma\rangle\).
In the case when a Lie group \(G\) admits a left-invariant hypercomplex structure with the flat Obata connection, we prove that \(\mathfrak{g}_{i}^{\mathbb{H}}\) is a proper subalgebra of \(\mathfrak{g}_{i-1}\) using the following approach.
**A connection** on a Lie algebra \(\mathfrak{g}\) is an \(\mathbb{R}\)-linear map
\[\nabla:\mathfrak{g}\longrightarrow\mathfrak{g}^{*}\otimes\mathfrak{g}\]
such that \(X\mapsto\nabla_{X}\in\text{End}(\mathfrak{g})\). Note that a connection on a Lie algebra \(\mathfrak{g}\) is the same as a left-invariant connection on a Lie group \(G\) (Claim 3.6). This notion can be generalized to arbitrary \(G\)-equivariant vector bundle on a Lie group \(G\), giving the algebraic version of an equivariant connection (see (3.1)).
**The curvature tensor \(R\)** of the connection \(\nabla\) defined in the following way:
\[R(X,Y)=[\nabla_{X},\nabla_{Y}]-\nabla_{[X,Y]},\]
where \(X,Y\in\mathfrak{g}\). The connection \(\nabla\) is called **flat** if \(R=0\). Notice that left invariant flat connections on \(G\) are equivalent to the representations of a Lie algebra on itself considered as a vector space (Claim 3.9).
In the end, we prove the following theorem:
**Theorem 1.17:** Let \((N,I,J,K)\) be a hypercomplex nilmanifold with the flat Obata connection. Then the corresponding Lie algebra is \(\mathbb{H}\)-solvable.
## 2 Preliminaries: Nilpotent Lie groups and algebras
Let \(G\) be a Lie group. Define inductively the descending chain of normal subgroups of \(G\)
\[G=G_{0}\supset G_{1}\supset G_{2}\supset\cdots\supset G_{k}\supset\cdots, \tag{2.1}\]
where \(G_{j}:=[G_{j-1},G]\) is the subgroup generated by the elements of the form \(xyx^{-1}y^{-1}\), \(x\in G_{j-1}\), \(y\in G\).
**Definition 2.1:** A Lie group \(G\) is called **a nilpotent Lie group** if (2.1) terminates for some \(k\in\mathbb{Z}_{>0}\), i.e. \(G_{k}=\{e\}\).
**The descending series** of a real Lie algebra \(\mathfrak{g}\) is the chain of ideals defined as follows:
\[\mathfrak{g}_{0}\supset\mathfrak{g}_{1}\supset\cdots\supset[\mathfrak{g}_{k},\mathfrak{g}]\supset\ldots,\]
\(\mathfrak{g}_{0}=\mathfrak{g}\) and \(\mathfrak{g}_{k}=[\mathfrak{g}_{k-1},\mathfrak{g}]\).
**Definition 2.2:** A Lie algebra \(\mathfrak{g}\) is called **nilpotent** if \(\mathfrak{g}_{s}=0\) for some \(s\in\mathbb{Z}_{>0}\).
Let \(\mathfrak{g}\) be a real Lie algebra and \(\mathfrak{g}^{*}\) its dual space. For any \(\alpha\in\mathfrak{g}^{*}\) and \(\xi,\theta\in\mathfrak{g}\) the Chevalley-Eilenberg differential \(d:\mathfrak{g}^{*}\mathop{\longrightarrow}\Lambda^{2}\mathfrak{g}^{*}\) is defined as follows
\[d\alpha(\xi,\theta)=-\alpha([\xi,\theta]). \tag{2.2}\]
It extends to a finite-dimensional complex
\[0\mathop{\longrightarrow}\mathfrak{g}^{*}\mathop{\longrightarrow}\Lambda^{2 }\mathfrak{g}^{*}\mathop{\longrightarrow}\cdots\mathop{\longrightarrow} \Lambda^{2n}\mathfrak{g}^{*}\mathop{\longrightarrow}0 \tag{2.3}\]
by the Leibniz rule: \(d(\alpha\wedge\beta)=d\alpha\wedge\beta+(-1)^{\widetilde{\alpha}}\alpha\wedge d\beta\), where \(\alpha,\beta\in\mathfrak{g}^{*}\). The identity \(d^{2}=0\) follows from the Jacobi identity [CE].
Notice that the kernel of a closed 1-form is an ideal in the Lie algebra \(\mathfrak{g}\). According to the definition of the Chevalley-Eilenberg differential, any closed 1-form \(\alpha\in\mathfrak{g}^{*}\) vanishes on the commutator ideal \([\mathfrak{g},\mathfrak{g}]\subseteq\ker\alpha\).
**Remark 2.3:** The intersection of all kernels of closed 1-forms \(\alpha\in\mathfrak{g}^{*}\)
\[\Sigma\ :=\bigcap_{\alpha\in\mathfrak{g}^{*},d\alpha=0}\ker\alpha\]
also forms an ideal in the Lie algebra \(\mathfrak{g}\) which coincides with the commutator ideal \([\mathfrak{g},\mathfrak{g}]\).
Recall that **a distribution** on a smooth manifold \(N\) is a sub-bundle \(\Sigma\subset TN\). The distribution is called **involutive** if it is closed under the Lie bracket. **A leaf** of the distribution \(\Sigma\) is a maximal connected, immersed submanifold \(L\subset N\) such that \(L\) tangent to \(\Sigma\) at each point. If \(\Sigma\) is involutive, then the set of all its leaves is called **a foliation**.
## 3 Algebraic holonomy group
### Left equivariant vector bundles
Recall that a map \(\varphi:X\mathop{\longrightarrow}Y\) of two manifolds \(X\) and \(Y\) with an action of a group \(G\) is called \(G\)**-equivariant** if \(\varphi(g\cdot x)=g\cdot\varphi(x)\) for any \(g\in G\), \(x\in X\). A vector bundle \(\pi:\mathbb{B}\mathop{\longrightarrow}X\) on a manifold \(X\) is called **a \(G\)-equivariant vector bundle** if its total space is equipped with a \(G\)-action such that the
projection \(\pi\) is a \(G\)-equivariant map and the action of \(G\) is linear on the fibers, i.e. \(\mathbb{B}_{x}\longrightarrow\mathbb{B}_{gx}\) is a linear map for all \(g\in G\).
Let \(G\) be a Lie group, \(L_{g}:G\mathop{\longrightarrow}G\) the left translation \((h\mapsto g\cdot h)\), and \(B\) a finite-dimensional vector space with a basis \(\{b_{1},\ldots,b_{n}\}\).
**Definition 3.1:** Let \(\mathfrak{g}=\mathrm{Lie}(G)\) be a Lie algebra and \(\mathfrak{g}^{*}=\mathrm{Hom}(\mathfrak{g},\mathbb{R})\) its dual. **An algebraic connection** is an \(\mathbb{R}\)-linear map
\[\nabla:B\longrightarrow\mathfrak{g}^{*}\otimes B.\]
We use the following notation
\[\nabla(b)=\sum_{i}\theta_{i}\otimes A_{i}(b) \tag{3.1}\]
where \(\theta_{i}\in\mathfrak{g}^{*}\) and \(A_{i}\in\mathrm{End}(B)\), \(b\in B\).
Let \(d:\mathfrak{g}^{*}\mathop{\longrightarrow}\Lambda^{2}\mathfrak{g}^{*}\) be the Chevalley-Eilenberg differential. We define **the curvature** of the connection \(\nabla\) by the following formula
\[\Theta_{\nabla}:=d\omega+\omega\wedge\omega, \tag{3.2}\]
where \(\omega:=\sum_{i}\theta_{i}\otimes A_{i}\), \(d\omega=\sum_{i}d\theta_{i}\otimes A_{i}\) and \(\omega\wedge\omega=\sum_{i<j}\theta_{i}\wedge\theta_{j}\otimes[A_{i},A_{j}]\).
Consider a vector bundle \(\pi:\mathbb{B}\mathop{\longrightarrow}G\) on a Lie group \(G\). From now we denote the fiber at the identity \(e\in G\) by \(\mathbb{B}_{e}=B\).
**Definition 3.2:** **A left equivariant vector bundle**\(\pi:\mathbb{B}\mathop{\longrightarrow}G\) over a Lie group \(G\) is a \(G\)-equivariant vector bundle \(\pi\) on a Lie group \(G\) with an action given by the left translations. **A left invariant section**\(s\) is determined by its value \(s_{e}\) at \(e\in G\). This gives a map
\[\mathbb{L}:\mathbb{B}_{e}\mathop{\longrightarrow}H^{0}(G,\mathbb{B}),\quad s =\mathbb{L}(s_{e}), \tag{3.3}\]
\[\mathbb{L}(s_{e})(g)=s(g):=(L_{g})_{*}(s_{e}).\]
identified the fiber \(\mathbb{B}_{e}\) with the space \(H^{0}(G,\mathbb{B})\) of all \(G\)-invariant sections of \(\mathbb{B}\).
**Remark 3.3:** All tensor powers of a \(G\)-equivariant vector bundle are \(G\)-equivariant.
**Claim 3.4:** The category of left equivariant vector bundles on a Lie group \(G\) is equivalent to the category of vector spaces.
**Proof:** The group \(G\) acts on itself by the left multiplication freely and transitively. Any left equivariant bundle \(\mathbb{B}\) is trivial as a vector bundle and there is a basis of left invariant sections \(\{s_{1},\ldots,s_{n}\}\), such that \(s_{i}=\mathbb{L}(b_{i})\). For each left equivariant vector bundle the vector space of its left invariant sections gives a functor to the category of vector spaces. Conversely, given a vector space, consider it as a space of sections evaluated at the identity \(e\in G\), and then extend them via all left translations (3.3) to obtain a left equivariant vector bundle.
Now we are able to give a definition of an invariant connection on the bundle \(\mathbb{B}\). We use an algebraic connection (3.1) on the fiber \(B=\mathbb{B}_{e}\).
**Definition 3.5:** Let \(\pi:\mathbb{B}\mathop{\longrightarrow}X\) be a \(G\)-equivariant vector bundle over a manifold \(X\). A connection \(\overline{\nabla}\) on \(\mathbb{B}\) is called **equivariant** if it defines a left invariant differential operator
\[\overline{\nabla}:\mathbb{B}\mathop{\longrightarrow}\Lambda^{1}(X)\otimes \mathbb{B}.\]
**Claim 3.6:** A connection \(\overline{\nabla}:\mathbb{B}\mathop{\longrightarrow}\Lambda^{1}(G)\otimes \mathbb{B}\) on a left equivariant vector bundle \(\mathbb{B}\) over a Lie group \(G\) is equivariant if it satisfies
\[\overline{\nabla}(\mathbb{L}(s_{e}))=\mathbb{L}((\overline{\nabla}s)_{e}), \tag{3.4}\]
where \(\overline{\nabla}s\) is understood as a section of the left-equivariant bundle \(\Lambda^{1}(G)\otimes\mathbb{B}\), \(\mathbb{L}\) is the left-translation map defined in (3.3), and \((\overline{\nabla}s)_{e}\) means the value of the section in \(e\in G\).
**Definition 3.7:** Let \(\theta\in\mathfrak{g}^{*}\) and \(b\in B\). Define a map
\[d_{\nabla}:\mathfrak{g}^{*}\otimes_{\mathbb{R}}B\mathop{\longrightarrow} \Lambda^{2}\mathfrak{g}^{*}\otimes_{\mathbb{R}}B\]
by the following formula:
\[d_{\nabla}(\theta\otimes b)=d\theta\otimes b-\theta\wedge\nabla b. \tag{3.5}\]
This defines **the curvature of an algebraic connection**
\[\Theta=d_{\nabla}^{2}:B\mathop{\longrightarrow}\Lambda^{2}\mathfrak{g}^{*} \otimes B.\]
As in (3.2) one can also write \(\Theta=d\omega+\omega\wedge\omega\).
**Claim 3.8:** The curvature of a left equivariant connection on a G-equivariant bundle \(\pi:\mathbb{B}\mathop{\longrightarrow}X\) is an \(\mathbb{R}\)-linear map \(\Theta_{\overline{\nabla}}:\mathbb{B}\mathop{\longrightarrow}\Lambda^{2}(X) \otimes\mathbb{B}\) given by \(\Theta_{\overline{\nabla}}(s):=\mathbb{L}(d_{\nabla}^{2}(s_{e}))\).
We say that bundle \(\mathbb{B}\) is **flat** if \(\Theta_{\overline{\nabla}}=0\).
**Claim 3.9:** Let \(\pi:\mathbb{B}\mathop{\longrightarrow}G\) be a left equivariant vector bundle over a Lie group. Then the bundle is flat if and only if \(\nabla_{X}:\mathfrak{g}\mathop{\longrightarrow}\mathrm{End}(B)\) is a Lie algebra representation.
**Proof:** Assume that \(\nabla:\mathfrak{g}\mathop{\longrightarrow}B^{*}\otimes B\cong\mathrm{End}(B)\) is a representation and the map is given as follows \(X\mapsto\nabla_{X}\). Then for any \(X,Y\in\mathfrak{g}\) we have\([\nabla_{X},\nabla_{Y}]-\nabla_{[X,Y]}=0\), hence by Claim 3.8 the curvature vanishes. Conversely, if \(\Theta_{\overline{\nabla}}=0\), then \(\Theta_{\overline{\nabla}}=[\nabla_{X},\nabla_{Y}]-\nabla_{[X,Y]}=0\), hence \(\nabla_{X}\) is a representation.
Consider a left-equivariant vector bundle \(\pi:\mathbb{B}\mathop{\longrightarrow}G\) over a Lie group \(G\). Let \(\Gamma\) be a discrete subgroup of a Lie group \(G\) which acts on \(G\) from the left and \(q:G\mathop{\longrightarrow}\Gamma\backslash G\) the quotient map. Denote by \(\mathbb{B}_{\Gamma}\) the induced vector bundle \(\pi_{\Gamma}:\mathbb{B}_{\Gamma}\mathop{\longrightarrow}\Gamma\backslash G\) on a manifold \(\Gamma\backslash G\) obtained by taking the fiberwise quotients by \(\Gamma\) such that the following diagram is commutative
Given a vector bundle \((\mathbb{B},\overline{\nabla})\) with an equivariant connection \(\overline{\nabla}\), we denote by \(\overline{\nabla}^{\Gamma}\) the induced connection on a bundle \(\mathbb{B}_{\Gamma}\).
### Algebraic holonomy group
Recall that there is a bijection between the set of isomorphism classes of flat vector bundles over \(X\) and the set of conjugacy classes of homomorphisms \(\varphi:\pi_{1}(X)\mathop{\longrightarrow}\mathrm{GL}(\mathbb{R}^{n})\) (It is called "the Riemann-Hilbert correspondence", see e.g. [OV]).
Let \(p:E\mathop{\longrightarrow}M\) be a vector bundle over a manifold \(M\) and \(\delta:[0,1]\mathop{\longrightarrow}M\) a smooth path in \(M\). An equation \(\nabla_{\dot{\delta}(t)}v=0\) defines **a parallel transport**
of a section \(v\) of \(E_{|_{\delta(t)}}\) along the curve \(\delta\). The parallel transport along a loop \(\delta:[0,1]\mathop{\longrightarrow}M\) such that \(x=\delta(0)=\delta(1)\) gives an endomorphism \(p_{\delta}\in\mathop{\rm End}\nolimits(E_{x})\) of the fiber \(E_{x}\).
**Definition 3.10:** Let \(\nabla:E\mathop{\longrightarrow}\Lambda^{1}(M)\otimes E\) be a connection on a vector bundle \(\pi:E\mathop{\longrightarrow}G\). **The holonomy group** of the connection \(\nabla\) is the group of linear transformations of a fiber \(E_{x}\) given by all parallel translations along all smooth loops based at \(x\):
\[\mathop{\mathcal{H}\!\omega}\nolimits\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
### Maltsev completion
In this section, we assume that a group \(G\) is a finitely generated torsion-free nilpotent group.
**Definition 3.13**:: [Mal] A nilpotent group \(G\) is called **Maltsev complete** if for each \(g\in G\) and for all \(n\in\mathbb{Z}_{>0}\) the equation \(x^{n}=g\) has solutions in \(G\).
**Definition 3.14**:: [Mal] Let \(\Gamma\) be a subgroup of a Maltsev complete nilpotent group \(G\). Then the set \(\hat{\Gamma}=\{g\in G\,|\,g^{n}\in\Gamma,\,n\in\mathbb{Z}\}\subset G\) is called **the Maltsev completion** of a group \(\Gamma\). In [Mal], the definition of the Maltsev completion was given over the field \(\mathbb{Q}\) but it could be done over any field \(k\) of characteristic zero.
**Definition 3.15**:: [GH] Let \(k\) be a field of characteristic zero. **The Maltsev completion functor**\(\mathscr{M}_{k}\) is a functor from the category of finitely generated torsion-free nilpotent groups to the category of unipotent algebraic \(k\)-groups. If \(G\) is a finitely generated nilpotent group, then \(\mathscr{M}_{k}(G):=\overline{\Phi(G)}\), where
\[\Phi:G\longrightarrow\,\text{End}(B)\]
is a faithful unipotent representation and \(\overline{\Phi(G)}\) is the Zariski closure of the image, which lies in the subgroup of upper-triangular matrices.
**Remark 3.16**:: We will also denote the rational Maltsev completion of a group \(G\) by \(\hat{G}\), as it was introduced in Definition 3.13.
Below we are listing some properties of the Maltsev completion [GH]:
**Properties 3.17**::
1. A connected nilpotent Lie group is Maltsev complete;
2. Let \(\Gamma\) be a subgroup of a nilpotent torsion-free Lie group \(G\). By definition, \(\mathscr{M}_{k}(\Gamma)\subset G\) is a Maltsev complete group. Its isomorphism class does not depend on \(G\) and the embedding of \(\Gamma\) to \(G\);
3. The Maltsev completion \(\mathscr{M}_{k}(\Gamma)\) is a minimal complete subgroup which contains \(\Gamma\);
4. The Maltsev completion \(\mathscr{M}_{k}(\Gamma)\) is equipped with a natural structure of an algebraic group over \(k\).
**Claim 3.18:** The functor \(\mathscr{M}_{\mathbb{Q}}\) provides a bijection between the finite-dimensional unipotent representations of \(\Gamma\) over \(\mathbb{Q}\) and the finite-dimensional \(\mathbb{Q}\)-representations of \(\hat{\Gamma}\), considered as an algebraic group:
Moreover, the image \(\Phi(\Gamma)\) is Zariski dense in \(\hat{\Phi}(\hat{\Gamma})\).
**Proof:**[GH].
Note that in the context of Theorem 3.12, the holonomy group \(\mathscr{H}\mathscr{o}\mathscr{l}_{\nabla^{\Gamma}}\) is isomorphic to the image of the representation of the lattice subgroup \(\Gamma\):
\[\mathscr{H}\mathscr{o}\mathscr{l}_{\nabla^{\Gamma}}\cong\Phi(\Gamma). \tag{3.8}\]
The notion of a holonomy group has a geometric nature. We define an algebraic version of it as follows.
**Definition 3.19:** Let \(B\) be a vector space, \(\mathfrak{g}\) a Lie algebra, and \(\nabla:B\longrightarrow\mathfrak{g}^{*}\otimes B\) an algebraic connection (3.1). **An algebraic holonomy group \(\mathscr{H}\mathscr{o}\mathscr{l}_{\nabla}^{a}\)** is a subgroup of \(\operatorname{GL}(B)\) generated by the matrix exponents:
\[\mathscr{H}\mathscr{o}\mathscr{l}_{\nabla}^{a}:=\langle e^{t\nabla_{X}}\,|\,t \in\mathbb{R},\,\text{for all}\,X\in\mathfrak{g}\,\rangle.\]
**Remark 3.20:** We assume that an equivariant connection \(\nabla\hskip-5.0pt\nabla\) on a \(G\)-equivariant vector bundle \(B\) is flat. In this case, by Claim 3.9 the associated algebraic connection defines a representation of a Lie algebra \(X\mapsto\nabla_{X}\in\operatorname{End}(B)\).
Let \(G\) be a nilpotent group and \(\hat{G}\) its rational Maltsev completion, which is a rational algebraic group, i.e. it can be written as \(\hat{G}=\operatorname{Spec}(A)\), where \(A\) is its ring of regular functions.
**Remark 3.21:** In Definition 3.15 we defined the functor of Maltsev completion \(\mathscr{M}_{k}\) with coefficients in a field \(k\). **The real Maltsev completion \(\mathscr{M}_{\mathbb{R}}\)** can be equivalently defined as follows:
\[\hat{G}\otimes_{\mathbb{Q}}\mathbb{R}:=\operatorname{Spec}(A\otimes_{\mathbb{ Q}}\mathbb{R})=\mathscr{M}_{\mathbb{R}}(G).\]
**Theorem 3.22:** Let \((\mathbb{B},\mathbb{W})\) be a flat left equivariant vector bundle over a nilpotent Lie group \(G\), \(\Gamma\subset G\) a cocompact lattice, and \((\mathbb{B}_{\Gamma},\mathbb{W}^{\Gamma})\) the induced bundle over the nilmanifold \(\Gamma\backslash G\). Suppose that \(\nabla:B\longrightarrow\mathfrak{g}^{*}\otimes B\) is the algebraic connection associated with \(\mathbb{W}\). Assume that the monodromy representation \(\Phi:\Gamma\longrightarrow\,\text{End}(B)\) is unipotent.
Then
\[\overline{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}=\mathscr{H}\!o \ell_{\nabla}^{a}, \tag{3.9}\]
where \(\overline{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\subseteq\text{ GL}(B)\) denotes the Zariski closure of the monodromy group of \((\mathbb{B}_{\Gamma},\mathbb{W}^{\Gamma})\).
**Proof:** Consider the chain of embeddings
\[\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}\subseteq\widehat{ \mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\subseteq\widehat{ \mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\otimes_{\mathbb{Q}} \mathbb{R}, \tag{3.10}\]
where \(\widehat{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\) is the rational Maltsev completion of the group \(\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}\).
Notice that \(\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}\cong\Phi(\Gamma)\) is Zariski dense in \(\widehat{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\) by [BrVer, Lemma 3.11] and \(\widehat{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\) is Zariski dense in \(\widehat{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\otimes_{\mathbb{Q }}\mathbb{R}\). Hence,
\[\overline{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}=\widehat{ \mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\otimes_{\mathbb{Q}} \mathbb{R}. \tag{3.11}\]
By Definition 3.19, the algebraic holonomy group \(\mathscr{H}\!o\ell_{\nabla}^{a}\subseteq\widehat{\mathscr{H}\!o\ell_{ \overline{\mathbb{W}}^{\Gamma}}}\) is a real group, which also contains \(\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}\). Therefore, it has to be isomorphic to \(\widehat{\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}}\otimes_{\mathbb{Q }}\mathbb{R}\), because of the minimality of the real Maltsev completion.
**Corollary 3.23:** Let \(\Gamma\backslash G\) be a hypercomplex nilmanifold with the flat Obata conection \(\nabla^{Ob}\) (1.1) on \(TG\). Then the action of the algebraic holonomy on \(\mathfrak{g}=\text{Lie}(G)\) is unipotent.
**Proof:** Let \(\nabla\) be a flat algebraic connection on the Lie algebra \(\mathfrak{g}=T_{e}G\), associated with \(\nabla^{Ob}\). Define the action \(\mathscr{H}\!o\ell_{\nabla}^{a}\times\mathfrak{g}\longrightarrow\mathfrak{g}\) on \(\mathfrak{g}\) as follows:
\[(e^{\nabla_{X}},Y)\mapsto e^{\nabla_{X}}\cdot Y, \tag{3.12}\]
where \(X,Y\in\mathfrak{g}\). By Theorem 1.9, the monodromy representation \(\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}\) is unipotent. From Theorem 3.22 it follows that \(\mathscr{H}\!o\ell_{\overline{\mathbb{W}}^{\Gamma}}\) is Zariski dense in \(\mathscr{H}\!o\ell_{\nabla}^{a}\). Hence \(\mathscr{H}\!o\ell_{\nabla}^{a}\) also unipotent.
### Unipotent holonomy group and \(\mathbb{H}\)-solvability
Let \((G,I,J,K)\) be a nilpotent Lie group with a left-invariant hypercomplex structure. In what follows we denote by \(\nabla\) the Obata connection (1.1), which we assumed to be flat, by \(\mathcal{H}\!o\!f_{\nabla}\) and \(\mathcal{H}\!o\!f_{\nabla}^{a}\) its holonomy (monodromy) and algebraic holonomy groups respectively.
Let \(\mathfrak{g}\) be a nilpotent hypercomplex Lie algebra. Consider the smallest \(\mathbb{H}\)-invariant subspace of \(\mathfrak{g}\) containing the commutator ideal \([\mathfrak{g},\mathfrak{g}]\):
\[\mathfrak{g}_{1}^{\mathbb{H}}:=\mathbb{H}[\mathfrak{g},\mathfrak{g}]=[ \mathfrak{g},\mathfrak{g}]+I[\mathfrak{g},\mathfrak{g}]+J[\mathfrak{g}, \mathfrak{g}]+K[\mathfrak{g},\mathfrak{g}].\]
Since \(\mathbb{H}[\mathfrak{g},\mathfrak{g}]\) contains the commutator ideal, it is an ideal, hence a subalgebra of \(\mathfrak{g}\).
Define inductively the \(\mathbb{H}\)-invariant Lie subalgebras:
\[\mathfrak{g}_{i}^{\mathbb{H}}:=\mathbb{H}[\mathfrak{g}_{i-1}^{\mathbb{H}}, \mathfrak{g}_{i-1}^{\mathbb{H}}],i\in\mathbb{Z}_{\geqslant 2}.\]
**Claim 3.24:** Let \((\mathfrak{g},I,J,K)\) be a hypercomplex nilpotent Lie algebra with the flat Obata connection. Then \(\nabla_{X}Y\in\mathfrak{g}_{i+1}^{\mathbb{H}}\) for any \(X,Y\in\mathfrak{g}_{i}^{\mathbb{H}}\).
**Proof:** Suppose that \(X,Y\in\mathfrak{g}_{i}^{\mathbb{H}}\). Then by (1.1) \(\nabla_{X}Y\in\mathfrak{g}_{i+1}^{\mathbb{H}}\).
For each \(i\in\mathbb{Z}_{\geqslant 0}\), \(X\in\mathfrak{g}_{i}^{\mathbb{H}}\), the elements \(e^{\nabla_{X}}\in\mathrm{End}(\mathfrak{g}_{i}^{\mathbb{H}})\) generate the algebraic holonomy group, associated to the Lie subalgebra \(\mathfrak{g}_{i}^{\mathbb{H}}\):
\[\mathcal{H}\!o\!f_{\nabla^{i}}^{a}=\langle e^{t\nabla_{X}}\,|\,t\in\mathbb{R},\,X\in\mathfrak{g}_{i}^{\mathbb{H}}\rangle.\]
**Remark 3.25:** Notice that \(\mathcal{H}\!o\!f_{\nabla^{i}}^{a}\) acts on \(\mathfrak{g}_{i}^{\mathbb{H}}\) as in (3.12) and it could happen that it is not a subgroup or even a subset of the algebraic holonomy group \(\mathcal{H}\!o\!f_{\nabla}^{a}\). Indeed, the action of \(\nabla_{X}\) on \(\mathfrak{g}_{i}^{\mathbb{H}}\) could be trivial, but act non-trivially on \(\mathfrak{g}\) for some \(X\in\mathfrak{g}_{i}^{\mathbb{H}}\). However, the action of \(\mathcal{H}\!o\!f_{\nabla^{i}}^{a}\) on \(\mathfrak{g}_{i}^{\mathbb{H}}\) coincides with the action of the corresponding subgroup of \(\mathcal{H}\!o\!f_{\nabla}^{a}\). Therefore, it is unipotent on \(\mathfrak{g}_{i}^{\mathbb{H}}\) if \(\mathcal{H}\!o\!f_{\nabla}^{a}\) is unipotent.
The following theorem is the main result of this paper.
**Theorem 3.26:** Let \((\mathfrak{g},I,J,K)\) be a hypercomplex nilpotent Lie algebra with the flat Obata connecton. Assume that the algebraic monodromy representation is unipotent. Then \(\mathfrak{g}\) is \(\mathbb{H}\)-solvable.
**Proof:** The action of \(\mathcal{H}\!o\!f^{a}_{\nabla^{i}}\) on the Lie subalgebra \(\mathfrak{g}^{\mathbb{H}}_{i}\) defined as in (3.12). By the assumption, the representation of the algebraic holonomy on \(\mathfrak{g}\) is unipotent, hence, by Remark 3.25, the action of \(\mathcal{H}\!o\!f^{a}_{\nabla^{i}}\) is unipotent on the \(\mathfrak{g}^{\mathbb{H}}_{i}\).
Let \(\varphi:\mathcal{H}\!o\!f^{a}_{\nabla^{i}}\longrightarrow\mathrm{End}(( \mathfrak{g}^{\mathbb{H}}_{i})^{*})\) be the action of the algebraic holonomy group on the dual Lie algebra \((\mathfrak{g}^{\mathbb{H}}_{i})^{*}\). Every unipotent representation of a Lie algebra has an invariant vector. This implies the existence of a non-zero Obata-parallel 1-form for \(\alpha\in\Lambda^{1}(\mathfrak{g}^{\mathbb{H}}_{i})^{*}\). Again, we consider the holonomy action on \((\mathfrak{g}^{\mathbb{H}}_{i})^{*}\) and for each \(i\) we obtain a non-zero parallel (hence, closed1) 1-form \(\alpha_{i}\in\Lambda^{1}(\mathfrak{g}^{\mathbb{H}}_{i})^{*}\).
Footnote 1: For any torsion-free connection \(\nabla\) and any 1-form \(\beta\) we have the equality \(d\beta=\mathrm{Alt}(\nabla\beta)\), where \(\mathrm{Alt}\) is the skew-symmetrization map.
The intersection of kernels \(\Sigma_{\alpha_{i}}=\ker\alpha_{i}\cap\ker I\alpha_{i}\cap\ker J\alpha_{i} \cap\ker K\alpha_{i}\) gives an \(\mathbb{H}\)-invariant foliation, which contains \(\mathfrak{g}^{\mathbb{H}}_{i}\) as a proper subspace (Remark 2.3). This implies \(\mathfrak{g}^{\mathbb{H}}_{i+1}\subsetneq\mathfrak{g}^{\mathbb{H}}_{i}\). The sequence \(\mathfrak{g}\subsetneq\mathfrak{g}^{\mathbb{H}}_{1}\subsetneq\dots\) terminates in finitely many steps since \(\mathfrak{g}\) is finite-dimensional.
**Corollary 3.27:** Let \((N,I,J,K)\) be a hypercomplex nilmanifold with flat Obata connection \(\nabla\). Then the Lie algebra \(\mathfrak{g}=\mathrm{Lie}(G)\) is \(\mathbb{H}\)-solvable.
**Proof:** By Corollary 3.23, the representation of the algebraic holonomy on \(\mathfrak{g}\) is unipotent.
|
2310.12662 | A mathematical foundation for self-testing: Lifting common assumptions | In this work we study the phenomenon of self-testing from the first
principles, aiming to place this versatile concept on a rigorous mathematical
footing. Self-testing allows a classical verifier to infer a quantum mechanical
description of untrusted quantum devices that she interacts with in a black-box
manner. Somewhat contrary to the black-box paradigm, existing self-testing
results tend to presuppose conditions that constrain the operation of the
untrusted devices. A common assumption is that these devices perform a
projective measurement of a pure quantum state. Naturally, in the absence of
any prior knowledge it would be appropriate to model these devices as measuring
a mixed state using POVM measurements, since the purifying/dilating spaces
could be held by the environment or an adversary.
We prove a general theorem allowing to remove these assumptions, thereby
promoting most existing self-testing results to their assumption-free variants.
On the other hand, we pin-point situations where assumptions cannot be lifted
without loss of generality. As a key (counter)example we identify a quantum
correlation which is a self-test only if certain assumptions are made.
Remarkably, this is also the first example of a correlation that cannot be
implemented using projective measurements on a bipartite state of full Schmidt
rank. Finally, we compare existing self-testing definitions, establishing many
equivalences as well as identifying subtle differences. | Pedro Baptista, Ranyiliu Chen, Jędrzej Kaniewski, David Rasmussen Lolck, Laura Mančinska, Thor Gabelgaard Nielsen, Simon Schmidt | 2023-10-19T11:41:31Z | http://arxiv.org/abs/2310.12662v1 | # A mathematical foundation for self-testing: Lifting common assumptions
###### Abstract
In this work we study the phenomenon of self-testing from the first principles, aiming to place this versatile concept on a rigorous mathematical footing. Self-testing allows a classical verifier to infer a quantum mechanical description of untrusted quantum devices that she interacts with in a black-box manner. Somewhat contrary to the black-box paradigm, existing self-testing results tend to presuppose conditions that constrain the operation of the untrusted devices. A common assumption is that these devices perform a _projective_ measurement of a _pure_ quantum state. Naturally, in the absence of any prior knowledge it would be appropriate to model these devices as measuring a mixed state using POVM measurements, since the purifying/dilating spaces could be held by the environment or an adversary.
We prove a general theorem allowing to remove these assumptions, thereby promoting most existing self-testing results to their assumption-free variants. On the other hand, we pin-point situations where assumptions cannot be lifted without loss of generality. As a key (counter)example we identify a quantum correlation which is a self-test only if certain assumptions are made. Remarkably, this is also the first example of a correlation that cannot be implemented using projective measurements on a bipartite state of full Schmidt rank. Finally, we compare existing self-testing definitions, establishing many equivalences as well as identifying subtle differences.
###### Contents
* 1 Introduction
* 1.1 Results
* 1.2 Structure of the paper
* 2 Preliminaries
* 2.1 Notation
* 2.2 Nonlocal games and strategies
* 2.3 Self-testing: Definitions
* 3 Key tools and concepts
* 3.1 Facts about local dilations
* 3.2 Nearly support-preserving strategies
* 3.3 Nearly projective strategies
* 3.4 Restrictions of nonlocal strategies
* 3.5 Naimark dilation of nonlocal strategies
* 4 Lifting Assumptions
* 4.1 Lifting the PVM assumption
* 4.2 Lifting the full-rank assumption
* 4.3 Lifting the purity assumption
* 4.3.1 Eigenspace of game operator
* 4.3.2 Pure self-tests imply mixed self-tests
* 4.4 Proof of Theorem 4.1
* 5 Equivalence of Definitions
* 5.1 Local dilation in a matrix form
* 5.2 Extraction local dilation
* 6 Separation of Definitions
* 6.1 Separating pure full-rank self-tests and pure self-tests
* 6.2 Separating pure PVM self-tests and pure self-tests
* 6.3 Separating (standard) self-tests and abstract state self-tests
* 7 Acknowledgements
* A Self-testing from correlation
Introduction
_Self-testing_ was first introduced by Mayers and Yao [14]. Over the years it has evolved into an active research field with widespread applications that well surpass the initial expectations (see review paper [2]). It could be argued that the successful applications of self-testing have outpaced a thorough examination and rigorous development of the underlying mathematical formalism. Addressing this issue is the overarching goal of this work which we believe will further broaden and ease future applications of self-testing.
The original motivation of self-testing is that it can be used for certifying quantum devices. Since the properties of quantum systems are inherently difficult to observe directly, the problem of certifying that a quantum device functions according to its specification is challenging. Self-testing provides the strongest form of certification in this context. Specifically, it allows untrusted parties to convince a classical verifier that their shared quantum memory holds a specific state on which they are able to perform certain quantum measurements.
In addition to its fundamental role in certification, self-testing techniques are increasingly being used for other purposes. These include protocols for delegated quantum computation, verifiable randomness generation, device-independent cryptography, Bell nonlocality, and quantum complexity theory. In fact some of the biggest recent breakthroughs like the \(\text{MIP}^{*}=\text{RE}\)[14] crucially rest on self-testing techniques.
The concept of self-testing can be framed in the context of nonlocal games [1], which involve two untrusted provers, Alice and Bob, and a verifier. The provers respond to questions from the verifier and their win or loss is determined by a predefined function. Crucially, Alice and Bob cannot communicate after receiving the questions but can agree on a strategy beforehand. In a quantum strategy, \(S=(\rho,\{A_{xa}\},\{B_{yb}\})\), they share an entangled state, \(\rho\), and employ local measurements to obtain their answers. We refer to a quantum strategy the provers use as an **arbitrary strategy** whereas a quantum strategy we would like to certify as a **canonical strategy**. In this context, self-testing posits that any arbitrary strategy that optimally wins a nonlocal game must be equivalent to a canonical strategy for that game, up to a local isometry. This conceptual idea of self-testing has been formalized in many different, albeit similar, definitions [14, 2, 3] which have then been employed to establish self-testing theorems. This naturally evokes the following questions:
**Question 1**.: _What is the relationship between the existing definitions of self-testing (and hence the obtained self-testing theorems)? Which is the strongest or the "right" definition of self-testing?_
Understanding the difference between the existing definitions of self-testing is further complicated by the fact that most authors do not allow the arbitrary strategy to take the most general form allowed by quantum mechanics (POVM measurements on a mixed quantum state). A priori this weakens the resulting notion of self-testing and goes against the idea that the untrusted provers should be allowed to be all-powerful. Most of the existing self-testing results place at least one of the following three assumptions on the arbitrary strategy, \(S=(\rho,\{A_{xa}\},\{B_{yb}\})\), employed by Alice's and Bob's untrusted quantum devices:
1. the shared state, \(\rho\), is pure,
2. the shared state has full Schmidt rank1, Footnote 1: For example, the state \(|\psi\rangle=1/\sqrt{2}(|00\rangle+|11\rangle)\in\mathbb{C}^{2}\otimes\mathbb{C} ^{2}\) is full-rank, while \(|\psi^{\prime}\rangle=1/\sqrt{2}(|00\rangle+|22\rangle)\in\mathbb{C}^{3}\otimes \mathbb{C}^{3}\) is not.
3. the measurements \(\{A_{xa}\}\) and \(\{B_{pb}\}\) are projective measurements (PVM's).
Depending on which assumptions are made on the arbitrary strategy, we refer to the resulting self-tests as pure/ full-rank/ PVM self-tests. For example, a PVM self-test means that any PVM strategy can be mapped by a local isometry to the canonical strategy but such a mapping need not exist for non-projective strategies2. If the arbitrary strategy is not restricted in any way, we say that the resulting self-test is _assumption-free_. The above assumptions (1)-(3) give rise to a hierarchy of self-tests: for instance, every pure PVM self-test is also a PVM self-test. Hardly any of the existing self-tests are proven to be assumption-free. In particular, it is very common to only consider arbitrary strategies which measure a pure state with projective measurements (Assumptions (1) & (3)). This leads to several intriguing questions:
Footnote 2: One might attempt to get rid of this assumption by dilating the non-projective strategy to a projective one with Naimark dilation. Unfortunately, the isometry that exists for Naimark dilation does not directly work for the original non-projective measurement in the sense of the self-testing definition. At a conceptual level this argument is problematic if we consider that the dilating space could be held by an adversary or the environment.
**Question 2**.: _Which of the assumptions (1)-(3) can (or cannot) be lifted without loss of generality? Can things go wrong in the most general case?_
To gain intuition of the potential consequences of making unjustified assumptions, consider an example from [10] where two provers receive a single question each and produce a perfectly correlated bit. This can be achieved with a classical, separable mixed state: no quantum entanglement needed. However, if we assume that the perfectly correlated bit is produced by measuring a pure state, then this state needs to be entangled, leading to an entirely different analysis and conclusions. To give a more practical example, in device-independent random number generation, randomness is secure if it is not predictable by a third party [1]. Then the purity assumption oversimplifies and invalidate the security analysis, as there is no way any third party is entangled with a pure state. The assumption that all measurements are projective is sometimes made for the sake of simplicity or due to historical precedent. On the other hand, we know that non-projective measurements are essential for certain tasks in quantum error correction and state discrimination. Adhering to this assumption could therefore unnecessarily restrict the applicability of self-testing methods. From a philosophical standpoint, making additional assumption goes against the idea of self-testing, which aims to make as few assumptions as possible. This is particularly important in cryptographic contexts where fewer assumptions often translate into stronger security guarantees.
Shifting the attention to the canonical strategy, we can inquire about the limits of self-testing, namely, which states and measurements can we hope to self-test:
**Question 3**.: _Which strategies can or cannot be self-tested?_
It is known that mixed states cannot be self-tested (see _e.g._[14, Sect. 3.5]). However, it remains uncertain whether strategies containing non-full-rank pure states or non-projective measurements can be self-tested. Additionally, the ability to self-test certain strategies might depend on the specific definition of self-testing or the assumptions regarding the arbitrary strategies employed.
Moving beyond self-testing, the 2-party Bell scenario presents an intriguing question: what is the _simplest_ form of a strategy realizing a given bipartite correlation? Conventional purification and Naimark dilation arguments show that any quantum correlation can be realized by measuring a pure state with local projective measurements. In a similar vein, by restricting a strategy to the state's local supports, we can obtain a strategy which utilizes a pure state of full Schmidt rank. However, these standard arguments fall short of providing a strategy that simultaneously has all three desired attributes: purity, full rank, and projective measurements. This leads us to the following question:
**Question 4**.: _Can every bipartite quantum correlation be realized by locally measuring a shared state of full Schmidt rank with projective measurements?_
### Results
We study self-testing from first principles. This includes determining the assumptions that can be made without sacrificing generality, as well as identifying which quantum states and measurements are amenable to self-testing. At a conceptual level, we make the following contributions:
* We put the concept of self-testing on a rigorous mathematical footing. This includes identifying new key concepts (_e.g._ "support-preserving strategy"), putting forth definitions for robust versions of properties like "projectivity" or "being support-preserving", and unifying the existing self-testing definitions.;
* We establish which strategies can or cannot be self-tested in an assumption-free manner,
* We delineate which combinations of assumptions (1)-(3) can or cannot be lifted;
* We provide a previously unknown reference example of a quantum correlation that cannot be realized by projective measurements on a full-rank state.
To start things off, we examine and compare existing self-testing definitions. We show that some of these definitions are indeed equivalent while highlighting subtle differences and pinpointing assumptions which potentially cannot be lifted, thus giving a better understanding of Question 1.
**Theorem A**.: _(informal version of the results in Section 5) The existing definitions of self-testing are equivalent in certain natural settings. In the general case we identify definitions that yield weaker notion of self-testing (See also Theorem B.2)._
To address Question 2 we establish the following theorem:
**Theorem B.1**.: _(Theorem 4.1) Let \(G\) be a nonlocal game._
* _Let_ \(\tilde{S}\) _be a strategy for_ \(G\) _that measures a pure state of full Schmidt rank._ _If_ \(G\) _is a pure PVM self-test of_ \(\tilde{S}\) _then it is also an assumption-free self-test of_ \(\tilde{S}\)_._
* _Let_ \(\tilde{S}\) _be a strategy for_ \(G\) _that uses only projective measurements._ _If_ \(G\) _is a pure full-rank self-test of_ \(\tilde{S}\) _then it is also an assumption-free self-test of_ \(\tilde{S}\)_._
**Remark**.: _Theorem B.1 (as well as Theorem C given below) also hold for self-tests from Bell inequalities and extreme quantum correlations. In Section 4 we establish robust versions of these results._
Theorem B.1 shows that both the "purity + projectivity" and "purity + full-rank" assumptions can consistently be lifted, thus _elevating_ most (if not all) of the existing self-tests to their assumption-free versions. We view this as the most practically impactful contribution of our work. Essentially, Theorem B.1 enables us to sidestep cumbersome general strategies that involve non-projective measurements and arbitrary mixed states, thereby simplifying the proof process for self-testing theorems. Part (a) of Theorem B.1 answers the question raised in [1, Appendix B.2].
It is natural to ask whether without loss of generality we can restrict to full-rank _and_ projective arbitrary strategies. We conjecture that this is not possible in general, since we do not know a general construction allowing to promote an arbitrary strategy to an equivalent strategy that is simultaneously full-rank and projective.
**Conjecture**.: _There is a nonlocal game that is a pure full-rank PVM self-test for a projective full-rank strategy \(\tilde{S}\), which is not an assumption-free self-test for \(\tilde{S}\)._
Figure 1 succinctly illustrates Theorem B.1 as well as our conjecture.
Taking another look at Theorem B.1, one can ask whether the same conclusions hold if the canonical strategy \(\tilde{S}\) is not projective and/or full-rank. We show that in any more general situation, there are examples of restricted self-tests that are not assumption-free self-tests, giving an answer to the second part of Question 2.
**Theorem B.2**.: _(Corollary 6.2 and Corollary 6.9) We exhibit an extreme quantum correlation \(\tilde{p}\) that can be realized by two different strategies. The first strategy, \(S_{\text{full-rank}}\), measures a full-rank shared state with non-projective measurements while the second strategy, \(S_{\text{proj}}\), measures a non-full-rank shared state with projective measurements3. We have that_
Footnote 3: Essentially we can take \(S_{\text{proj}}\) to be the Naimark dilation of \(S_{\text{full-rank}}\).
* \(\tilde{p}\) _is a PVM self-test of_ \(S_{\text{proj}}\)_,_
* \(\tilde{p}\) _is a full-rank self-test of_ \(S_{\text{full-rank}}\)_,_
* \(\tilde{p}\) _is_ **not** _an assumption-free self-test (no matter what canonical strategy we choose),_
* \(\tilde{p}\) _cannot be realized by performing projective measurements on a shared state with full Schmidt rank. (This answers Question 4 in the negative.)_
The correlation \(\tilde{p}\) highlights that we need to be very careful with the assumptions imposed on the arbitrary strategies as these assumptions can yield weaker versions of self-testing.
Figure 1: Implications of Theorem B.1 for a canonical full-rank projective strategy. Specifically, Theorem B.1 (a) consists of Theorem 4.3 and 4.7. And Theorem B.1 (b) consists of Theorem 4.5 and 4.7. For the implications with ‘?’ we conjecture them to be false.
Indeed, assuming that the arbitrary strategy is projective (or full-rank) allows us to obtain a self-testing result for \(\tilde{p}\) while this fails to hold in the absence of any assumptions.
To obtain the correlation \(\tilde{p}\) we combine the CHSH inequality with another Bell inequality, in which Alice holds the same measurement operators as in CHSH and Bob gets an additional three-outcome measurement.
Finally, one might ask if it is possible to self-test canonical strategies \(\tilde{S}\) that are not full-rank or projective. Using the properties of Naimark dilation and restriction, we show that this is essentially not the case, thus answering Question 3.
**Theorem C**.: _(An abridged version of Theorem 4.2) If a nonlocal game is an assumption-free self-test, then there exists a full-rank projective strategy that is self-tested by this game. Moreover, full-rank non-projective strategies cannot be self-tested in an assumption-free way._
A crucial take-away from the above Theorem C is that it is not possible to self-test non-projective measurements.
We remark that for the case of pure self-tests from correlations, a conclusion similar to the first part of Theorem C has also been reached in [14, Proposition 4.14] via a different approach.
### Structure of the paper
In Section 2, we recall basic notions in nonlocal games and explain the different self-testing definitions we are using throughout the paper. Section 3 gives the tools we are using for the main results. We show how strategies being support-preserving or projective is connected to local dilation. In addition, we look at restrictions and Naimark dilations of strategies which can also be related to such strategies. For an overview of this, see Figure 2. Then, in Section 4, we have all the tools to prove our main result: Theorem B.1. Here we lift common assumptions in self-testing theorems, in different steps. First, we lift the PVM assumption, then the full-rank assumption and finally the assumption that the state in an arbitrary strategy is pure. Using the tools from the previous section, we are also able to prove Theorem C here. After this, we compare existing definitions for local dilations for strategies using mixed states and show their equivalence in Section 5. Finally, in Section 6, we proceed by proving Theorem B.2. Here, we give examples showing we cannot lift our assumptions on self-testing theorems in general.
## 2 Preliminaries
### Notation
We state some notations and basic facts we need throughout the article. Unless specified otherwise, we assume all Hilbert spaces to be finite-dimensional. For any vector \(v\in\mathcal{H}\), we denote its norm by \(\left\|v\right\|=\left\langle v,v\right\rangle^{\frac{1}{2}}\). We write \(u\approx_{\varepsilon}v\) if \(\left\|u-v\right\|\leq\varepsilon\).
A _pure state_\(\left|\psi\right\rangle\) is a unit vector in a Hilbert space \(\mathcal{H}\). For a bipartite pure state \(\left|\psi\right\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), we can consider its _Schmidt decomposition_
\[\left|\psi\right\rangle=\sum_{i=0}^{k-1}\alpha_{i}\left|e_{i}\right\rangle \left|f_{i}\right\rangle \tag{1}\]
where \(\alpha_{i}>0\) and both \(\{|e_{i}\rangle\}_{i=0}^{k-1}\subseteq\mathcal{H}_{A}\) and \(\{|f_{i}\rangle\}_{i=0}^{k-1}\subseteq\mathcal{H}_{B}\) are orthonormal sets. Note that (1) only includes terms with positive Schmidt coefficients. We refer to the number \(k\) as the _Schmidt rank_ of the state \(|\psi\rangle\) and if \(k=\dim(\mathcal{H}_{A})=\dim(\mathcal{H}_{B})\) then we say that \(|\psi\rangle\) has full Schmidt rank, or \(|\psi\rangle\) is _full-rank_ for simplicity. We also define \(\operatorname{supp}_{A}|\psi\rangle:=\operatorname{span}\left\{|e_{0}\rangle \,,\ldots|e_{k-1}\rangle\right\}\subseteq\mathcal{H}_{A}\), and similarly, \(\operatorname{supp}_{B}|\psi\rangle:=\operatorname{span}\left\{|f_{0}\rangle \,,\ldots,|f_{k-1}\rangle\right\}\subseteq\mathcal{H}_{B}\).
A _mixed state_\(\rho\) is represented by a positive semi-definite, self-adjoint matrix with trace one, also called density matrix. Every mixed state can be written in the form \(\rho=\sum_{i=1}^{n}p_{i}\,|\psi_{i}\rangle\langle\psi_{i}|\) for a probability vector \((p_{i})_{i=1}^{n}\) and pure states \(|\psi_{i}\rangle\). A _purification_ of a mixed state \(\rho\in B(\mathcal{H})\) is a pure state \(|\psi\rangle\in\mathcal{H}\otimes\mathcal{H}_{P}\) for some purification Hilbert space \(\mathcal{H}_{P}\) such that \(\rho=\operatorname{Tr}_{P}(|\psi\rangle\langle\psi|)\). Here \(\operatorname{Tr}_{P}\) denotes the partial trace over the Hilbert space \(\mathcal{H}_{P}\).
We will be working with the following definition of measurements.
**Definition 2.1** (Povm).: _A positive, operator-valued measurement (POVM) is a set of positive, self-adjoint operators \(\{E_{i}\}_{i=1}^{n}\) in \(B(\mathcal{H})\) such that_
\[\sum_{i=1}^{n}E_{i}=\mathbbm{1}_{B(\mathcal{H})}.\]
_Furthermore, if all operators are projections (\(E_{i}=E_{i}^{2}=E_{i}^{*}\) for all \(i\)), then we call it projective measurement (PVM)._
### Nonlocal games and strategies
**Definition 2.2** (Nonlocal game).: _A nonlocal game \(G\) is a tuple \((\mathcal{S},\mathcal{T},\mathcal{A},\mathcal{B},\pi,\mathcal{V})\) of a probability distribution of the questions \(\pi:\mathcal{S}\times\mathcal{T}\to[0,1]\) and a verification function \(\mathcal{V}:\mathcal{A}\times\mathcal{B}\times\mathcal{S}\times\mathcal{T} \to\{0,1\}\), where \(\mathcal{S}\) and \(\mathcal{T}\) are finite sets of questions for Alice and Bob respectively, and \(\mathcal{A}\) and \(\mathcal{B}\) are finite sets of answers for Alice and Bob respectively._
A nonlocal game \(G\) is a cooperative game played by two players, which we typically call Alice and Bob, and a referee. The game is played the following way: Before the game starts, Alice and Bob agree on some strategy, including possibly sharing a quantum state. During the game, the players are not allowed to communicate, but they can perform measurements on their own parts of the shared state. Alice and Bob each receive a question \(s\) and \(t\) respectively from predetermined sets of questions \(\mathcal{S}\) and \(\mathcal{T}\), determined by the probability distribution \(\pi\). Each of them then gives an answer \(a\) and \(b\) from their answer sets \(\mathcal{A}\) and \(\mathcal{B}\), respectively. They then win if \(\mathcal{V}(a,b|s,t)=1\). In such games, the behaviour of Alice and Bob are described as _quantum strategies_.
**Definition 2.3** (Strategy).: _A (tensor-product) quantum strategy for a nonlocal game \(G=(\mathcal{S},\mathcal{T},\mathcal{A},\mathcal{B},\pi,\mathcal{V})\) is a tuple_
\[S=(\rho_{AB},\{A_{sa}\}_{s\in\mathcal{S},a\in\mathcal{A}},\{B_{tb}\}_{t\in \mathcal{T},b\in\mathcal{B}}), \tag{2}\]
_consisting of a shared density operator \(\rho_{AB}\in B(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\), where \(\mathcal{H}_{A}\) is the state space of Alice and \(\mathcal{H}_{B}\) is the state space of Bob. Furthermore, for each \(s\in\mathcal{S}\), the set \(\{A_{sa}\}_{a\in\mathcal{A}}\subset B(\mathcal{H}_{A})\) is a POVM on \(\mathcal{H}_{A}\), and for each \(t\in\mathcal{T}\), the set \(\{B_{tb}\}_{b\in\mathcal{B}}\subset B(\mathcal{H}_{B})\) is a POVM on \(\mathcal{H}_{B}\). We identify the following special cases (which are not mutually exclusive):_
* _If_ \(\rho_{AB}=|\psi\rangle\langle\psi|\) _for some pure state_ \(|\psi\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\)_, we refer to the quantum strategy as_ pure_. In this case, we may replace_ \(\rho_{AB}\) _with_ \(|\psi\rangle\) _in (_2_)._
* _If both marginal states_ \(\rho_{A}:=\operatorname{Tr}_{B}[\rho_{AB}],\rho_{B}:=\operatorname{Tr}_{A}[\rho_{ AB}]\) _have rank equal to the dimension of corresponding Hilbert space, we may refer to the quantum strategy as_ full-rank_. In the case of pure state_ \(\rho_{AB}=|\psi\rangle\)_, this is equivalent to_ \(|\psi\rangle\) _having full Schmidt rank._
* _If all POVM elements_ \(A_{sa}\) _and_ \(B_{tb}\) _are projectors, then we refer to the quantum strategy as_ projective_. Otherwise, we call it_ non-projective_._
We will write \(\{A_{sa}\}_{s\in\mathcal{S},a\in\mathcal{A}}\) as \(\{A_{sa}\}\) when from the context it is clear that the set is indexed over the sets \(\mathcal{S}\) and \(\mathcal{A}\). We will use analogous notation for Bob's measurements \(\{B_{tb}\}\). We will refer to a quantum strategy simply as a strategy in the following.
It is easy to compute the winning probability when using a particular strategy \(S\) for a game \(G\). Sometimes, it will be useful to collect all the information regarding game \(G\) and the employed measurements in a single operator \(W\).
**Lemma 2.4**.: _Let \(G=(\mathcal{S},\mathcal{T},\mathcal{A},\mathcal{B},\pi,\mathcal{V})\) be a nonlocal game and \(S=(\rho_{AB},\{A_{sa}\},\{B_{tb}\})\) a strategy for \(G\). Define \(W\) as_
\[W:=\sum_{a,b,s,t}\pi(s,t)\mathcal{V}(a,b|s,t)(A_{sa}\otimes B_{tb}).\]
_Then the probability of winning the game \(\omega(S,G)\) using the strategy \(S\) can be found as_
\[\omega(S,G)=\operatorname{Tr}(W\rho_{AB}).\]
Proof.: We show this by rewriting \(\omega(S,G)\) using the definition of \(W\) and linearity of the trace,
\[\omega(S,G) =\sum_{s,t}\pi(s,t)\sum_{a,b}\mathcal{V}(a,b|s,t)\operatorname{Tr }((A_{st}\otimes B_{tb})\rho_{AB})\] \[=\operatorname{Tr}(\left(\sum_{s,t}\sum_{a,b}\pi(s,t)\mathcal{V} (a,b|s,t)(A_{st}\otimes B_{tb})\right)\rho_{AB})\] \[=\operatorname{Tr}(W\rho).\]
Furthermore, we define \(\omega_{q}(G):=\sup_{S}\omega(S,G)\), where the supremum is taken over all (tensor-product) strategies \(S\) which are compatible with \(G\). We refer to \(\omega_{q}(G)\) as the _optimal_ or _maximal_ quantum value of \(G\). We call \(S\) an optimal strategy if \(\omega(S,G)=\omega_{q}(G)\). Note that in general, it may not be possible to obtain \(\omega_{q}(G)\) with any tensor-product strategy[14]. For \(\delta\geq 0\), we say that \(S\) is \(\delta\)-optimal if \(\omega(S,G)\geq\omega_{q}(G)-\delta\).
### Self-testing: Definitions
The main topic of this work, _self-testing_, asks whether the state and the measurements used in an optimal quantum strategy for a nonlocal game are unique, up to local isometries. The reason this is useful is that it enables us to make conclusions about the strategy only from the observed probability of winning. Consider a situation where we make Alice and Bob play a nonlocal game that self-tests an optimal strategy. If in this setting we observe that Alice and Bob achieve the quantum value of the game, we can guarantee that Alice and Bob must have used a strategy that is equivalent to the optimal one up to local isometries.
We will also discuss the concept of _robust self-testing_. Here, we ask that the states and measurements of an almost optimal strategy are close to those of a (canonical) optimal strategy, up to local isometries.
In the literature, strategies are often assumed to use pure states and projective measurements, and so common formulations of self-testing reflect this fact. We will be working with a definition of self-testing that is very similar to the one presented by [14, Definition 2], though augmented with additional qualifiers that are relevant to the presentation of our results.
To start discussing the concept of (robust) self-testing, we first introduce the concept of local \(\varepsilon\)-dilations.
**Definition 2.5** (Local \(\varepsilon\)-dilation).: _Given two strategies_
\[S =(\rho_{AB}\in B(\mathcal{H}_{A}\otimes\mathcal{H}_{B}),\{A_{sa} \}_{s\in\mathcal{S},a\in\mathcal{A}},\{B_{tb}\}_{t\in\mathcal{T},b\in\mathcal{ B}})\text{ and }\] \[\tilde{S} =(|\tilde{\psi}\rangle\in\mathcal{H}_{\tilde{A}}\otimes\mathcal{ H}_{\tilde{B}},\{\tilde{A}_{sa}\}_{s\in\mathcal{S},a\in\mathcal{A}},\{\tilde{B}_{ tb}\}_{t\in\mathcal{T},b\in\mathcal{B}})\]
_we say that \(\tilde{S}\) is a local \(\varepsilon\)-dilation of \(S\) and write \(S\overset{\varepsilon}{\hookrightarrow}\tilde{S}\) if for any purification \(|\psi\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{P}\) of \(\rho_{AB}\) there exist spaces \(\mathcal{H}_{\tilde{A}},\mathcal{H}_{\tilde{B}}\), a local isometry \(U=U_{A}\otimes U_{B}\), with \(U_{A}:\mathcal{H}_{A}\rightarrow\mathcal{H}_{\tilde{A}}\otimes\mathcal{H}_{ \tilde{A}}\), \(U_{B}:\mathcal{H}_{B}\rightarrow\mathcal{H}_{\tilde{B}}\otimes\mathcal{H}_{ \tilde{B}}\) and a state \(|\mathrm{aux}\rangle\in\mathcal{H}_{\tilde{A}}\otimes\mathcal{H}_{\tilde{B}} \otimes\mathcal{H}_{P}\) such that for all \(s,t,a,b\) we have_
\[\|(U\otimes\mathbb{1}_{P})\,|\psi\rangle-|\tilde{\psi}\rangle \otimes|\mathrm{aux}\rangle\|\leq\varepsilon, \tag{3}\] \[\|(U\otimes\mathbb{1}_{P})(A_{sa}\otimes\mathbb{1}_{B}\otimes \mathbb{1}_{P})\,|\psi\rangle-(\tilde{A}_{sa}\otimes\mathbb{1}_{\tilde{B}})\,| \tilde{\psi}\rangle\otimes|\mathrm{aux}\rangle\|\leq\varepsilon,\] \[\|(U\otimes\mathbb{1}_{P})(1_{A}\otimes B_{tb}\otimes\mathbb{1}_{ P})\,|\psi\rangle-(\mathbb{1}_{\tilde{A}}\otimes\tilde{B}_{tb})\,|\tilde{\psi} \rangle\otimes|\mathrm{aux}\rangle\|\leq\varepsilon.\]
_In case we want to name the local isometry and the auxiliary state, we write \(S\overset{\varepsilon}{\underset{U,|\mathrm{aux}\rangle}{\longleftarrow}} \tilde{S}\). We will use this notation only when \(\rho_{AB}\) is pure to avoid ambiguity._
**Remark 2.6**.:
* _Note that local dilations are transitive. That is if_ \(S_{X}\overset{\varepsilon_{1}}{\hookrightarrow}S_{Y}\) _and_ \(S_{Y}\overset{\varepsilon_{2}}{\hookrightarrow}S_{Z}\)_, then_ \(S_{X}\overset{\varepsilon_{1}+\varepsilon_{2}}{\longleftarrow}S_{Z}\)_, see_ _[_13_, Lemma 4.7]__._
* _If the state_ \(\rho_{AB}=|\psi\rangle\langle\psi|\) _in strategy_ \(S\) _is pure, we do not need to concern ourselves with purifications of_ \(\rho_{AB}\) _in the above definition. That is, the auxiliary state_ \(|\mathrm{aux}\rangle\in\mathcal{H}_{\tilde{A}}\otimes\mathcal{H}_{\tilde{B}}\)_, and Eq._ (3) _becomes_ \[\|U\,|\psi\rangle-|\tilde{\psi}\rangle\otimes|\mathrm{aux}\rangle\| \leq\varepsilon,\] \[\|U(A_{sa}\otimes\mathbb{1}_{B})\,|\psi\rangle-(\tilde{A}_{sa} \otimes\mathbb{1}_{\tilde{B}})\,|\tilde{\psi}\rangle\otimes|\mathrm{aux}\rangle\| \leq\varepsilon,\] \[\|U(\mathbb{1}_{A}\otimes B_{tb})\,|\psi\rangle-(\mathbb{1}_{ \tilde{A}}\otimes\tilde{B}_{tb})\,|\tilde{\psi}\rangle\otimes|\mathrm{aux}\rangle \| \leq\varepsilon.\]
* _If_ \(\varepsilon=0\) _holds, we say that_ \(\tilde{S}\) _is a local dilation of_ \(S\) _and write_ \(S\hookrightarrow\tilde{S}\)_. For pure states, this is equivalent to finding a local isometry_ \(U=U_{A}\otimes U_{B}\) _such that_ \[U(A_{sa}\otimes B_{tb})\,|\psi\rangle=(\tilde{A}_{sa}\otimes\tilde{B}_{tb})\,| \tilde{\psi}\rangle\otimes|\mathrm{aux}\rangle\] _holds for all_ \(a,b,s,t\)_._
Intuitively, self-testing allows us to say that _any_ optimal strategy \(S\) for a game \(G\) can be mapped to a chosen canonical strategy \(\tilde{S}\). In practice, however, when proving self-testing theorems, authors often impose different restrictions on the set of considered strategies \(S\). Three most common types of assumptions restricting the strategy, \(S\), implemented by the untrusted black-box quantum device are as follows:
1. the state in \(S\) is pure rather than mixed,
2. the state in \(S\) is full-rank.
3. the measurements in \(S\) are projective rather than general POVMs,
The above assumptions give rise to a priori different definitions of self-testing. A _\(t\)-strategy_ for \(t\subseteq\{\text{pure},\text{full-rank},\text{PVM}\}\) is a strategy for the game, where the states and measurements are restricted according to \(t\). For example, a pure PVM strategy has a pure state and projective measurements, while the rank of the state can be arbitrary. An assumption-free strategy will usually just be called a strategy.
**Definition 2.7** (Self-testing).: _Let \(\tilde{S}\) be a pure strategy and \(t\subseteq\{\text{pure},\text{full-rank},\text{PVM}\}\). We say that a nonlocal game \(G\) is a \(t\)-self-test for a (reference) strategy \(\tilde{S}\) if \(S\hookrightarrow\tilde{S}\) for every optimal \(t\)-strategy \(S\) for game \(G\)._
All those different definitions also have a robust version, defined as follows.
**Definition 2.8** (Robust self-testing).: _Let \(\tilde{S}\) be a pure strategy, \(t\subseteq\{\text{pure},\text{full-rank},\text{PVM}\}\). We say that a nonlocal game \(G\) is a robust \(t\)-self-test for a (reference) strategy \(\tilde{S}\) if from every \(\varepsilon\geq 0\), there exists \(\delta\geq 0\) such that \(S\xleftrightarrow{\varepsilon}\tilde{S}\) for every \(\delta\)-optimal \(t\)-strategy \(S\) for a game \(G\)._
Many of the existing robust self-testing results specify an explicit dependence between \(\varepsilon\) and \(\delta\). Our results also apply to this case, and we remark how the theorem statements should be altered in case one wishes to apply them to lift assumptions for a robust self-test with an explicit \((\varepsilon,\delta)\)-dependence.
It is clear that every \(t\)-self-test is also a \(t^{\prime}\)-self-test if \(t^{\prime}\) imposes more restrictions on the strategy than \(t\). For example, every PVM self-test is also a pure PVM self-test. We will refer to an assumption free self-test just as self-test. Note that in the literature, the term "self-test" is often used for a prior weaker form of self-testing, such as a pure PVM self-test. In this paper, we refer to an (assumption-free) self-test if our arbitrary strategies are allowed to have mixed states of any rank and POVM measurements.
## 3 Key tools and concepts
In this section, we introduce support-preserving strategies as a fundamental concept crucial to the proofs of several key results. Furthermore, we examine the formalization of projectiveness, providing a comprehensive framework for the Naimark dilation of non-local strategies and presenting important properties associated with it. The interaction of the concepts introduced in this section can be visualized, see Fig. 2.
### Facts about local dilations
We noted in Section 2 that the local dilation is transitive, so it is a pre-order on the set of strategies. In general, local dilation is not an equivalence relation, because if we let \(S^{\prime}\) to be \(S\) attached with an entangled auxiliary state, then \(S^{\prime}\hookrightarrow S\) but not the other direction.
Nevertheless, we can show that if the auxiliary state in a local dilation is separable, then the two strategies are "equivalent" in the sense of local dilation:
**Proposition 3.1**.: _If a strategy \(S_{n}\) is a \(\varepsilon\)-local dilation of a strategy \(S_{m}\) with a separable ancilla:_
\[S_{m}\xleftarrow{\varepsilon\atop V_{A}\otimes V_{B},\left|0\right\rangle_{A^ {\prime}}\otimes\left|0\right\rangle_{B^{\prime}}}S_{n},\]
_then \(S_{m}\) is also a \(\varepsilon\)-local dilation of \(S_{n}\) (for some separable ancilla)._
Proof.: Without loss of generality, assume that \(S_{n}\) has local dimension \(n\), and \(S_{m}\) has local dimension \(m\). Since \(S_{m}\xleftarrow{\varepsilon\atop V_{A}\otimes V_{B},\left|0\right\rangle_{A ^{\prime}}\otimes\left|0\right\rangle_{B^{\prime}}}S_{n}\), then
\[(V_{A}\otimes V_{B})(A_{sa}^{(m)}\otimes B_{tb}^{(m)})\left|\psi_ {m}\right\rangle\approx_{\varepsilon} (\left|0\right\rangle_{A^{\prime}}\left|0\right\rangle_{B^{ \prime}})\otimes(A_{sa}^{(n)}\otimes B_{tb}^{(n)})\left|\psi_{n}\right\rangle\] \[= (\mathbbm{1}_{n_{A}^{\prime}\times n}\otimes\mathbbm{1}_{n_{B}^{ \prime}\times n})(A_{sa}^{(n)}\otimes B_{tb}^{(n)})\left|\psi_{n}\right\rangle\]
where \(n_{A}^{\prime}=n\times\dim\mathcal{H}_{A^{\prime}}\), \(n_{B}^{\prime}=n\times\dim\mathcal{H}_{B^{\prime}}\), and \(\mathbbm{1}_{x\times y}\) (\(y\leq x\)) denotes the first \(y\) columns of the \(x\times x\) identity matrix, which is an isometry.
Express \(V_{A},V_{B}\) as \(V_{A}=U_{A}\mathbbm{1}_{n_{A}^{\prime}\times m},V_{B}=U_{B}\mathbbm{1}_{n_{B} ^{\prime}\times m}\), where \(U_{A},U_{B}\) are unitaries. Then
\[(\mathbbm{1}_{n_{A}^{\prime}\times m}\otimes\mathbbm{1}_{n_{B}^{ \prime}\times m})(A_{sa}^{(m)}\otimes B_{tb}^{(m)})\left|\psi_{m}\right\rangle\] \[\approx_{\varepsilon} (U_{A}^{*}\mathbbm{1}_{n_{A}^{\prime}\times n}\otimes U_{B}^{*} \mathbbm{1}_{n_{B}^{\prime}\times n})(A_{sa}^{(n)}\otimes B_{tb}^{(n)})\left| \psi_{n}\right\rangle.\]
Figure 2: Local dilation “\(\hookrightarrow\)” is the central concept of self-testing. We introduce the idea of support-preservingness and projectiveness for strategies, which are invariant under local dilations. There are two canonical ways of obtaining a support-preserving strategy and a projective strategy, respectively: restriction and Naimark dilation. If a strategy \(S\) is support-preserving/projective, then we can locally dilate \(S\) to its restriction/Naimark dilation and vice versa. Finally, if the restriction of \(S\) is projective, or if its Naimark dilation is support-preserving, then \(S\) must be both projective and support-preserving.
Take the smallest (or any) \(n^{\prime\prime}\geq\max\{n_{A}^{\prime},n_{B}^{\prime}\}\) such that \(n^{\prime\prime}\) is a multiple of \(m\). Then
\[\left(\mathbbm{1}_{n^{\prime\prime}\times n_{A}^{\prime}}U_{A}^{*} \mathbbm{1}_{n_{A}^{\prime}\times n}\otimes\mathbbm{1}_{n^{\prime\prime}\times n _{B}^{\prime}}U_{B}^{*}\mathbbm{1}_{n_{B}^{\prime}\times n}\right)(A_{sa}^{(n )}\otimes B_{tb}^{(n)})\left|\psi_{n}\right\rangle\] \[\approx_{\varepsilon}(\mathbbm{1}_{n^{\prime\prime}\times m} \otimes\mathbbm{1}_{n^{\prime\prime}\times m})(A_{sa}^{(m)}\otimes B_{tb}^{(m )})\left|\psi_{m}\right\rangle\] \[= (\left|0\right\rangle_{A^{\prime\prime}}\left|0\right\rangle_{B^{ \prime\prime}})\otimes(A_{sa}^{(m)}\otimes B_{tb}^{(m)})\left|\psi_{m}\right\rangle,\]
where \(\left|0\right\rangle_{A^{\prime\prime}}\in\mathcal{H}_{A^{\prime\prime}} \cong\mathbb{C}^{n^{\prime\prime}/m}\), \(\left|0\right\rangle_{B^{\prime\prime}}\in\mathcal{H}_{B^{\prime\prime}} \cong\mathbb{C}^{n^{\prime\prime}/m}\). It is clear that both \(V_{A}^{\prime}:=\mathbbm{1}_{n^{\prime\prime}\times n_{A}^{\prime}}U_{A}^{*} \mathbbm{1}_{n_{A}^{\prime}\times n}\) and \(V_{B}^{\prime}:=\mathbbm{1}_{n^{\prime\prime}\times n_{B}^{\prime}}U_{B}^{*} \mathbbm{1}_{n_{B}^{\prime}\times n}\) are isometries. So \(S_{n}\underset{V_{A}^{\prime}\otimes V_{B}^{\prime},\left|0\right\rangle_{A^ {\prime\prime}}\otimes\left|0\right\rangle_{B^{\prime\prime}}}{\epsilon} \otimes S_{m}\).
### Nearly support-preserving strategies
We introduce the idea of _support-preserving strategies_[11]. Given a pure strategy \(S=(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\})\), in the case where \(\left|\psi\right\rangle\) is not full-rank, the support of \(\left|\psi\right\rangle\) may or may not be an invariant subspace of the measurement operators. In a _support-preserving_ strategy, the measurement operators map the state still inside the support of the state. That is, a quantum strategy \(S=(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\})\) is called _support-preserving_ if
\[\mathrm{supp}_{A}\left(\left(A_{sa}\otimes\mathbbm{1}_{B}\right)\left|\psi \right\rangle\right)\subseteq\mathrm{supp}_{A}(\left|\psi\right\rangle),\ \mathrm{supp}_{B}\left(\left(\mathbbm{1}_{A}\otimes B_{tb}\right)\left|\psi \right\rangle\right)\subseteq\mathrm{supp}_{B}(\left|\psi\right\rangle),\]
holds for all \(s,a,t,b\). Alternatively, one also can think of it as the measurement operators being block-diagonal in the Schmidt basis of the state, as what the authors of [10] independently defined therein, which they refer to as "centrally-supported". It is given by the following condition:
\[[A_{sa},\Pi_{A}]=[B_{tb},\Pi_{B}]=0,\]
where \(\Pi_{A}\) and \(\Pi_{B}\) is the projection onto \(\mathrm{supp}_{A}(\left|\psi\right\rangle)\) and \(\mathrm{supp}_{B}(\left|\psi\right\rangle)\), respectively. As the latter form is easier to be generalised in the case of robust self-testing, we adopt it to the following definition of _nearly support-preserving_ strategies.
**Definition 3.2** (nearly support-preserving).: _Let \(\epsilon\geq 0\). A pure strategy \(S=(\left|\psi\right\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B},\{A_{sa} \},\{B_{tb}\})\) is called \(\varepsilon\)-support-preserving if_
\[\|[\Pi_{A},A_{sa}]\|_{\sigma_{A}}\leq\varepsilon,\ \|[\Pi_{B},B_{tb}]\|_{ \sigma_{B}}\leq\varepsilon,\]
_hold for all \(s,t,a,b\), where \(\Pi_{A}\) is the projection onto \(\mathrm{supp}_{A}(\left|\psi\right\rangle)\) (likewise for \(\Pi_{B}\) on Bob), \(\sigma_{A}=\mathrm{Tr}_{B}[\left|\psi\right\rangle\!\left\langle\psi\right|]\) is the density matrix on Alice (likewise for \(\sigma_{B}\) on Bob), and the state dependent norm is defined as \(\|X\|_{\sigma}:=\sqrt{\mathrm{Tr}[X^{*}X\sigma]}\). If further \(\varepsilon=0\), \(S\) is called _support-preserving_ for simplicity._
Note that \(\|[\Pi_{A},A_{sa}]\|_{\sigma_{A}}=\|[\Pi_{A},A_{sa}]\otimes\mathbbm{1}_{B} \left|\psi\right\rangle\|=\sqrt{\left\langle\psi\right|}(A_{sa}^{2}-A_{sa}\Pi_{ A}A_{sa})\otimes\mathbbm{1}_{B}\left|\psi\right\rangle\). This identity is useful in the calculation. Also note that all full-rank strategies are support-preserving by definition.
We will show that support-preservingness is an invariant property under local dilation. That is, if \(S\hookrightarrow\tilde{S}\), then \(S\) is support-preserving if and only if \(\tilde{S}\) is. So this characteristic would not change as we move along '\(\hookrightarrow\)'. To prove this, the following characterization of near support-preservingness, inspired by [10, Lemma 4.3], is needed.
**Lemma 3.3**.: _Let \(S=(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\})\) be a pure strategy._
1. _If_ \(S\) _is_ \(\varepsilon\)_-support-preserving, then there exist operators_ \(\hat{A}_{sa}\in\mathcal{H}_{B},\hat{B}_{tb}\in\mathcal{H}_{A}\) _such that_ \(A_{sa}\otimes\mathbbm{1}_{B}\left|\psi\right\rangle\approx_{\varepsilon} \mathbbm{1}_{A}\otimes\hat{A}_{sa}\left|\psi\right\rangle\) _and_ \(\mathbbm{1}_{A}\otimes B_{tb}\left|\psi\right\rangle\approx_{\varepsilon} \hat{B}_{tb}\otimes\mathbbm{1}_{B}\left|\psi\right\rangle\) _for all_ \(a,s,t,b\)
_._
2. _If there exist operators_ \(\hat{A}_{sa}\in\mathcal{H}_{B},\hat{B}_{tb}\in\mathcal{H}_{A}\) _such that_ \(A_{sa}\otimes\mathbb{I}_{B}\left|\psi\right>\approx_{\varepsilon}\mathbb{I}_{A} \otimes\hat{A}_{sa}\left|\psi\right>\) _and_ \(\mathbb{I}_{A}\otimes B_{tb}\left|\psi\right>\approx_{\varepsilon}\hat{B}_{tb} \otimes\mathbb{I}_{B}\left|\psi\right>\) _for all_ \(a,s,t,b\)_, then_ \(S\) _is_ \(2\varepsilon\)_-support-preserving._
Proof.: To prove (a), consider the Schmidt decomposition of the state
\[\left|\psi\right>=\sum_{i}\lambda_{i}\left|e_{i}\right>\left|f_{i}\right>, \lambda_{i}>0.\]
Define operators
\[\lambda_{A\to B} :=\sum_{i}\lambda_{i}\left|f_{i}\right>\left<e_{i}\right|,\] \[\lambda_{A\to B}^{-1} :=\sum_{i}\lambda_{i}^{-1}\left|f_{i}\right>\left<e_{i}\right|,\] \[\lambda_{B\to A} :=\sum_{i}\lambda_{i}\left|e_{i}\right>\left<f_{i}\right|=( \lambda_{A\to B})^{*},\] \[\lambda_{B\to A}^{-1} :=\sum_{i}\lambda_{i}^{-1}\left|e_{i}\right>\left<f_{i}\right|=( \lambda_{A\to B}^{-1})^{*},\]
and let \(\hat{A}_{sa}:=\lambda_{A\to B}A_{sa}^{\intercal}\lambda_{B\to A}^{-1} \in\mathcal{H}_{B},\hat{B}_{tb}:=\lambda_{B\to A}B_{tb}^{\intercal} \lambda_{A\to B}^{-1}\in\mathcal{H}_{A}\), where the transpose are with respect to the bases \(\{\left|e_{i}\right>_{A}\},\{\left|f_{i}\right>_{B}\}\), respectively. Then
\[\mathbb{I}_{A}\otimes\hat{A}_{sa}\left|\psi\right>= 1_{A}\otimes\lambda_{A\to B}A_{sa}^{\intercal}\sum_{i} \left|e_{i}\right>\left|e_{i}\right>\] \[= \sum_{i,j}\lambda_{j}\left<e_{j}|A_{sa}^{\intercal}\left|e_{i} \right>\left|e_{i}\right>\left|f_{j}\right>\] \[= \sum_{i,j}\lambda_{j}\left<e_{i}|A_{sa}|e_{j}\right>\left|e_{i} \right>\left|f_{j}\right>\] \[= \Pi_{A}A_{sa}\otimes\mathbb{1}_{B}\left|\psi\right>.\]
In the last equation we use the identity \(\Pi_{A}=\sum_{i}\left|e_{i}\right>\left<e_{i}\right|\). So
\[\|A_{sa}\otimes\mathbb{I}_{B}\left|\psi\right>-\mathbb{I}_{A} \otimes\hat{A}_{sa}\left|\psi\right>\|\] \[= \|A_{sa}\Pi_{A}\otimes\mathbb{I}_{B}\left|\psi\right>-\Pi_{A}A_{ sa}\otimes\mathbb{1}_{B}\left|\psi\right>\|=\|[\Pi_{A},A_{sa}]\|_{\sigma_{A}}.\]
Then \(A_{sa}\otimes\mathbb{I}_{B}\left|\psi\right>\approx_{\varepsilon}\mathbb{I}_{A }\otimes\hat{A}_{sa}\left|\psi\right>\) if \(\|[\Pi_{A},A_{sa}]\|_{\sigma_{A}}\leq\varepsilon\). The similar argument also works for Bob's operators.
To prove (b), note that \(\|[\Pi_{A},A_{sa}]\|_{\sigma_{A}}=\|\Pi_{A}A_{sa}\otimes\mathbb{1}\left|\psi \right>-A_{sa}\Pi_{A}\otimes\mathbb{1}\left|\psi\right>\|\). Then
\[\Pi_{A}A_{sa}\otimes\mathbb{1}\left|\psi\right> \approx_{\varepsilon}\Pi_{A}\otimes\hat{A}_{sa}\left|\psi\right>\] \[=\mathbb{1}\otimes\hat{A}_{sa}\left|\psi\right>\] \[\approx_{\varepsilon}A_{sa}\otimes\mathbb{1}\left|\psi\right>=A_{ sa}\Pi_{A}\otimes\mathbb{1}\left|\psi\right>.\]
So \(\Pi_{A}A_{sa}\otimes\mathbb{1}\left|\psi\right>\approx_{2\varepsilon}A_{sa}\Pi_{A }\otimes\mathbb{1}\left|\psi\right>\). The similar argument also works for Bob's operators.
The invariance of support-preserving under local dilation can be stated as follows:
**Proposition 3.4**.: _Let \(S\) and \(\tilde{S}\) be two pure strategies._
1. _If_ \(S\hookrightarrow\tilde{S}\)_, then_ \(S\) _is_ \(\varepsilon\)_-support-preserving if and only if_ \(\tilde{S}\) _is_ \(\varepsilon\)_-support-preserving._
_._
2. _If_ \(S\xleftrightarrow{\hat{\varepsilon}^{\prime}}\tilde{S}\)_, then_ \(\tilde{S}\) _being_ \(\varepsilon\)_-support-preserving implies that_ \(S\) _is_ \((4\varepsilon^{\prime}+2\varepsilon)\)_-support-preserving, and_ \(S\) _being_ \(\varepsilon\)_-support-preserving implies that_ \(\tilde{S}\) _is_ \((4\varepsilon^{\prime}+2\varepsilon)\)_-support-preserving._
Proof.: Let \(V_{A}\otimes V_{B}\) be the local isometry and \(\left|\mathrm{aux}\right\rangle\) be the auxiliary state in the exact/near local-dilation.
To prove (a), note that \(V_{A}\Pi_{A}V_{A}^{*}=\tilde{\Pi}_{A}\otimes\Pi_{A^{\prime}}\), where \(\tilde{\Pi}_{A}\) and \(\Pi_{A^{\prime}}\) are projections onto \(\mathrm{supp}_{A}\left|\tilde{\psi}\right\rangle\) and \(\mathrm{supp}_{A}\left|\mathrm{aux}\right\rangle\), respectively. Then
\[\left\|[\Pi_{A},A_{sa}]\right\|_{\sigma_{A}}^{2}= \left\langle\psi|(A_{sa}^{2}-A_{sa}\Pi_{A}A_{sa})\otimes 1_{B}|\psi\right\rangle\] \[= \left\langle\psi|A_{sa}V_{A}^{*}V_{A}(A_{sa}-\Pi_{A}V_{A}^{*}V_{A }A_{sa})\otimes V_{B}^{*}V_{B}|\psi\right\rangle\] \[= \left\langle\tilde{\psi},\mathrm{aux}[[(\tilde{A}_{sa}\otimes 1_{A ^{\prime}})(\tilde{A}_{sa}\otimes 1_{A^{\prime}}-V_{A}\Pi_{A}V_{A}^{*}(\tilde{A}_{ sa}\otimes 1_{A^{\prime}}))]\otimes 1_{\tilde{B},\tilde{B}}|\tilde{\psi},\mathrm{aux}\right\rangle\] \[= \left\langle\tilde{\psi},\mathrm{aux}[(\tilde{A}_{sa}^{2}-\tilde{ A}_{sa}\tilde{\Pi}_{A}\tilde{A}_{sa})\otimes\Pi_{A^{\prime}}\otimes 1_{\tilde{B},\tilde{B} }|\tilde{\psi},\mathrm{aux}\right\rangle\] \[= \left\langle\tilde{\psi}|(\tilde{A}_{sa}^{2}-\tilde{A}_{sa}\tilde {\Pi}_{A}\tilde{A}_{sa})\otimes 1_{\tilde{B}}|\tilde{\psi}\right\rangle=\left\|[ \tilde{\Pi}_{A},\tilde{A}_{sa}]\right\|_{\sigma_{\tilde{A}}}^{2}.\]
So \(\|[\Pi_{A},A_{sa}]\|_{\sigma_{A}}\leq\varepsilon\) if and only if \(\|[\tilde{\Pi}_{A},\tilde{A}_{sa}]\|_{\sigma_{\tilde{A}}}\leq\varepsilon\). The similar argument also works for Bob's operators.
In (b), we first prove the first implication.
Since \(\tilde{S}\) is \(\varepsilon\)-support-preserving, by Lemma 3.3 there exist \(\hat{\tilde{A}}_{sa}\) such that \(\tilde{A}_{sa}\otimes 1\left|\tilde{\psi}\right\rangle\approx_{\varepsilon} 1\otimes\hat{\tilde{A}}_{sa}\left|\tilde{\psi}\right\rangle\). From the near local dilation, we have that
\[(V_{A}V_{A}^{*}\otimes V_{B}V_{B}^{*})(\tilde{A}_{sa}\otimes 1_{\tilde{B}} \left|\tilde{\psi}\right\rangle\otimes\left|\mathrm{aux}\right\rangle)\approx _{\varepsilon}(V_{A}\otimes V_{B})(A_{sa}\otimes 1_{B})\left|\psi\right\rangle. \tag{4}\]
Consider the operator \(\hat{A}_{sa}:=V_{B}^{*}(\hat{\tilde{A}}_{sa}\otimes 1_{\hat{B}})V_{B}\), then
\[V_{A}\otimes V_{B}(1_{A}\otimes\hat{A}_{sa}\left|\psi\right\rangle) =V_{A}\otimes V_{B}(1_{A}\otimes V_{B}^{*}(\hat{\tilde{A}}_{sa} \otimes 1_{\hat{B}})V_{B}\left|\psi\right\rangle)\] \[=(V_{A}V_{A}^{*}\otimes V_{B}V_{B}^{*}(\hat{\tilde{A}}_{sa} \otimes 1_{\hat{B}}))(V_{A}\otimes V_{B})\left|\psi\right\rangle\] \[\approx_{\varepsilon^{\prime}}(V_{A}V_{A}^{*}\otimes V_{B}V_{B}^{* }\hat{\tilde{A}}_{sa})(|\tilde{\psi}\rangle\otimes\left|\mathrm{aux}\right\rangle)\] \[\approx_{\varepsilon}(V_{A}V_{A}^{*}\otimes V_{B}V_{B}^{*})(( \tilde{A}_{sa}\otimes 1_{\hat{B}})\left|\tilde{\psi}\right\rangle\otimes\left|\mathrm{aux}\right\rangle)\] \[\approx_{\varepsilon^{\prime}}V_{A}\otimes V_{B}(A_{sa}\otimes 1_{B}) \left|\psi\right\rangle. \tag{5}\]
So \(V_{A}\otimes V_{B}(1_{A}\otimes\hat{A}_{sa}\left|\psi\right\rangle)\approx_{2 \varepsilon^{\prime}+\varepsilon}V_{A}\otimes V_{B}(A_{sa}\otimes 1_{B}\left|\psi\right\rangle)\), which implies \((1_{A}\otimes\hat{A}_{sa}\left|\psi\right\rangle)\approx_{2\varepsilon^{ \prime}+\varepsilon}(A_{sa}\otimes 1_{B}\left|\psi\right\rangle)\). By Lemma 3.3, \(\|[\Pi_{A},A_{sa}]\|_{\sigma_{A}}\leq 4\varepsilon^{\prime}+2\varepsilon\) for all \(a,s\). The similar argument holds for Bob's operators. So we conclude that \(S\) is \((4\varepsilon^{\prime}+2\varepsilon)\)-support-preserving.
For the second implication of (b), given the existence of \(\hat{A}_{sa}\), consider \(\hat{\tilde{A}}_{sa}:=V_{B}\hat{A}_{sa}V_{B}^{*}\), then
\[1_{\tilde{A}}\otimes\hat{\tilde{A}}_{sa}\left|\tilde{\psi}\right\rangle \left|\mathrm{aux}\right\rangle =1_{\tilde{A},A^{\prime}}\otimes V_{B}\hat{A}_{sa}V_{B}^{*}(| \tilde{\psi}\rangle\left|\mathrm{aux}\right\rangle)\] \[\approx_{\varepsilon^{\prime}}V_{A}\otimes V_{B}\hat{A}_{sa}\left| \psi\right\rangle\] \[\approx_{\varepsilon}V_{A}A_{sa}\otimes V_{B}\left|\psi\right\rangle\] \[\approx_{\varepsilon^{\prime}}\tilde{A}_{sa}\otimes 1_{\tilde{B}}\left| \tilde{\psi}\right\rangle\left|\mathrm{aux}\right\rangle.\]
So by Lemma 3.3, \(\|[\bar{\Pi}_{A},\tilde{A}_{sa}]\|_{\sigma_{\tilde{A}}}=\|[\bar{\Pi}_{A}\otimes\Pi _{A^{\prime}},\tilde{A}_{sa}\otimes\mathds{1}_{A^{\prime}}]\|_{\sigma_{\tilde{A},A^{\prime}}}\leq 4\varepsilon^{\prime}+2\varepsilon\) for all \(a,s\). The similar argument holds for Bob's operators. So we conclude that \(\tilde{S}\) is \((4\varepsilon^{\prime}+2\varepsilon)\)-support-preserving.
(It has come to our attention that the exact (\(\varepsilon=0\)) case of the "only if" direction of part (a) of Proposition 3.4 was independently developed by [13, Proposition 4.6].)
### Nearly projective strategies
We introduce the definition of _nearly projective_ strategies. This notion quantifies 'how projective a strategy is on its state'.
**Definition 3.5** (nearly projective).: _Let \(\varepsilon\geq 0\). A strategy \(S=(|\psi\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B},\)\(\{A_{sa}\},\{B_{tb}\})\) is called \(\varepsilon\)-projective if_
\[\langle\mathds{1}_{A}-A_{sa},A_{sa}\rangle_{\sigma_{A}}\leq \varepsilon^{2},\] \[\langle\mathds{1}_{B}-B_{tb},A_{tb}\rangle_{\sigma_{B}}\leq \varepsilon^{2}\]
_hold for all \(s,t,a,b\). Here \(\langle X,Y\rangle_{\sigma}:=\operatorname{Tr}[X^{*}Y\sigma]\)._
Note that \(\langle\mathds{1}_{A}-A_{sa},A_{sa}\rangle_{\sigma_{A}}=\langle\psi|(\mathds{1 }_{A}-A_{sa})A_{sa}\otimes\mathds{1}_{B}|\psi\rangle\), and this identity is useful in some calculations. Also note that being \(0\)-projective does not necessarily imply being projective: a non-projective strategy might be only non-projective outside of the support of the state, so it could be \(0\)-projective. But for full-rank strategies, being projective and \(0\)-projective are equivalent.
The projectiveness of strategies is another invariant property under local dilation. Namely,
**Proposition 3.6**.: _Let \(S\) and \(\tilde{S}\) be two pure strategies._
1. _If_ \(S\hookrightarrow\tilde{S}\)_, then_ \(S\) _is_ \(\varepsilon\)_-projective if and only if_ \(\tilde{S}\) _is_ \(\varepsilon\)_-projective._
2. _If_ \(S\overset{\varepsilon^{\prime}}{\hookrightarrow}\tilde{S}\)_, then_ \(\tilde{S}\) _being_ \(\varepsilon\)_-projective implies that_ \(S\) _is_ \((\sqrt{3\varepsilon^{\prime}}+\varepsilon)\)_-projective, and_ \(S\) _being_ \(\varepsilon\)_-projective implies that_ \(\tilde{S}\) _is_ \((\sqrt{3\varepsilon^{\prime}}+\varepsilon)\)_-projective._
Proof.: Since (a) is an implication of (b) (by taking \(\varepsilon^{\prime}=0\)), we only need to prove (b).
Given that \(S\overset{\varepsilon^{\prime}}{\hookrightarrow}\tilde{S}\), there exists a local isometry and auxiliary state such that
\[V[A_{sa}\otimes\mathds{1}\left|\psi\right\rangle]\approx_{ \varepsilon^{\prime}}(\tilde{A}_{sa}\otimes\mathds{1}\left|\tilde{\psi} \right\rangle)\otimes\left|\mathrm{aux}\right\rangle,\forall\ a,s \tag{6}\] \[V[\left|\psi\right\rangle]\approx_{\varepsilon^{\prime}}|\tilde{ \psi}\rangle\otimes\left|\mathrm{aux}\right\rangle \tag{7}\]
\((\ref{eq:1})-(\ref{eq:2})\):
\[V[(\mathds{1}-A_{sa})\otimes\mathds{1}\left|\psi\right\rangle]\approx_{2 \varepsilon^{\prime}}((\mathds{1}-\tilde{A}_{sa})\otimes\mathds{1}\left| \tilde{\psi}\right\rangle)\otimes\left|\mathrm{aux}\right\rangle \tag{8}\]
Then the inner product of (6) and (8):
\[\langle\psi|(A_{sa}-A_{sa}^{2})\otimes\mathds{1}|\psi\rangle\approx_{3 \varepsilon^{\prime}}\langle\tilde{\psi}|(\tilde{A}_{sa}-\tilde{A}_{sa}^{2}) \otimes\mathds{1}|\tilde{\psi}\rangle\,.\]
Note that both sides are real positive numbers, then
\[|\sqrt{\langle\psi|(A_{sa}-A_{sa}^{2})\otimes 1|\psi\rangle}-\sqrt{ \langle\tilde{\psi}|(\tilde{A}_{sa}-\tilde{A}_{sa}^{2})\otimes 1|\tilde{\psi} \rangle}|\] \[\leq |\sqrt{\langle\psi|(A_{sa}-A_{sa}^{2})\otimes 1|\psi\rangle}+ \sqrt{\langle\tilde{\psi}|(\tilde{A}_{sa}-\tilde{A}_{sa}^{2})\otimes 1| \tilde{\psi}\rangle}|\] \[= \frac{|\left\langle\psi|(A_{sa}-A_{sa}^{2})\otimes 1|\psi\right\rangle -\langle\tilde{\psi}|(\tilde{A}_{sa}-\tilde{A}_{sa}^{2})\otimes 1|\tilde{\psi} \rangle\,|}{|\sqrt{\langle\psi|(A_{sa}-A_{sa}^{2})\otimes 1|\psi\rangle}- \sqrt{\langle\tilde{\psi}|(\tilde{A}_{sa}-\tilde{A}_{sa}^{2})\otimes 1| \tilde{\psi}\rangle}|}\] \[\implies |\sqrt{\langle\psi|(A_{sa}-A_{sa}^{2})\otimes 1|\psi\rangle }-\sqrt{\langle\tilde{\psi}|(\tilde{A}_{sa}-\tilde{A}_{sa}^{2})\otimes 1| \tilde{\psi}\rangle}|\leq\sqrt{3\varepsilon^{\prime}}.\]
Then the two implications in (b) follows immediately.
### Restrictions of nonlocal strategies
When a pure strategy \(S\) is not full-rank, that is, the Schmidt rank of the state is strictly smaller than the local dimension, a projective/non-projective strategy might be non-projective/projective on the support. This naturally leads us to study the behaviour of the measurements on the support of the state. To this end, we define the _restriction_ of a strategy as follows.
**Definition 3.7** (Restriction of a strategy).: _Let \(S=\left(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\}\right)\) be a strategy. Consider the Schmidt decomposition of \(\left|\psi\right\rangle\),_
\[\left|\psi\right\rangle=\sum_{i=0}^{d-1}\lambda_{i}\left|e_{i}\right\rangle \left|f_{i}\right\rangle.\]
_We define the isometries_
\[U_{A}=\sum_{i=0}^{d-1}\left|e_{i}\right\rangle_{A}\!\!\left\langle i\right| \qquad\text{and}\qquad U_{B}=\sum_{i=0}^{d-1}\left|f_{i}\right\rangle_{B}\! \!\left\langle i\right|\]
_The restriction of \(S\) is the strategy \(S_{\text{res}}=\left(\left|\psi^{\prime}\right\rangle,\{A_{sa}^{\prime}\},\{B _{tb}^{\prime}\}\right)\), where_
\[A_{sa}^{\prime} =U_{A}^{*}A_{sa}U_{A}\] \[B_{tb}^{\prime} =U_{B}^{*}B_{tb}U_{B}\] \[\left|\psi^{\prime}\right\rangle =\sum_{i=0}^{d-1}\lambda_{i}\left|i\right\rangle\left|i\right\rangle =U_{A}^{*}\otimes U_{B}^{*}\left|\psi\right\rangle.\]
_It is indeed a well-defined POVM full-rank strategy._
Note that the projections on the supports of \(S\) can be written as \(\Pi_{A}=U_{A}U_{A}^{*},\Pi_{B}=U_{B}U_{B}^{*}\).
We will now see that if a non-full-rank strategy \(S\) is exactly/nearly support-preserving, then \(S\) and its restriction \(S_{\text{res}}\) (defined as in Definition 3.7) can be mutually exactly/nearly local-dilated.
**Proposition 3.8**.: _If a pure strategy \(S\) is \(\varepsilon\)-support-preserving, then \(S_{\text{res}}\overset{\varepsilon}{\hookrightarrow}S\) and \(S\overset{\varepsilon}{\hookrightarrow}S_{\text{res}}\), where \(S_{\text{res}}\) is the restriction of \(S\)._
Proof.: We show that \(S_{\mathrm{res}}\overset{\varepsilon}{\hookrightarrow}S\) with a separable auxiliary state, then \(S\overset{\varepsilon}{\hookrightarrow}S_{\mathrm{res}}\) follows from Proposition 3.1.
Consider isometries \(U_{A},U_{B}\) in Definition 3.7, and recall that \(U_{A}U_{A}^{*}=\Pi_{A},U_{B}U_{B}^{*}=\Pi_{B}\). Then
\[U_{A}\otimes U_{B}(A_{st}^{\prime}\otimes\mathbb{1}_{B})\left| \psi^{\prime}\right\rangle =U_{A}U_{A}^{*}A_{sa}U_{A}U_{A}^{*}\otimes U_{B}U_{B}^{*}\left| \psi\right\rangle\] \[=\Pi_{A}A_{sa}\Pi_{A}\otimes\Pi_{B}\left|\psi\right\rangle\] \[\approx_{\varepsilon}A_{sa}\Pi_{A}\otimes\Pi_{B}\left|\psi\right\rangle\] \[=A_{sa}\otimes\mathbb{1}_{B}\left|\psi\right\rangle.\]
A similar argument holds for Bob's operators. So \(S_{\mathrm{res}}\overset{\varepsilon}{\hookrightarrow}S\).
As a consequence of Proposition 3.8, if a game \(G\) pure self-tests a support-preserving canonical strategy \(\tilde{S}\), \(G\) also pure self-tests its restriction \(\tilde{S}_{\mathrm{res}}\). In other words, in this case, we can always take a full-rank canonical strategy.
In general, a projective strategy might become non-projective under restriction. Here, we show that the other way around can never happen: whenever a restriction is projective, the original strategy must be both projective and support-preserving.
**Theorem 3.9**.: _Let \(S=(\left|\psi\right\rangle,\left\{A_{sa}\right\},\left\{B_{tb}\right\})\) be a pure strategy and \(S_{\mathrm{res}}\) be its restriction._
1. _If_ \(S\) _is_ \(\varepsilon_{1}\)_-support-preserving and_ \(\varepsilon_{2}\)_-projective, then_ \(S_{\mathrm{res}}\) _is_ \((\varepsilon_{1}+\varepsilon_{2})\)_-projective._
2. _If_ \(S_{\mathrm{res}}\) _is_ \(\varepsilon_{3}\)_-projective, then_ \(S\) _is_ \(\varepsilon_{3}\)_-support-preserving and_ \(\varepsilon_{3}\)_-projective._
Proof.: We prove for Alice's side, and the same argument works also for Bob. By definition,
\[\left\|[\Pi_{A},A_{sa}]\right\|_{\sigma_{A}}^{2}= \left\langle\psi|(A_{sa}\Pi_{A}-\Pi_{A}A_{sa})(\Pi_{A}A_{sa}-A_{ sa}\Pi_{A})\otimes\mathbb{1}|\psi\right\rangle\] \[= \left\langle\psi|(A_{sa}\Pi_{A}A_{sa}-A_{sa}\Pi_{A}A_{sa}\Pi_{A}- \Pi_{A}A_{sa}\Pi_{A}A_{sa}+\Pi_{A}A_{sa}^{2}\Pi_{A})\otimes\mathbb{1}|\psi\right\rangle\] \[= \left\langle\psi|(A_{sa}^{2}-A_{sa}\Pi_{A}A_{sa})\otimes\mathbb{1} |\psi\right\rangle.\]
On the other hand, recall that \(S_{\mathrm{res}}=(\left|\psi^{\prime}\right\rangle,\left\{A_{sa}^{\prime} \right\},\left\{B_{tb}^{\prime}\right\})\) where \(|\psi^{\prime}\rangle=U_{A}^{*}\otimes U_{B}^{*}\left|\psi\right\rangle\), \(U_{A}^{\prime}=U_{A}^{*}A_{sa}U_{A}\), and \(U_{A}\) satisfies \(U_{A}U_{A}^{*}=\Pi_{A}\). So
\[\left\langle\mathbb{1}-A_{sa}^{\prime},A_{sa}^{\prime}\right\rangle_{ \sigma_{A}^{\prime}}= \left\langle\psi|(U_{A}\otimes U_{B})((\mathbb{1}-U_{A}^{*}A_{sa} U_{A})U_{A}^{*}A_{sa}U_{A}\otimes\mathbb{1})(U_{A}^{*}\otimes U_{B}^{*})|\psi\right\rangle\] \[= \left\langle\psi|(\Pi_{A}A_{sa}\Pi_{A}-\Pi_{A}A_{sa}\Pi_{A}A_{sa} \Pi_{A})\otimes\mathbb{1}|\psi\right\rangle\] \[= \left\langle\psi|(A_{sa}-A_{sa}\Pi_{A}A_{sa})\otimes\mathbb{1}| \psi\right\rangle.\]
Therefore
\[\left\langle\mathbb{1}-A_{sa}^{\prime},A_{sa}^{\prime}\right\rangle_{ \sigma_{A}^{\prime}}-\left\|[\Pi_{A},A_{sa}]\right\|_{\sigma_{A}}^{2}= \left\langle\psi|(A_{sa}-A_{sa}^{2})\otimes\mathbb{1}|\psi\right\rangle\] \[= \left\langle\psi|((\mathbb{1}-A_{sa})A_{sa})\otimes\mathbb{1}|\psi\right\rangle\] \[= \left\langle\mathbb{1}-A_{sa},A_{sa}\right\rangle_{\sigma_{A}}.\]
Then (a) is clear. For (b), note that both \(\left\langle\mathbb{1}-A_{sa},A_{sa}\right\rangle_{\sigma_{A}}\) and \(\left\|[\Pi_{A},A_{sa}]\right\|_{\sigma_{A}}^{2}\) are positive. So if \(\left\langle\mathbb{1}-A_{sa}^{\prime},A_{sa}^{\prime}\right\rangle_{\sigma_{A }^{\prime}}\leq\varepsilon_{3}\) then \(\left\langle\mathbb{1}-A_{sa},A_{sa}\right\rangle_{\sigma_{A}}\leq\varepsilon _{3}\) and \(\left\|[\Pi_{A},A_{sa}]\right\|_{\sigma_{A}}^{2}\leq\varepsilon_{3}\).
**Corollary 3.10**.: _The restriction \(S_{\mathrm{res}}\) is projective if and only \(S\) is support-preserving and \(0\)-projective (i.e. projective on the support of the state)._
### Naimark dilation of nonlocal strategies
The Naimark dilation theorem provides an essential framework for characterizing POVMs, having significant influence not only in this study but also in the broader domains of operator theory and quantum information theory. For a given (finite) set of POVMs, it Naimark dilation can be defined as the following:
**Definition 3.11**.: _Let \(\{R_{ij}\}_{j=1}^{m_{i}}\), \(1\leq i\leq n\), be a family of POVMs on \(\mathcal{H}\). \((\{P_{ij}\}_{j=1}^{m_{i}},V)\) is called a Naimark dilation of \(\{R_{ij}\}_{j=1}^{m_{i}}\), if \(\{P_{ij}\}_{j=1}^{m_{i}}\) is a family of PVMs on \(\mathcal{H}^{\prime}\), \(V:\mathcal{H}\to\mathcal{H}^{\prime}\), and \(R_{ij}=V^{*}P_{ij}V\) for all \(i,j\)._
Within the range of its diverse forms, we here introduce a specific variant of the Naimark dilation theorem, particularly suited clearly to handle finite-dimensional POVMs. This construction can also be found in [14, Proposition 9.6 and Theorem 9.8]. It's important to note that while this construction serves to illuminate the intuition behind Naimark dilations, the results we present later in the paper are not tied to this specific example. Our general theorem applies to any Naimark dilation, regardless of its particular structure.
**Construction 3.12**.: Let \(\{R_{ij}\}_{j=1}^{m_{i}}\), \(1\leq i\leq n\), be a family of POVM's on \(\mathcal{H}\). Construct projective measurements \(\{P_{ij}\}_{j=1}^{m_{i}}\) and an isometry \(V\) such that \(R_{ij}=V^{*}P_{ij}V\) as follows:
For a single POVM \(\{R_{j}\}_{j=1}^{m}\), define \(P_{j}=\mathbbm{1}_{B(\mathcal{H})}\otimes e_{j}e_{j}^{*}\) in \(B(\mathcal{H}\otimes\mathbb{C}^{m})\), \(1\leq j\leq m\) and \(V:\mathcal{H}\to\mathcal{H}\otimes\mathbb{C}^{m}\), \(\varphi\mapsto\sum_{j=1}^{m}\sqrt{R_{j}}\varphi\otimes e_{j}\).
Now assume we did the construction for \(n\) POVM's. Take a family \(\{R_{ij}\}_{j=1}^{m_{i}}\), \(1\leq i\leq n+1\). Then we have PVM's \(\{Q_{ij}\}_{j=1}^{m_{i}}\), \(1\leq i\leq n\), on \(\mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes\mathbb{C}^{m_{n}}\) and an isometry \(V_{1}:\mathcal{H}\to\mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes \mathbb{C}^{m_{n}}\) such that \(R_{ij}=V_{1}^{*}Q_{ij}V_{1}\). From \(\{R_{n+1j}\}_{j=1}^{m_{n+1}}\), we get a POVM on \(\mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes\mathbb{C}^{m_{n}}\) by taking
\[\check{R}_{n+1j}=V_{1}R_{n+1j}V_{1}^{*}\]
for \(j\neq 1\) and
\[\check{R}_{(n+1)1}=V_{1}R_{(n+1)1}V_{1}^{*}+(1_{B(\mathcal{H}\otimes\mathbb{C }^{m_{1}}\otimes\cdots\otimes\mathbb{C}^{m_{n}})}-V_{1}V_{1}^{*}).\]
Similar to the single POVM case, we define
\[P_{n+1j}=1_{B(\mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes\mathbb{ C}^{m_{n}})}\otimes e_{j}e_{j}^{*}\]
in \(B(\mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes\mathbb{C}^{m_{n+1}})\), \(1\leq j\leq m\) and
\[V_{2}:\mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes\mathbb{C}^{m_{ n}}\to\mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes\mathbb{C}^{m_{n+1}}, \varphi\mapsto\sum_{j=1}^{m_{n+1}}\sqrt{\check{R}_{n+1j}}\varphi\otimes e_{j}.\]
For \(1\leq i\leq n\), we let
\[P_{ij}=V_{2}Q_{ij}V_{2}^{*}\quad\text{for $i\neq 1$ and }\;P_{i1}=V_{2}Q_{i1}V_{2}^{*}+(1_{B( \mathcal{H}\otimes\mathbb{C}^{m_{1}}\otimes\cdots\otimes\mathbb{C}^{m_{n+1}})} -V_{2}V_{2}^{*}).\]
Finally, we define \(V:=V_{2}\circ V_{1}\).
The following theorem shows that this construction gives us a valid projective measurement.
**Theorem 3.13**.: _[_14_, Theorem 9.8]_ _Let \(\{R_{ij}\}_{j=1}^{m_{i}}\), \(1\leq i\leq n\), be a family of POVM's on \(\mathcal{H}\). Then the \(\{P_{ij}\}_{j=1}^{m_{i}}\) given in Construction 3.12 are projective measurements, and we have \(R_{ij}=V^{*}P_{ij}V\) for the isometry \(V\) given therein._
In our subsequent analysis, we show that our results hold for all Naimark dilations, thus are not limited by the specific details of this construction.
Given the Naimark dilation of multiple POVMs, one can talk about the Naimark dilation of a strategy:
**Definition 3.14** (Naimark dilation of quantum strategies).: _Given a pure strategy \(S=\left(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\}\right)\), a PVM strategy \(S_{\mathrm{Naimark}}=\left(V_{A}\otimes V_{B}\left|\psi\right\rangle,\{P_{sa} \},\{Q_{tb}\}\right)\) is called a Naimark dilation of \(S\), if \((\{P_{sa}\},V_{A})\) is a Naimark dilation of \(\{A_{sa}\}\), and \((\{Q_{tb}\},V_{B})\) is a Naimark dilation of \(\{B_{tb}\}\)._
And not surprisingly, they generate the same statistics:
**Lemma 3.15**.: _Any pure strategy gives the same correlation as its Naimark dilations._
Proof.: Let \(S=(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\})\) and \(S_{\mathrm{Naimark}}=(V_{A}\otimes V_{B}\left|\psi\right\rangle,\{P_{sa}\},\{ Q_{tb}\})\). Using \(A_{sa}=V_{A}^{*}P_{sa}V_{A},B_{tb}=V_{B}^{*}Q_{tb}V_{B}\),we get
\[\left\langle\psi|A_{sa}\otimes B_{tb}|\psi\right\rangle=\left\langle\psi|V^{* }VA_{sa}\otimes B_{tb}V^{*}V|\psi\right\rangle=\left\langle V\psi|P_{sa} \otimes Q_{tb}|V\psi\right\rangle,\]
where \(V=V_{A}\otimes V_{B}\).
As an analog of Proposition 3.8, we will show that \(S\) and \(S_{\mathrm{Naimark}}\) are mutually locally dilated if \(S\) is projective. To prove this, we need the following lemma:
**Lemma 3.16**.: _Let \(\{R_{ij}\}_{j=1}^{m}\), \(1\leq i\leq n\), be a collection of POVM's on \(\mathcal{H}\), \(\sigma\) be a density matrix on \(\mathcal{H}\), and \(\left|\psi\right\rangle\) be a purification of \(\sigma\). Then any Naimark dilation \((\{P_{ij}\},V)\) of \(\{R_{ij}\}\) satisfies_
\[\left\|VR_{ij}\otimes\mathbbm{1}\left|\psi\right\rangle-P_{ij}V\otimes \mathbbm{1}\left|\psi\right\rangle\right\|^{2}=\left\langle\psi|(\mathbbm{1}- R_{ij})R_{ij}\otimes\mathbbm{1}_{B}|\psi\right\rangle.\]
Proof.: Using \(V^{*}P_{ij}V=R_{ij}\), we get
\[\left\|VR_{ij}\otimes\mathbbm{1}\left|\psi\right\rangle-P_{ij}V \otimes\mathbbm{1}\left|\psi\right\rangle\right\|^{2}\] \[= \left\langle\psi|(R_{ij}V^{*}VR_{ij}+V^{*}P_{ij}P_{ij}V-V^{*}P_{ ij}VR_{ij}-R_{ij}V^{*}P_{ij}V)\otimes\mathbbm{1}|\psi\right\rangle\] \[= \left\langle\psi|(R_{ij}^{2}+R_{ij}-R_{ij}^{2}-R_{ij}^{2})\otimes \mathbbm{1}|\psi\right\rangle\] \[= \left\langle\psi|(\mathbbm{1}-R_{ij})R_{ij}\otimes\mathbbm{1}_{B }|\psi\right\rangle.\]
Applying Lemma 3.16 in the context of nonlocal strategies, we have the following proposition:
**Proposition 3.17**.: _If a pure strategy \(S\) is \(\varepsilon\)-projective, then \(S\overset{\varepsilon}{\hookrightarrow}S_{\mathrm{Naimark}}\) and \(S_{\mathrm{Naimark}}\overset{\varepsilon}{\hookrightarrow}S\), where \(S_{\mathrm{Naimark}}\) is any Naimark dilation of \(S\)._
Proof.: It is clear from Lemma 3.16 that \(S\overset{\varepsilon}{\underset{V_{A}\otimes V_{B}}{\longleftarrow}}S_{ \mathrm{Naimark}}\), where \(V_{A},V_{B}\) are isometries given in Definition 3.11. Then \(S_{\mathrm{Naimark}}\overset{\varepsilon}{\hookrightarrow}S\) follows from Proposition 3.1.
We now show that if a Naimark dilation of a strategy is support-preserving, then the original one must be both projective and support-preserving (an analog of Theorem 3.9).
**Theorem 3.18**.: _Let \(S=(\left|\psi\right\rangle,\left\{A_{sa}\right\},\left\{B_{tb}\right\})\) be a pure strategy and \(S_{\mathrm{Naimark}}\) be any Naimark dilation of \(S\)._
1. _If_ \(S\) _is_ \(\varepsilon_{1}\)_-support-preserving and_ \(\varepsilon_{2}\)_-projective, then_ \(S_{\mathrm{Naimark}}\) _is_ \((\varepsilon_{1}+\varepsilon_{2})\)_-support-preserving._
2. _If_ \(S_{\mathrm{Naimark}}\) _is_ \(\varepsilon_{3}\)_-support-preserving, then_ \(S\) _is_ \(\varepsilon_{3}\)_-support-preserving and_ \(\varepsilon_{3}\)_-projective._
Proof.: We prove for Alice's side, and the same argument works also for Bob. Let \(\Pi\) be the projection on the support of \(\left|\psi\right\rangle\) on Alice's side. Let \((\left\{P_{sa}\right\},V)\) be the Naimark dilation of \(\left\{A_{sa}\right\}\). Note that
\[\|[V\Pi V^{*},P_{sa}]\|_{V\sigma V^{*}}^{2}= \|P_{sa}V\Pi V^{*}V\otimes\mathbbm{1}\left|\psi\right\rangle-V \Pi V^{*}P_{sa}V\otimes\mathbbm{1}\left|\psi\right\rangle\|^{2}\] \[= \left\langle\psi|A_{sa}\otimes\mathbbm{1}\left|\psi\right\rangle -\left\langle\psi|A_{sa}\Pi A_{sa}\otimes\mathbbm{1}\left|\psi\right\rangle.\right.\]
And
\[\|[\Pi,A_{sa}]\|_{\sigma}^{2}= \|\Pi A_{sa}\otimes\mathbbm{1}\left|\psi\right\rangle-A_{sa}\Pi \otimes\mathbbm{1}\left|\psi\right\rangle\|^{2}\] \[= \left\langle\psi|A_{sa}^{2}\otimes\mathbbm{1}\left|\psi\right\rangle -\left\langle\psi|A_{sa}\Pi A_{sa}\otimes\mathbbm{1}\left|\psi\right\rangle.\right.\]
So
\[\|[V\Pi V^{*},P_{sa}]\|_{V\sigma V^{*}}^{2}-\left\langle\mathbbm{ 1}-A_{sa},A_{sa}\right\rangle_{\sigma}\] \[= \left\langle\psi|A_{sa}^{2}\otimes\mathbbm{1}\left|\psi\right\rangle -\left\langle\psi|A_{sa}\Pi A_{sa}\otimes\mathbbm{1}\left|\psi\right\rangle\right.\] \[= \|[\Pi,A_{sa}]\|_{\sigma}^{2}.\]
Then (a) is clear. For (b), note that both \(\left\langle\mathbbm{1}-A_{sa},A_{sa}\right\rangle_{\sigma}\) and \(\|[\Pi,A_{sa}]\|_{\sigma}^{2}\) are positive. So \(S_{\mathrm{Naimark}}\) being \(\varepsilon_{3}\)-support-preserving implies that \(S\) is \(\varepsilon_{3}\)-projective and \(\varepsilon_{3}\)-support-preserving.
**Corollary 3.19**.: _The Naimark dilation \(S_{\mathrm{Naimark}}\) is support-preserving if and only if \(S\) is support-preserving and \(0\)-projective (i.e. projective on the support of the state)._
## 4 Lifting Assumptions
In this section, we aim to lift the assumptions that are commonly made in the literature. Specifically, we will establish the following theorem, which is our main result.
**Theorem 4.1** (Main theorem).: _Let \(\tilde{S}\) be a pure strategy that is optimal for a nonlocal game \(G\). Then the following two implications hold:_
1. _If_ \(\tilde{S}\) _is full-rank and_ \(G\) _is a robust pure PVM self-test for_ \(\tilde{S}\)_, then_ \(G\) _is a robust assumption-free self-test for_ \(\tilde{S}\)_._
2. _If_ \(\tilde{S}\) _is projective and_ \(G\) _is a robust pure full-rank self-test for_ \(\tilde{S}\)_, then_ \(G\) _is a robust assumption-free self-test for_ \(\tilde{S}\)_._
We will prove Theorem 4.1\((a)\) and \((b)\) both in two steps. For part \((a)\), we first show how to get rid of the PVM assumption in the following subsections. Similarly, we show that we can lift the full-rank assumption in part \((b)\) in Subsection 4.2. Then, for both Theorem 4.1\((a)\) and \((b)\), we get from pure to mixed states in Subsection 4.3.
Theorem 4.1 tells us that, if a strategy \(\tilde{S}\) is proved self-tested with the assumption of either full-rank states or projective measurements (for the arbitrary strategy), and \(\tilde{S}\) itself is full-rank and projective, then those assumptions can be lifted for free. Moreover, we show that for \(\tilde{S}\) to be assumption-free self-tested, it has to be both support-preserving and \(0\)-projective:
**Theorem 4.2**.: _If \(G\) is an assumption-free self-test for \(\tilde{S}\), then \(\tilde{S}\) must be \(0\)-projective and support-preserving. Moreover, \(G\) is an assumption-free self-test for the restriction of \(\tilde{S}\)._
Proof.: Take any pure strategy \(S\) that wins \(G\) optimally, then so does its Naimark dilation \(S_{\mathrm{Naimark}}\) and its restriction \(S_{\mathrm{res}}\). Since \(G\) is an assumption-free self-test, it is also a pure self-test. So \(S_{\mathrm{res}}\hookrightarrow\tilde{S}\), which implies that \(\tilde{S}\) is support-preserving using Proposition 3.4 and the fact that \(S_{\mathrm{res}}\) is support-preserving. Similarly, \(S_{\mathrm{Naimark}}\hookrightarrow\tilde{S}\) implies \(\tilde{S}\) to be \(0\)-projective by Proposition 3.6 and the fact that \(S_{\mathrm{Naimark}}\) is projective.
Since \(\tilde{S}\) is \(0\)-projective and support-preserving, \(\tilde{S}\hookrightarrow\tilde{S}_{\mathrm{res}}\). So \(G\) is an assumption-free self-test for \(\tilde{S}_{\mathrm{res}}\).
(The same conclusion for pure self-tests from correlation has been shown also in [23, Proposition 4.14] via a different approach.) Notice that \(\tilde{S}_{\mathrm{res}}\) is both projective and full-rank in the above proof. This means that for assumption-free self-tests, we can always take the canonical strategy \(\tilde{S}\) to be both projective and full-rank. This is somehow also the best one can hope for, because we can never show that the canonical strategy _is_ projective and full-rank: consider \(\tilde{S}^{\prime}=\left(\left|\tilde{\psi}\right\rangle\otimes\left|0\right\rangle _{A}\left|0\right\rangle_{B},\left\{\tilde{A}_{sa}\otimes 1\right\},\left\{\tilde{B}_{tb }\otimes 1\right\}\right)\). Then \(\tilde{S}^{\prime}\hookrightarrow\tilde{S}\) and \(\tilde{S}\hookrightarrow\tilde{S}^{\prime}\). So \(G\) also self-tests \(\tilde{S}^{\prime}\).
As a final remark before we go into the steps of the proof of Theorem 4.1, we note that most of the results in this section also apply to correlation self-tests, while one of them (namely, ones in Subsection 4.3) additionally requires that the correlation is extreme in the quantum set. See Appendix A for a detailed discussion.
### Lifting the PVM assumption
Here we show that robust pure PVM self-test implies robust pure self-test if the canonical strategy is full-rank, with the building blocks from Section 3.
**Theorem 4.3** (robust pure PVM implies robust pure).: _Let a game \(G\) be a robust pure PVM self-test for a full-rank canonical strategy \(\tilde{S}\). Then_
1. \(\tilde{S}\) _is a projective strategy, and_
2. \(G\) _is also a robust pure self-test for_ \(\tilde{S}\)_._
Proof.: We first prove (a). Note that robust self-tests always imply exact self-tests by taking \(\varepsilon=0\) (which causes \(\delta=0\)).
For any \(S\) that generates the same correlation of \(\tilde{S}\), consider its Naimark dilation \(S_{\mathrm{Naimark}}\). Since \(G\) is a PVM self-test for \(\tilde{S}\), it holds that \(S_{\mathrm{Naimark}}\hookrightarrow\tilde{S}\). By the invariance of projectiveness (Proposition 3.6), \(\tilde{S}\) is \(0\)-projective, thus projective.
Now we prove (b). For any \(\varepsilon\), let \(\varepsilon^{\prime}=\varepsilon/5\). Since \(G\) is a robust pure PVM self-test, for such \(\varepsilon^{\prime}\) there exist \(\delta^{\prime}\) such that, any \(\delta^{\prime}\)-optimal pure projective strategy \(S_{\mathrm{proj}}\) for \(G\) satisfies \(S_{\mathrm{proj}}\overset{\varepsilon^{\prime}}{\hookrightarrow}\tilde{S}\).
Consider a non-projective strategy \(S_{\text{non-proj}}\) that is \(\delta^{\prime}\)-optimal for \(G\). Since its Naimark dilation \(S_{\text{Naimark}}\) is projective and \(\delta^{\prime}\)-optimal, it holds that \(S_{\text{Naimark}}\xrightarrow{\varepsilon^{\prime}}\tilde{S}\). Note that \(\tilde{S}\) is assumed to be full-rank (thus support-preserving), by the invariance of support-preserving (Proposition 3.4) \(S_{\text{Naimark}}\) is \(4\varepsilon^{\prime}\)-support-preserving. Then by Theorem 3.18\(S_{\text{non-proj}}\) is \(4\varepsilon^{\prime}\)-support-preserving. Then \(S_{\text{non-proj}}\xrightarrow{4\varepsilon^{\prime}}S_{\text{Naimark}}\) by Proposition 3.17. By transitivity, \(S_{\text{non-full}}\xrightarrow{\varepsilon^{\prime}+4\varepsilon^{\prime}= \varepsilon}\tilde{S}\).
Let \(\delta=\delta^{\prime}\). So we conclude that \(S_{\text{non-proj}}\xrightarrow{\varepsilon}\tilde{S}\) for any \(\delta\)-optimal \(S_{\text{non-full}}\), that is, \(G\) is also a robust pure self-test.
**Remark 4.4**.:
* _Previously, work_ _[_11_, Theorem 3.7]_ _shows that in some special cases where the correlation is synchronous or binary, PVM assumption can be lifted for exact self-tests. Here we show that this in fact be done in a more general scenario, and for robust self-tests as well._
* _Exact version of the (b) part of the theorem and its proof hold automatically by taking_ \(\varepsilon=0\) _(which causes_ \(\delta=0\)_)._
* _If there is already an explicit_ \((\delta,\varepsilon)\) _dependence in the PVM self-test,_ e.g._,_ \(\varepsilon=O(\delta^{2})\)_, then our proof still works and give the result that any_ \(\delta\)_-optimal strategy is a_ \(5O(\delta^{2})\)_-local-dilation._
### Lifting the full-rank assumption
Once again, using the tools from Section 3, we will now show we can get rid of the full-rank assumption if our canonical strategy is projective.
**Theorem 4.5**.: _Let a game \(G\) be a robust pure full-rank self-test for a projective canonical strategy \(\tilde{S}\). Then_
1. \(\tilde{S}\) _is support-preserving, and_
2. \(G\) _is also a robust pure self-test for_ \(\tilde{S}\)_._
Proof.: We first prove (a). Note that robust self-tests always imply exact self-tests by taking \(\varepsilon=0\) (which causes \(\delta=0\)).
For any \(S\) that generates the same correlation of \(\tilde{S}\), consider its restriction \(S_{\text{res}}\). Since \(G\) is a full-rank self-test for \(\tilde{S}\) it holds that \(S_{\text{res}}\hookrightarrow\tilde{S}\). By the invariance of support-preservingness (Proposition 3.4), \(\tilde{S}\) is support-preserving.
Now we prove (b). For any \(\varepsilon\), let \(\varepsilon^{\prime}\) be the positive number such that \(\varepsilon^{\prime}+\sqrt{3\varepsilon^{\prime}}=\varepsilon\). Since \(G\) is a robust pure full-rank self-test, for such \(\varepsilon^{\prime}\) there exist \(\delta^{\prime}\) such that, any \(\delta^{\prime}\)-optimal pure full-rank strategy \(S_{\text{full}}\) for \(G\) satisfies \(S_{\text{full}}\xleftarrow{\varepsilon^{\prime}}\tilde{S}\).
Consider a non-full-rank strategy \(S_{\text{non-full}}\) that is \(\delta^{\prime}\)-optimal for \(G\). Since its restriction \(S_{\text{res}}\) is full-rank and \(\delta^{\prime}\)-optimal, it holds that \(S_{\text{res}}\xleftarrow{\varepsilon^{\prime}}\tilde{S}\). Note that \(\tilde{S}\) is assumed to be projective, by the invariance of projectiveness (Proposition 3.6) \(S_{\text{res}}\) is \(\sqrt{3\varepsilon^{\prime}}\)-projective. Then by Theorem 3.9, \(S_{\text{non-full}}\) is \(\sqrt{3\varepsilon^{\prime}}\)-support-preserving. Then \(S_{\text{non-full}}\xleftarrow{\sqrt{3\varepsilon^{\prime}}}S_{\text{res}}\) by Proposition 3.8. By transitivity, \(S_{\text{non-full}}\xleftarrow{\varepsilon^{\prime}+\sqrt{3\varepsilon^{\prime }}=\varepsilon}\tilde{S}\).
Let \(\delta=\delta^{\prime}\). So we conclude that \(S_{\text{non-full}}\overset{\varepsilon}{\rightarrow}\tilde{S}\) for any \(\delta\)-optimal \(S_{\text{non-full}}\), that is, \(G\) is also a robust pure self-test.
**Remark 4.6**.:
* _Exact version of the (b) part of the theorem and its proof hold automatically by taking_ \(\varepsilon=0\) _(which causes_ \(\delta=0\)_)._
* _If there is already an explicit_ \((\delta,\varepsilon)\) _dependence in the full-rank self-test,_ e.g._,_ \(\varepsilon=O(\delta^{2})\)_, then our proof still works and give the result that any_ \(\delta\)_-optimal strategy is a_ \(O(\delta)\)_-local-dilation._
### Lifting the purity assumption
Until now, we focused on strategies using a pure state. In general, there might be strategies that use mixed states. Note that many self-testing theorems are shown only for pure strategies, therefore it is worth investigating whether or not those theorems also hold for mixed strategies. We address this task and show that a pure self-test is a mixed self-test as long as the canonical strategy has a pure, full-rank state. The main result in this subsection is the following.
**Theorem 4.7**.: _Let \(t\subseteq\{\text{PVM}\}\) and \(G\) robust pure \(t\) self-tests \(\tilde{S}\), where \(|\tilde{\psi}\rangle\) has full Schmidt rank. Then \(G\) robust mixed \(t\) self-tests \(\tilde{S}\)._
#### 4.3.1 Eigenspace of game operator
Let \(\tilde{S}=\left(|\tilde{\psi}\rangle\,,\{\tilde{A}_{sa}\},\{\tilde{B}_{tb}\}\right)\) be the canonical strategy self-tested by a game \(G=(\mathcal{S},\mathcal{T},\mathcal{A},\)\(\mathcal{B},\pi,\mathcal{V})\). We want to understand the set \(\tilde{Q}\) of all quantum states that yield optimal quantum strategies when measured using the measurements from \(\tilde{S}\). We will show that \(|\tilde{\psi}\rangle\) is the only state that wins the game optimally. Define the operator
\[\tilde{W}:=\sum_{a,b,s,t}\pi(s,t)\mathcal{V}(a,b|s,t)(\tilde{A}_{sa}\otimes \tilde{B}_{tb}).\]
Then that is equivalent to that the eigenspace of the largest eigenvalue of \(\tilde{W}\) is 1-dimensional. We introduce key lemmas, then present the proof of this property.
Given an optimal strategy \(S=\left(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\}\right)\) for \(G\), the following lemma characterizes this set \(Q\), the set of all quantum states that yield optimal quantum strategies, in terms of the \(W\) operator.
**Lemma 4.8**.: _Let \(S=\left(|\psi\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B},\{A_{sa}\},\{B_{ tb}\}\right)\) be any optimal pure strategy for some game \(G=(\mathcal{S},\mathcal{T},\mathcal{A},\mathcal{B},\pi,\mathcal{V})\). Define the set_
\[Q:=\{|\phi\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}:\omega(\left(|\phi \rangle\,,\{A_{sa}\},\{B_{tb}\}\right),G)=\omega_{q}(G)\}.\]
_Define the operator_
\[W:=\sum_{a,b,s,t}\pi(s,t)\mathcal{V}(a,b|s,t)(A_{sa}\otimes B_{tb}).\]
_Then \(Q=\operatorname{St}(V_{\max(\sigma(W))})\), where \(V_{\lambda}\) denotes the \(\lambda\)-eigenspace of \(W\) and \(\sigma(W)\) is the spectrum of \(W\)._
Proof.: Let \(\lambda_{\max}:=\max\left(\sigma(W)\right)\). A key thing to observe is that \(\left\langle\psi^{\prime}\right|W\left|\psi^{\prime}\right\rangle\) is the success probability of strategy \(S^{\prime}=(\{A_{sa}\},\{B_{tb}\},\left|\psi^{\prime}\right\rangle)\) (see Lemma 2.4). It now follows that
\[\left\langle\psi^{\prime}\right|W\left|\psi^{\prime}\right\rangle=\omega_{q}(G )\quad\Longleftrightarrow\quad S^{\prime}\text{ is an optimal strategy for }G\quad\Longleftrightarrow\quad\left|\psi^{\prime}\right\rangle\in Q. \tag{9}\]
Since the unit vectors \(\left|\psi\right\rangle\) that maximize the expression \(\left\langle\psi^{\prime}\right|W\left|\psi^{\prime}\right\rangle\) are precisely the quantum states in \(\operatorname{St}\left(V_{\lambda_{\max}}\right)\), the desired statement follows.
Our main use for this lemma is that it, for pure self-tested strategies, allows us to characterise \(Q\) in relation to a vector space. In particular, \(Q=\operatorname{St}(\operatorname{span}(Q))\).
While the above result already characterises the state space of optimal states for a given set of measurement operators, it turns out that in some cases, it is actually possible to say more; namely that if all those states have full Schmidt rank, then that space necessarily has dimension \(1\). The statement and proof of this result due to Cubitt, Montanaro and Winter [13], though it is restated here for the sake of completeness.
**Lemma 4.9** ([13]).: _Let \(S\) be a subspace of the bipartite space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), where \(\dim\mathcal{H}_{A}=\dim\mathcal{H}_{B}=d\). If every nonzero state in \(S\) has Schmidt rank \(d\), then \(\dim S=1\)._
Proof.: For contradiction, we assume that there exists an (at least) two-dimensional subspace \(S\) where every unit vector in \(S\) has Schmidt rank \(d\). Let \(\left|\varphi\right\rangle,\left|\psi\right\rangle\in S\subseteq\mathbb{C}^{d} \otimes\mathbb{C}^{d}\) be linearly independent. We will show that there exists \(x\in\mathbb{C}\) such that the (unnormalised) \(\left|\phi_{x}\right\rangle=\left|\varphi\right\rangle+x\left|\psi\right\rangle\) has Schmidt rank less than \(d\), which contradicts the hypothesis.
Arrange the coefficients of vectors in the computational basis \(\{\left|i\right\rangle\left|j\right\rangle\}_{i,j=0,\ldots,d-1}\), into a \(d\times d\) matrix. Then the Schmidt rank of the state vector equals the linear rank of the associated matrix. So, \(\left|\psi_{x}\right\rangle\) has Schmidt rank less than \(d\) if and only if the determinant of the associated matrix of \(\left|\psi_{x}\right\rangle\) is \(0\). Also note that the determinant is a non-constant polynomial in \(x\) of degree \(d\). Hence, it must have a root \(x_{0}\in\mathbb{C}\), and the corresponding \(\left|\phi_{x_{0}}\right\rangle\) has Schmidt rank less than \(d\).
Apart from this, we will need another result characterising the Schmidt rank of optimal states; namely that they all have minimal Schmidt rank. This will be useful for proving that the conditions of the above result holds.
**Lemma 4.10**.: _Let \(G\) be nonlocal game that pure self-tests the strategy \(\tilde{S}=(\left|\tilde{\psi}\right\rangle,\{\tilde{A}_{sa}\},\{\tilde{B}_{tb }\})\). Then \(\left|\tilde{\psi}\right\rangle\) has minimum Schmidt rank across the states of all optimal pure strategies._
Proof.: Let \(t\) be the Schmidt rank of \(\left|\tilde{\psi}\right\rangle\). For contradiction, assume that there exists a strategy \(S=(\left|\phi\right\rangle,\{A_{sa}\},\{B_{tb}\})\) with Schmidt rank \(s<t\). By the definition of pure self-testing, there exist local isometries \(V_{A},V_{B}\) and a state \(\left|\text{aux}\right\rangle\) such that
\[\left(V_{A}\otimes V_{B}\right)\left|\phi\right\rangle=\left|\tilde{\psi} \right\rangle\otimes\left|\text{aux}\right\rangle.\]
Since local isometries preserve Schmidt rank, the Schmidt rank (with respect to Alice/Bob partition) of \(\left|\tilde{\psi}\right\rangle\otimes\left|\text{aux}\right\rangle\) is \(s\). This is a contradiction, since tensoring with a state \(\left|\text{aux}\right\rangle\) cannot decrease the Schmidt rank of \(\left|\tilde{\psi}\right\rangle\).
Now we are ready to present the following proposition:
**Proposition 4.11**.: _Let \(\tilde{S}=(\left|\tilde{\psi}\right\rangle,\{\tilde{A}_{sa}\},\{\tilde{B}_{tb}\})\) be the canonical full-rank strategy self-tested by a game \(G=(\mathcal{S},\mathcal{T},\mathcal{A},\mathcal{B},\pi,\mathcal{V})\). Define the operator_
\[\tilde{W}:=\sum_{a,b,s,t}\pi(s,t)\mathcal{V}(a,b|s,t)(\tilde{A}_{sa}\otimes \tilde{B}_{tb}),\]
_and the set_
\[\tilde{Q}:=\{|\phi\rangle:\langle\phi|\tilde{W}|\phi\rangle=\omega_{q}(G)\}.\]
_Then \(\dim\operatorname{span}\{\tilde{Q}\}=1\)._
Proof.: Let \(\lambda_{0}\) be the largest eigenvalue of \(\tilde{W}\), which is also the quantum value of \(G\). Let \(V_{\lambda_{0}}\) be the eigenspace of \(\tilde{W}\) corresponding to \(\lambda_{0}\). Then \(\tilde{Q}\) coincides the set of all unit vectors in \(V_{\lambda_{0}}\).
By Lemma 4.10, \(|\tilde{\psi}\rangle\) has the minimal rank among states in \(\tilde{Q}\). Then in \(V_{\lambda_{0}}\) all non-zero vectors are full-rank. By Lemma 4.9, \(\dim V_{\lambda_{0}}=1=\dim\operatorname{span}(\tilde{Q})\).
#### 4.3.2 Pure self-tests imply mixed self-tests
The following lemma can be seen as a first step in the proof of showing that a pure, robust self-test is a mixed, robust self-test, if the canonical quantum strategy has a state of full Schmidt rank. It shows that any purification of a mixed quantum strategy can be \(\varepsilon^{\prime}\)-dilated to a quantum strategy that uses the operators of the canonical strategy of the pure self-test.
**Lemma 4.12**.: _Let \(G\) be a robust, pure self-test for \(\tilde{S}=\left(\left|\tilde{\psi}\right\rangle,\{\tilde{A}_{sa}\},\{\tilde{B}_ {tb}\}\right)\). Let \(\rho_{AB}\) be a mixed state for which \(S=(\rho_{AB},\{A_{sa}\},\{B_{tb}\})\) is a \(\delta\)-optimal strategy and consider \(S^{(1)}=(\left|\psi\right\rangle_{ABP},\{A_{sa}\},\{B_{tb}\otimes 1_{P}\})\), where \(\left|\psi\right\rangle_{ABP}\) is a purification of \(\rho_{AB}\). Then \(S^{(2)}=\left(X\left|\psi\right\rangle_{ABP},\{\tilde{A}_{sa}\otimes 1_{ \tilde{A}}\},\{\tilde{B}_{tb}\otimes 1_{\tilde{B}}\otimes 1_{P}\}\right)\) is a local \(2\varepsilon\)-dilation of \(S^{(1)}\), where \(X\) is an isometry obtained from the robust, pure self-test._
Proof.: We have two pure strategies, \(S^{(1)}\) and \((\left|\psi\right\rangle_{ABP},\{A_{sa}\otimes 1_{P}\},\{B_{tb}\})\), which are \(\delta\)-optimal. Then by the pure robustness, we have that
\[V_{AP}\otimes V_{B}[(A_{sa}\otimes 1_{P})\otimes 1_{B}\left|\psi \right\rangle_{ABP}]\approx_{\varepsilon}(\tilde{A}_{sa}\otimes 1_{\tilde{B}} \left|\tilde{\psi}\right\rangle)\otimes\left|\mathrm{aux}_{1}\right\rangle,\] \[V_{AP}\otimes V_{B}[(1_{A}\otimes 1_{P})\otimes B_{tb}\left|\psi \right\rangle_{ABP}]\approx_{\varepsilon}(1_{\tilde{A}}\otimes\tilde{B}_{tb} \left|\tilde{\psi}\right\rangle)\otimes\left|\mathrm{aux}_{1}\right\rangle, \tag{10}\] \[V_{AP}\otimes V_{B}[\left|\psi\right\rangle_{ABP}]\approx_{ \varepsilon}|\tilde{\psi}\rangle\otimes\left|\mathrm{aux}_{1}\right\rangle,\] (11) \[W_{A}\otimes W_{BP}[A_{sa}\otimes(1_{B}\otimes 1_{P})\left|\psi \right\rangle_{ABP}]\approx_{\varepsilon}(\tilde{A}_{sa}\otimes 1_{\tilde{B}} \left|\tilde{\psi}\right\rangle)\otimes\left|\mathrm{aux}_{2}\right\rangle,\] (12) \[W_{A}\otimes W_{BP}[1_{A}\otimes(B_{tb}\otimes 1_{P})\left|\psi \right\rangle_{ABP}]\approx_{\varepsilon}(1_{\tilde{A}}\otimes\tilde{B}_{tb} \left|\tilde{\psi}\right\rangle)\otimes\left|\mathrm{aux}_{2}\right\rangle,\] \[W_{A}\otimes W_{BP}[\left|\psi\right\rangle_{ABP}]\approx_{ \varepsilon}|\tilde{\psi}\rangle\otimes\left|\mathrm{aux}_{2}\right\rangle. \tag{13}\]
Let \(X:=W_{A}\otimes V_{B}\otimes 1_{P}\). We need to show that
\[X[A_{sa}\otimes 1_{B}\otimes 1_{P}\left|\psi\right\rangle_{ABP}] \approx_{2\varepsilon}(\tilde{A}_{sa}\otimes 1_{\tilde{B}} \otimes 1_{\tilde{A}\hat{B}P})X\left|\psi\right\rangle_{ABP},\] \[X[1_{A}\otimes B_{tb}\otimes 1_{P}\left|\psi\right\rangle_{ABP}] \approx_{2\varepsilon}(1_{\tilde{A}}\otimes\tilde{B}_{tb} \otimes 1_{\tilde{A}\hat{B}P})X\left|\psi\right\rangle_{ABP}\]
for all \(a,b,s,t\).
Equations (10), (11) imply
\[V_{AP}\otimes V_{B}[(1_{A}\otimes 1_{P})\otimes B_{tb}\left|\psi \right\rangle_{ABP}]\approx_{2\varepsilon}(1_{\tilde{A}}\otimes\tilde{B}_{tb} \otimes 1_{\tilde{A}\hat{B}P})(V_{AP}\otimes V_{B})[\left|\psi\right\rangle_{ABP}].\]
Applying \(V_{AP}^{*}\otimes 1_{\hat{B}\hat{B}}\) to the left of both sides yields
\[1_{AP}\otimes V_{B}B_{tb}[\left|\psi\right\rangle_{ABP}]\approx_{2\varepsilon} 1_{AP}\otimes(\tilde{B}_{tb}\otimes 1_{\hat{B}})V_{B}[\left|\psi\right\rangle_{ABP}]. \tag{14}\]
Similarly, we obtain
\[W_{A}A_{sa}\otimes 1_{BP}[\left|\psi\right\rangle_{ABP}]\approx_{2\varepsilon} (\tilde{A}_{sa}\otimes 1_{\hat{A}})W_{A}\otimes 1_{BP}[\left|\psi\right\rangle_{ABP}] \tag{15}\]
from the equations (12), (13).
Now, applying \(W_{A}\otimes 1_{\hat{B}\hat{B}P}\) to the left of both sides of equation (14) gives us
\[W_{A}\otimes V_{B}\otimes 1_{P}[1_{A}\otimes B_{tb}\otimes 1_{P}\left|\psi \right\rangle_{ABP}]\approx_{2\varepsilon}(1_{\tilde{A}}\otimes\tilde{B}_{tb }\otimes 1_{\tilde{A}\hat{B}P})(W_{A}\otimes V_{B}\otimes 1_{P})\left|\psi \right\rangle_{ABP}.\]
Finally, we deduce
\[W_{A}\otimes V_{B}\otimes 1_{P}[A_{sa}\otimes 1_{B}\otimes 1_{P}\left|\psi \right\rangle_{ABP}]\approx_{2\varepsilon}(\tilde{A}_{sa}\otimes 1_{\tilde{B}} \otimes 1_{\hat{A}\hat{B}P})(W_{A}\otimes V_{B}\otimes 1_{P})\left|\psi \right\rangle_{ABP}\]
from applying \(V_{B}\otimes 1_{\tilde{A}\hat{A}P}\) to the left of both sides of equation (15).
The next lemma shows that the strategy we constructed before is actually almost optimal for the nonlocal game.
**Lemma 4.13**.: _Let \(G\) be a pure, robust self-test of \(\tilde{S}\) and let \(S^{(2)}\) be as in Lemma 4.12. Then \(S^{(2)}\) is \((\delta+C\varepsilon)\)-optimal, where \(C\) depends on \(G\)._
Proof.: Let \(p_{1}(a,b|s,t)\) and \(p_{2}(a,b|st)\) be the correlation of \(S^{(1)}\) and \(S^{(2)}\), respectively. It holds
\[\left|p_{1}(a,b|s,t)-p_{2}(a,b|s,t)\right|\] \[=\left|\operatorname{Tr}(((A_{sa}\otimes B_{tb}\otimes 1_{P})\] \[\qquad\qquad-(W_{A}\otimes V_{B}\otimes 1_{P})^{*}(\tilde{A}_{sa} \otimes\tilde{B}_{tb}\otimes 1_{\hat{A}\hat{B}P})(W_{A}\otimes V_{B}\otimes 1_{P})) \left|\psi\right\rangle\left\langle\psi\right|_{ABP})\right|\] \[=\left|\left\langle(W_{A}\otimes V_{B}\otimes 1_{P})\left|\psi \right\rangle,((W_{A}\otimes V_{B}\otimes 1_{P})(A_{sa}\otimes B_{tb} \otimes 1_{P})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-(\tilde {A}_{sa}\otimes\tilde{B}_{tb}\otimes 1_{\hat{A}\hat{B}P})(W_{A}\otimes V_{B} \otimes 1_{P}))\left|\psi\right\rangle\right|\] \[\leq\left|\left|(W_{A}\otimes V_{B}\otimes 1_{P})\left|\psi \right\rangle\right|\left|\cdot\left|\left|((W_{A}\otimes V_{B}\otimes 1_{P})(A_{sa} \otimes B_{tb}\otimes 1_{P})\right.\right.\right.\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.\left.\left.\left.\left.\left. \left(A_{sa}\otimes V_{B}\otimes 1_{\hat{A}\hat{B}P})(W_{A}\otimes V_{B} \otimes 1_{P})\right)\left|\psi\right\rangle\right|\right|\right.\] \[=\left|\left|((W_{A}\otimes V_{B}\otimes 1_{P})(A_{sa}\otimes B_{tb} \otimes 1_{P})-(\tilde{A}_{sa}\otimes\tilde{B}_{tb}\otimes 1_{\hat{A}\hat{B}P})(W_{A} \otimes V_{B}\otimes 1_{P}))\left|\psi\right\rangle\right|\right|\]
for all \(a,b,s,t\), where the inequality comes from the Cauchy-Schwarz inequality. Since \(S^{(2)}\) is a local \(2\epsilon\)-dilation of \(S^{(1)}\) by Lemma 4.12, we know
\[\left|\left|((W_{A}\otimes V_{B}\otimes 1_{P})(A_{sa}\otimes B_{tb} \otimes 1_{P})-(\tilde{A}_{sa}\otimes\tilde{B}_{tb}\otimes 1_{\hat{A}\hat{B}P})(W_{A} \otimes V_{B}\otimes 1_{P}))\left|\psi\right\rangle\right|\] \[\leq 2\max\{|O_{A}|,|O_{B}|\}\varepsilon.\]
Thus, we get
\[\left|w(S^{(1)},G)- w(S^{(2)},G)\right|\] \[=|\sum_{s,t}\pi(s,t)\sum_{a,b}\mathcal{V}(a,b|s,t)(p_{1}(a,b|s,t)- p_{2}(a,b|s,t))|\] \[\leq\sum_{s,t}\pi(s,t)\sum_{a,b}\mathcal{V}(a,b|s,t)|(p_{1}(a,b|s,t )-p_{2}(a,b|s,t))|\] \[\leq 2\left(\sum_{s,t}\pi(s,t)\sum_{a,b}\mathcal{V}(a,b|s,t) \right)\max\{|O_{A}|,|O_{B}|\}\varepsilon.\]
Since \(S^{(1)}\) is \(\delta\)-optimal, we deduce that \(S^{(2)}\) is \((\delta+C\varepsilon)\)-optimal.
Finally, we will see that the almost optimal strategy from the previous lemma can be \(\varepsilon^{\prime}\)-dilated to the canonical strategy of the pure, robust self-test.
**Lemma 4.14**.: _Let \(G\) be a pure, robust self-test of \(\tilde{S}\), where \(\ket{\tilde{\psi}}\) has full Schmidt rank and let \(S^{(2)}\) be as in Lemma 4.12. Then \(\tilde{S}\) is a local \((\sqrt{2\frac{\delta+C\varepsilon}{\Delta}})\)-dilation of \(S^{(2)}\), where \(\Delta\) depends on \(\tilde{S}\) and \(G\)._
Proof.: We will show that
\[\|X\ket{\psi}_{ABP}-\ket{\tilde{\psi}}\otimes\ket{\mathrm{aux}}\|\leq\sqrt{2 \frac{\delta+C\varepsilon}{\Delta}}.\]
Define the game operator \(\tilde{W}:=\sum_{a,b,s,t}\pi(s,t)\mathcal{V}(a,b|s,t)(\tilde{A}_{sa}\otimes \tilde{B}_{tb})\). Let \(\{\lambda_{i}\}\) be the eigenvalues of \(\tilde{W}\) (ordered decreasingly), then by Proposition 4.11 we have \(\lambda_{0}>\lambda_{1}\). Consider the decomposition of \(X\ket{\psi}_{ABP}\) in the eigenspaces of \(\tilde{W}\)
\[X\ket{\psi}_{ABP}=\sum_{j=0}^{d-1}\sqrt{p_{j}}\ket{\varphi_{j}}_{\tilde{A} \tilde{B}}\ket{\mathrm{aux}_{j}}_{\tilde{A}\tilde{B}P},\]
where \(\{\ket{\varphi_{j}}\}\) are eigenvectors of \(\tilde{W}\). By definition \(\ket{\varphi_{0}}=\ket{\tilde{\psi}}\). We will now lower bound \(p_{0}\) to see that \(X\ket{\psi}_{ABP}\) is close to \(\ket{\tilde{\psi}}\otimes\ket{\mathrm{aux}_{0}}\). It holds
\[w(S^{(2)},G)=\mathrm{Tr}[W\left(\sum_{i}p_{i}\ket{\varphi_{i}}\!\bra{\varphi_ {i}}\right)]=\sum_{i}p_{i}\lambda_{i}\leq p_{0}\lambda_{0}+(1-p_{0})\lambda_{1}\]
for the game value of \(S^{(2)}\). On the other hand, we know that \(w(S^{(2)},G)\geq\lambda_{0}-(\delta+C\varepsilon)\) by Lemma 4.13. Therefore, we have
\[\lambda_{0}-(\delta+C\varepsilon)\leq p_{0}\lambda_{0}+(1-p_{0})\lambda_{1},\]
which implies \(p_{0}\geq 1-\frac{\delta+C\varepsilon}{\Delta}\), where \(\Delta:=\lambda_{0}-\lambda_{1}\). Then
\[\|X\ket{\psi}_{ABP}-\ket{\tilde{\psi}}\otimes\ket{\mathrm{aux}_ {0}}\|= \sqrt{(1-\sqrt{p_{0}})^{2}+\sum_{j>0}p_{j}}\] \[= \sqrt{2-2\sqrt{p_{0}}}\] \[\leq \sqrt{2\frac{\delta+C\varepsilon}{\Delta}}.\]
We therefore get
\[\|(1_{\tilde{A}}\otimes\tilde{B}_{tb}\otimes 1_{\tilde{B}P})X\ket{ \psi}_{ABP} -(1_{\tilde{A}}\otimes\tilde{B}_{tb})\ket{\tilde{\psi}}\otimes\ket{ \mathrm{aux}_{0}}\|\] \[\leq\|X\ket{\psi}_{ABP}-\ket{\tilde{\psi}}\otimes\ket{\mathrm{ aux}_{0}}\|\] \[\leq\sqrt{2\frac{\delta+C\varepsilon}{\Delta}},\]
since \(1_{\tilde{A}}\otimes\tilde{B}_{tb}\otimes 1_{\tilde{B}P}\) is a contraction. Similarly, we obtain
\[\|(\tilde{A}_{sa}\otimes 1_{\tilde{B}}\otimes 1_{\tilde{B}P})X\ket{\psi}_{ABP}-( \tilde{A}_{sa}\otimes 1_{\tilde{B}})\ket{\tilde{\psi}}\otimes\ket{\mathrm{aux}_{0}} \|\leq\sqrt{2\frac{\delta+C\varepsilon}{\Delta}}.\]
This finishes the proof.
By putting together the previous lemmas, we can prove Theorem 4.7.
Proof of Theorem 4.7.: Let \(\varepsilon\geq 0\) and let \(S\) be a \(\delta\)-optimal, mixed strategy, where we choose \(\delta\) as in the robust pure self-test. Then by Lemmas 4.12 and 4.14 as well as transitivity, we know that \(\tilde{S}\) is a local \((2\varepsilon+\sqrt{2\frac{\delta+C\varepsilon}{\Delta}})\)-dilation of the pure quantum strategy \(S^{(1)}\) associated to \(S\).
We note that, Theorem 4.7, or more specifically, Lemma 4.14 does not directly translate to self-testing from correlation, essentially because that there is no game operator for self-testing from correlation. Nevertheless, the result still holds if we impose an additional requirement of the correlation to be extreme. See Appendix A for a full proof.
### Proof of Theorem 4.1
We are now ready to prove our main theorem:
Proof of Theorem 4.1.: (a): by Theorem 4.3, \(G\) is a robust pure self-test for \(\tilde{S}\). Then by Theorem 4.7\(G\) is an assumption-free self-test.
(b): by Theorem 4.5, \(G\) is a robust pure self-test for \(\tilde{S}\). By Theorem 4.2, \(\tilde{S}\) is support-preserving. So we take its restriction \(\tilde{S}_{\text{res}}\), and \(G\) also robust pure self-tests \(\tilde{S}_{\text{res}}\). Then using Theorem 4.7\(G\) is an assumption-free self-test for \(\tilde{S}_{\text{res}}\). From Proposition 3.8, \(G\) is an assumption-free self-test for \(\tilde{S}\).
## 5 Equivalence of Definitions
In this section, we examine two commonly cited definitions of local dilation in existing literature. We demonstrate that under specific circumstances, each of these definitions is equivalent to with the ones we have adopted.
### Local dilation in a matrix form
When the arbitrary strategy is mixed, rather than considering local dilation condition (Definition 2.5) in a "vector form" one could instead consider a "matrix-form" condition (see Appendix C in [1]):
**Definition 5.1** (Local dilation (alternative)).: _Given two strategies_
\[S =(\rho_{AB}\in B(\mathcal{H}_{A}\otimes\mathcal{H}_{B}),\{A_{sa} \}_{s\in\mathcal{S},a\in\mathcal{A}},\{B_{tb}\}_{t\in\mathcal{T},b\in \mathcal{B}})\text{ and }\] \[\tilde{S} =(|\tilde{\psi}\rangle\in\mathcal{H}_{\tilde{A}}\otimes\mathcal{ H}_{\tilde{B}},\{\tilde{A}_{sa}\}_{s\in\mathcal{S},a\in\mathcal{A}},\{\tilde{B}_{ tb}\}_{t\in\mathcal{T},b\in\mathcal{B}})\]
_we write \(S\hookrightarrow_{1}\tilde{S}\) if there exist spaces \(\mathcal{H}_{\hat{A}},\mathcal{H}_{\hat{B}}\), a local isometry \(U=U_{A}\otimes U_{B}\), with \(U_{A}:\mathcal{H}_{A}\rightarrow\mathcal{H}_{\tilde{A}}\otimes\mathcal{H}_{ \hat{A}}\), \(U_{B}:\mathcal{H}_{B}\rightarrow\mathcal{H}_{\tilde{B}}\otimes\mathcal{H}_{ \tilde{B}}\) and a state \(|\text{\rm aux}\rangle\in B(\mathcal{H}_{\hat{A}}\otimes\mathcal{H}_{\hat{B}})\) such that for all \(s,t,a,b\) we have_
\[U(A_{sa}\otimes B_{tb})\rho_{AB}U^{*}=(\tilde{A}_{sa}\otimes\tilde{B}_{tb})\, |\tilde{\psi}\rangle\langle\tilde{\psi}|\otimes\sigma_{\text{\rm aux}}. \tag{16}\]
This kind of definition of local dilation can also be used to construct an alternative definition for self-testing. We will in this appendix prove that the two definitions for local dilations in fact are equivalent.
**Lemma 5.2**.: _Let \(\left|\phi\right\rangle\in\mathcal{H}_{S}\) and \(\left|\psi\right\rangle\in\mathcal{H}_{T}\) be states and let \(\{S_{i}\}_{i}\subset B(\mathcal{H}_{S})\) and \(\{T_{i}\}_{i}\subseteq B(\mathcal{H}_{T})\) be POVMs. Let \(U:\mathcal{H}_{S}\rightarrow\mathcal{H}_{T}\) be an isometry. If_
\[US_{i}\left|\phi\right\rangle\!\left\langle\phi\right|U^{*}=(T_{i}\left|\psi \right\rangle\!\left\langle\psi\right|)\otimes\left|\mathrm{aux}\right\rangle \!\left\langle\mathrm{aux}\right| \tag{17}\]
_for all \(i\), then there exists a state \(\left|\mathrm{aux}^{\prime}\right\rangle\), such that_
\[US_{i}\left|\phi\right\rangle=(T_{i}\left|\psi\right\rangle)\left|\mathrm{aux }^{\prime}\right\rangle \tag{18}\]
_and_
\[\left|\mathrm{aux}\right\rangle\!\left\langle\mathrm{aux}\right|=\left| \mathrm{aux}^{\prime}\right\rangle\!\left\langle\mathrm{aux}^{\prime}\right| \tag{19}\]
Proof.: Sum (17) over \(i\) to get
\[U\left|\phi\right\rangle\!\left\langle\phi\right|U^{*}=\left|\psi\right\rangle \!\left\langle\psi\right|\otimes\left|\mathrm{aux}\right\rangle\!\left\langle \mathrm{aux}\right| \tag{20}\]
using the fact that \(\{S_{i}\}_{i}\) and \(\{T_{i}\}_{i}\) are POVMs. Both of these operators have rank one, and therefore have a single non-zero eigenvalue. Observe that the eigenspace with the non-zero eigenvalue of the left-hand side is spanned by \(U\left|\phi\right\rangle\) and the eigenspace with the non-zero eigenvalue of the right-hand side is spanned by \(\left|\psi\right\rangle\left|\mathrm{aux}\right\rangle\). Furthermore, these two spaces are equivalent. We can therefore conclude that
\[U\left|\phi\right\rangle=e^{-i\gamma}\left|\psi\right\rangle\left|\mathrm{aux }\right\rangle, \tag{21}\]
for some \(\gamma\in\mathbb{R}\). They are in other words they are equal up to global phase. This phase change can simply be absorbed into \(\left|\mathrm{aux}\right\rangle\), creating a new state \(\left|\mathrm{aux}^{\prime}\right\rangle=e^{-i\gamma}\left|\mathrm{aux}\right\rangle\). Right multiply (17) with (21) and we get the desired equation of
\[US_{i}\left|\phi\right\rangle=(T_{i}\left|\psi\right\rangle)\left|\mathrm{aux }^{\prime}\right\rangle, \tag{22}\]
proving (18). Finally, (19) follows directly from the definition of \(\left|\mathrm{aux}^{\prime}\right\rangle\).
We are also going to use the following observation, proven in [10] as Observation C.1:
**Lemma 5.3** ([10]).: _Let \(\rho_{ST}^{0},\rho_{ST}^{1}\in B(\mathcal{H}_{S}\otimes\mathcal{H}_{T})\) be positive semidefinite operators. If_
\[\rho_{ST}^{0}+\rho_{ST}^{1}=\left|\psi\right\rangle\!\left\langle\psi\right|_{S }\otimes\sigma_{T}\]
_for some \(\left|\psi\right\rangle\!\left\langle\psi\right|_{S}\in B(\mathcal{H}_{S})\) and \(\sigma_{T}\in B(\mathcal{H}_{T})\), then_
\[\rho_{ST}^{i}=\left|\psi\right\rangle\!\left\langle\psi\right|_{S}\otimes \sigma_{T}^{i}\]
_for \(i\in\{0,1\}\) and some \(\sigma_{T}^{i}\in B(\mathcal{H}_{T})\)_
The next theorem shows that the "matrix-form" condition (16) is in fact equivalent to the vector-form condition (3). The take-away here is that it does not matter which of the two variants of local dilation we base our self-testing definition on.
**Theorem 5.4**.: _Let \(S\) and \(\tilde{S}\) be two strategies. Then_
\[S\hookrightarrow\tilde{S}\quad\Longleftrightarrow\quad S\hookrightarrow_{1} \tilde{S}\]
Proof.: Let \(\tilde{S}=(\left|\tilde{\psi}\right\rangle,\{\tilde{A}_{sa}\},\{\tilde{B}_{tb}\})\) and \(S=(\rho_{AB},\{A_{sa}\},\{B_{tb}\})\).
We start by showing \(S\hookrightarrow\tilde{S}\Rightarrow S\hookrightarrow_{1}\tilde{S}\). \(S\hookrightarrow\tilde{S}\) implies that for any purification \(\left|\psi\right\rangle_{ABP}\) of \(\rho_{AB}\) there exists local isometries \(U=U_{A}\otimes U_{B}\) and a state \(\left|\text{aux}\right\rangle\) such that
\[(U\otimes\mathbbm{1}_{P})(A_{sa}\otimes B_{tb}\otimes\mathbbm{1}_{P})\left| \psi\right\rangle_{ABP}=(\tilde{A}_{sa}\otimes\tilde{B}_{tb})\left|\tilde{ \psi}\right\rangle_{\tilde{A}\tilde{B}}\otimes\left|\text{aux}\right\rangle_{ \tilde{A}\tilde{B}}. \tag{23}\]
If we sum over \(a,b\), using \(\{A_{sa}\otimes B_{tb}\}_{a,b}\) is a POVM, we get
\[(U\otimes\mathbbm{1}_{P})\left|\psi\right\rangle_{ABP}=\left|\tilde{\psi} \right\rangle_{\tilde{A}\tilde{B}}\otimes\left|\text{aux}\right\rangle_{ \tilde{A}\tilde{B}}. \tag{24}\]
We can now combine (23) and (24) through an outer product to get
\[(U\otimes\mathbbm{1})(A_{sa}\otimes B_{tb}\otimes\mathbbm{1}_{P})\left|\psi \right\rangle\!\left\langle\psi\right|_{ABP}(U\otimes\mathbbm{1}_{P})^{*}=( \tilde{A}_{sa}\otimes\tilde{B}_{tb})\left|\tilde{\psi}\right\rangle\!\left\langle \tilde{\psi}\right|_{\tilde{A}\tilde{B}}\otimes\left|\text{aux}\right\rangle\! \left\langle\text{aux}\right|_{\tilde{A}\tilde{B}P}.\]
Finally, by using that \(\left|\psi\right\rangle\!\left\langle\psi\right|_{ABP}\) is a purification of \(\rho_{AB}\) and tracing out the purification space, we get
\[U(A_{sa}\otimes B_{tb})\rho_{AB}U^{*}=(\tilde{A}_{sa}\otimes\tilde{B}_{tb}) \left|\tilde{\psi}\right\rangle\!\left\langle\tilde{\psi}\right|_{\tilde{A} \tilde{B}}\otimes\text{Tr}_{P}(\left|\text{aux}\right\rangle\!\left\langle \text{aux}\right|_{\tilde{A}\tilde{B}P}).\]
which shows \(S\hookrightarrow_{1}\tilde{S}\).
Secondly, we will show \(S\hookrightarrow_{1}\tilde{S}\Rightarrow S\hookrightarrow\tilde{S}\). \(S\hookrightarrow_{1}\tilde{S}\) implies there exist a local isometry \(U=U_{A}\otimes U_{B}\) such that
\[U(A_{sa}\otimes B_{tb})\rho_{AB}U^{*}=(\tilde{A}_{sa}\otimes\tilde{B}_{tb}) \left|\tilde{\psi}\right\rangle\!\left\langle\tilde{\psi}\right|\otimes\sigma_ {\text{aux}}. \tag{25}\]
for some state \(\sigma_{\text{aux}}\in B(\mathcal{H}_{\hat{A}}\otimes\mathcal{H}_{\hat{B}})\). If we sum both sides over \(a,b\), using \(\{A_{sa}\otimes B_{tb}\}_{a,b}\) is a POVM, then we get
\[U\rho_{AB}U^{*}=\left|\tilde{\psi}\right\rangle\!\left\langle\tilde{\psi} \right|\otimes\sigma_{\text{aux}}. \tag{26}\]
Now, consider any purification \(\left|\psi\right\rangle_{ABP}\) of \(\rho_{AB}\). Consider the Schmidt decomposition of \(\left|\psi\right\rangle_{ABP}\) over the spaces \((\mathcal{H}_{A}\otimes\mathcal{H}_{B})\) and \(\mathcal{H}_{P}\). This gives
\[\left|\psi\right\rangle_{ABP}=\sum_{i}\lambda_{i}\left|\alpha_{i}\right\rangle _{AB}\left|\beta_{i}\right\rangle_{P}. \tag{27}\]
If we trace out the purification space of this state, using that it indeed is a purification, we get that
\[\rho_{AB}=\sum_{i}\lambda_{i}^{2}\left|\alpha_{i}\right\rangle\!\left\langle \alpha_{i}\right|. \tag{28}\]
If we now substitute (28) into (29) we arrive at
\[\sum_{i}\lambda_{i}^{2}U\left|\alpha_{i}\right\rangle\!\left\langle\alpha_{i} \right|U^{*}=\left|\tilde{\psi}\right\rangle\!\left\langle\tilde{\psi}\right| \otimes\sigma_{\text{aux}}. \tag{29}\]
Observe that this is a sum of positive semidefinite operators, and therefore by Lemma 5.3 we have
\[U\left|\alpha_{i}\right\rangle\!\left\langle\alpha_{i}\right|U^{*}=\left| \tilde{\psi}\right\rangle\!\left\langle\tilde{\psi}\right|\otimes\left|\text{ aux}_{i}\right\rangle\!\left\langle\text{aux}_{i}\right|, \tag{30}\]
for some pure state \(\left|\text{aux}_{i}\right\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). By Lemma 5.2, there exists a state \(\left|\text{aux}_{i}^{\prime}\right\rangle\) such that
\[U\left|\alpha_{i}\right\rangle=\left|\tilde{\psi}\right\rangle\!\left|\text{ aux}_{i}^{\prime}\right\rangle. \tag{31}\]
and \(\left|\text{aux}_{i}^{\prime}\right\rangle\left\langle\text{aux}_{i}^{\prime} \right|=\left|\text{aux}_{i}\right\rangle\left\langle\text{aux}_{i}\right|\). We now fix \(i\) and right-multiply (25) with (31), after simplifying both sides, this gives
\[U(A_{sa}\otimes B_{tb})\left|\alpha_{i}\right\rangle=\left(\left(\tilde{A}_{sa} \otimes\tilde{B}_{tb}\right)\left|\tilde{\psi}\right\rangle\right)\left|\text{ aux}_{i}^{\prime}\right\rangle \tag{32}\]
where we have used the fact that \(\left|\alpha_{i}\right\rangle\) and \(\left|\alpha_{j}\right\rangle\) are orthogonal and \(\left|\text{aux}_{i}^{\prime}\right\rangle\) and \(\left|\text{aux}_{j}^{\prime}\right\rangle\) are orthogonal when \(i\neq j\). Finally, apply \(U\otimes\mathds{1}\) to \(\left|\psi\right\rangle_{ABP}\) as
\[(U\otimes\mathds{1})(A_{sa}\otimes B_{tb}\otimes\mathds{1}_{P}) \left|\psi\right\rangle_{ABP} =\sum_{i}\lambda_{i}U(A_{sa}\otimes B_{tb})\left|\alpha_{i} \right\rangle_{AB}\left|\beta_{i}\right\rangle_{P}\] \[=(\tilde{A}_{sa}\otimes\tilde{B}_{tb})\left|\tilde{\psi}\right\rangle \otimes\left(\sum_{i}\lambda_{i}\left|\text{aux}_{i}^{\prime}\right\rangle \left|\beta_{i}\right\rangle_{P}\right)\]
where the first equality used the Schmidt decomposition of \(\left|\psi\right\rangle_{ABP}\) and the second equality substituted in (32). Setting
\[\left|\text{aux}^{\prime}\right\rangle=\left(\sum_{i}\lambda_{i}\left|\text{ aux}_{i}^{\prime}\right\rangle\left|\beta_{i}\right\rangle_{P}\right)\]
implies \(S\hookrightarrow\tilde{S}\).
### Extraction local dilation
Here we look at a slightly different definition for local dilation between pure full-rank strategies. The idea is that we can map the POVMs from one to the other via the conjugation of unitaries.
**Definition 5.5** (Extraction local dilation).: _Given two pure full-rank strategies_
\[S =(\left|\psi\right\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B},\{A_{sa}\}_{s\in\mathcal{S},a\in\mathcal{A}},\{B_{tb}\}_{t\in\mathcal{T},b \in\mathcal{B}})\text{ and }\] \[\tilde{S} =(\left|\tilde{\psi}\right\rangle\in\mathcal{H}_{\tilde{A}} \otimes\mathcal{H}_{\tilde{B}},\{\tilde{A}_{sa}\}_{s\in\mathcal{S},a\in \mathcal{A}},\{\tilde{B}_{tb}\}_{t\in\mathcal{T},b\in\mathcal{B}})\]
_we write \(S\hookrightarrow_{2}\tilde{S}\) if there exist a local unitary \(U=U_{A}\otimes U_{B}\), with \(U_{A}:\mathcal{H}_{A}\rightarrow\mathcal{H}_{\tilde{A}}\otimes\mathcal{H}_{ \tilde{A}}\), \(U_{B}:\mathcal{H}_{B}\rightarrow\mathcal{H}_{\tilde{B}}\otimes\mathcal{H}_{ \tilde{B}}\) such that for all \(s,t,a,b\) we have_
\[U\left|\psi\right\rangle =\left|\tilde{\psi}\right\rangle\otimes\left|\text{aux}\right\rangle, \tag{33}\] \[U_{A}A_{sa}U_{A}^{\ast} =\tilde{A}_{sa}\otimes\mathds{1}_{\tilde{A}},\] (34) \[U_{B}B_{tb}U_{B}^{\ast} =\tilde{B}_{tb}\otimes\mathds{1}_{\tilde{B}}. \tag{35}\]
_In case we want to name the local isometry and the auxiliary state from (33), we write \(S\underset{U,\left|\text{aux}\right\rangle}{\longrightarrow}\tilde{S}\)._
We show that for full rank strategies, this type of local dilations is in fact equivalent to the one presented in Definition 2.5. One thing that is important to note in the following lemma is that \(S\hookrightarrow_{2}\tilde{S}\) is only defined when both \(S\) and \(\tilde{S}\) are pure full-rank strategies.
**Lemma 5.6**.: _Let \(S\) and \(\tilde{S}\) be two pure full-rank strategies. Then_
\[S\hookrightarrow\tilde{S}\quad\Longleftrightarrow\quad S \hookrightarrow_{2}\tilde{S}\]
Proof.: Let \(\tilde{S}=(\left|\tilde{\psi}\right\rangle,\left\{\tilde{A}_{sa},\left\{\tilde{B}_{ tb}\right\}\right\})\) and \(S=(\left|\psi\right\rangle,\left\{A_{sa}\right\},\left\{B_{tb}\right\})\).
We start by showing \(S\xhookrightarrow{U,\left|\text{aux}\right\rangle}_{2}\tilde{S}\Rightarrow S \hookrightarrow\tilde{S}\). Tensor (34) and (35), and right-multiply with (33) to get
\[(U_{A}A_{sa}U_{A}^{*}\otimes U_{B}B_{tb}U_{B}^{*})U\left|\psi\right\rangle=( \tilde{A}_{sa}\otimes 1_{\tilde{A}}\otimes\tilde{B}_{tb}\otimes 1_{\tilde{B}}) \left|\tilde{\psi}\right\rangle\otimes\left|\text{aux}\right\rangle.\]
By \(U\) being a unitary, implying \(U^{*}U=1\), we have
\[U(A_{sa}\otimes B_{tb})\left|\psi\right\rangle=\left[(\tilde{A}_{sa}\otimes \tilde{B}_{tb})\left|\tilde{\psi}\right\rangle\right]\otimes\left|\text{aux} \right\rangle.\]
implying \(S\hookrightarrow\tilde{S}\).
We then show \(S\xhookrightarrow{V,\left|\text{aux}\right\rangle}\tilde{S}\Rightarrow S \hookrightarrow_{2}\tilde{S}\). Consider the Schmidt decomposition
\[\left|\text{aux}\right\rangle_{\tilde{A}\tilde{B}}=\sum_{i=0}^{r-1}\lambda_{ i}\left|\alpha_{i}\right\rangle\left|\beta_{i}\right\rangle \tag{36}\]
where \(r\) is the Schmidt rank of \(\left|\text{aux}\right\rangle_{\tilde{A}\tilde{B}}\). Furthermore, observe that \(\dim(\mathcal{H}_{A})=r\cdot\dim(\mathcal{H}_{\tilde{A}})\). Define the isometries
\[T_{\tilde{A}}:=\sum_{i=0}^{r-1}\left|\alpha_{i}\right\rangle\!\left\langle i \right|,\qquad T_{\tilde{B}}:=\sum_{i=0}^{r-1}\left|\beta_{i}\right\rangle\! \left\langle i\right|\]
where \(\left|i\right\rangle\in\mathbb{C}^{r}\), and observe that \(T_{\tilde{A}}T_{\tilde{A}}^{*}\) is a projection onto \(\text{supp}_{\tilde{A}}(\left|\text{aux}\right\rangle_{\tilde{A}\tilde{B}})\) and \(T_{\tilde{B}}T_{\tilde{B}}^{*}\) is a projection onto \(\text{supp}_{\tilde{B}}(\left|\text{aux}\right\rangle_{\tilde{A}\tilde{B}})\).
Next note that
\[\left|\tilde{\psi}\right\rangle\left|\text{aux}\right\rangle=VV^{*}V\left| \psi\right\rangle=VV^{*}\left|\tilde{\psi}\right\rangle\left|\text{aux}\right\rangle\]
and so \(V_{A}V_{A}^{*}\) act with identity on \(\text{supp}_{\tilde{A}\tilde{A}}(\left|\tilde{\psi}\right\rangle\left|\text{ aux}\right\rangle)=\mathcal{H}_{\tilde{A}}\otimes\text{supp}_{\tilde{A}}(\left|\text{aux}\right\rangle)\), and similarly for \(V_{B}V_{B}^{*}\) on \(\mathcal{H}_{\tilde{B}}\). Define
\[W_{A}:=(1_{\tilde{A}}\otimes T_{\tilde{A}}^{*})V_{A},\qquad W_{B}:=(1_{\tilde{ B}}\otimes T_{\tilde{B}}^{*})V_{B},\qquad\left|\text{aux}^{\prime}\right\rangle:=(T_{ \tilde{A}}^{*}\otimes T_{\tilde{B}}^{*})\left|\text{aux}\right\rangle.\]
Observe that \(\left|\text{aux}^{\prime}\right\rangle\in\mathbb{C}^{r}\otimes\mathbb{C}^{r}\) has full Schmidt rank and that \(W_{A}\) and \(W_{B}\) are square matrices. We claim that \(W_{A}\) is unitary. This can be seen by
\[W_{A}W_{A}^{*}=(1_{\tilde{A}}\otimes T_{\tilde{A}}^{*})V_{A}V_{A}^{*}(1_{ \tilde{A}}\otimes T_{\tilde{A}})=(1_{\tilde{A}}\otimes T_{\tilde{A}}^{*})(1_{ \tilde{A}}\otimes T_{\tilde{A}})=1_{\tilde{A}}\otimes 1_{r}\]
using that \(V_{A}V_{A}^{*}\) act with identity on \(\mathcal{H}_{\tilde{A}}\otimes\text{supp}_{\tilde{A}}(\left|\text{aux}\right\rangle)\) and that \(T_{\tilde{A}}\) is an isometry. A similar argument shows that \(W_{B}\) is unitary. Now we have
\[(W_{A}\otimes W_{B})(A_{sa}\otimes B_{tb})\left|\psi\right\rangle=(1_{\tilde{ A}\tilde{B}}\otimes T_{\tilde{A}\tilde{B}})\left[(\tilde{A}_{sa}\otimes\tilde{B}_{ tb})\left|\tilde{\psi}\right\rangle\right]\otimes\left|\text{aux}\right\rangle=\left[( \tilde{A}_{sa}\otimes\tilde{B}_{tb})\left|\tilde{\psi}\right\rangle\right] \otimes\left|\text{aux}^{\prime}\right\rangle.\]
Hence
\[(W_{A}A_{sa}W_{A}^{*}\otimes W_{B}B_{tb}W_{B}^{*})(W_{A}\otimes W_{B})\left|\psi \right\rangle=\left[(\tilde{A}_{sa}\otimes\tilde{B}_{tb})\left|\tilde{\psi} \right\rangle\right]\otimes\left|\text{aux}^{\prime}\right\rangle.\]
Since \(\left|\tilde{\psi}\right\rangle\otimes\left|\text{aux}^{\prime}\right\rangle\) has full Schmidt rank, it follows that
\[W_{A}A_{sa}W_{A}^{*}=\tilde{A}_{sa}\otimes 1_{r},\qquad W_{B}B_{tb}W_{B}^{*}= \tilde{B}_{tb}\otimes 1_{r}.\]
From this, we can conclude \(S\hookrightarrow_{2}\tilde{S}\).
Separation of Definitions
### Separating pure full-rank self-tests and pure self-tests
In Section 5.2 we showed that the extraction definition of local dilation and the definition we adopt are equivalent for full-rank strategies. Then it is clear that, pure full-rank "extraction" self-tests and pure full-rank self-tests are equivalent. Here we however shows that, there exist Bell inequalities that pure full-rank self-tests but does not pure self-test a full-rank strategy \(\tilde{S}\). Note that this specific \(\tilde{S}\) is not projective, therefore does not violate Theorem 4.5, and moreover, it showcases the necessity of the projectiveness of \(\tilde{S}\) as an assumption of Theorem 4.5.
Consider the canonical strategy of CHSH game \(\tilde{S}_{\text{CHSH}}=(\left|\Phi^{+}\right\rangle,\left\{\mathcal{X}, \mathcal{Z}\right\},\left\{\mathcal{H},\mathcal{G}\right\})\), where \(\left|\Phi^{+}\right\rangle=(\left|00\right\rangle+\left|11\right\rangle)/ \sqrt{2}\), and \(\mathcal{X},\mathcal{Z},\mathcal{H},\mathcal{G}\) are the measurements corresponding to the binary observables \(X,Z,H:=\frac{1}{\sqrt{2}}(X+Z),G:=\frac{1}{\sqrt{2}}(X-Z)\), respectively. (That is, \(\mathcal{X}=\left\{\left|+\right\rangle\langle+\right|,\left|-\right\rangle \langle-\right|\)}\), \(\mathcal{Z}=\left\{\left|0\right\rangle\langle 0\right|,\left|1\right\rangle \left\{1\right\}\), etc.) It is well-known that the CHSH game is an assumption-free self-test for \(\tilde{S}_{\text{CHSH}}\)[17].
Then we incorporate a three-output POVM \(\mathcal{M}=\left\{M_{0},M_{1},M_{2}\right\}\) to Bob's side, where
\[M_{0} =\frac{1}{3}(\mathbb{1}+Z),\] \[M_{1} =\frac{1}{3}(\mathbb{1}-\frac{1}{2}Z+\frac{\sqrt{3}}{2}X),\] \[M_{2} =\frac{1}{3}(\mathbb{1}-\frac{1}{2}Z-\frac{\sqrt{3}}{2}X).\]
It is clear that \(M_{i}\geq 0,\sum_{i}M_{i}=1\), so \(\mathcal{M}\) is a valid (non-projective) POVM. We will show that the strategy \(\tilde{S}=(\left|\Phi^{+}\right\rangle,\left\{\mathcal{X},\mathcal{Z}\right\},\left\{\mathcal{H},\mathcal{G},\mathcal{M}\right\})\) is full-rank self-tested. For this we need Holder's inequality
\[\operatorname{Tr}[AB]\leq\|A\|_{\infty}\|B\|_{1},\]
where \(\|A\|_{\infty}:=\sup_{\|v\|=1}\|Av\|\) is the infinity norm, and \(\|B\|_{1}:=\operatorname{Tr}|B|=\operatorname{Tr}[\sqrt{B^{*}B}]\) is the trace norm.
**Proposition 6.1**.: _Consider pure non-projective strategy \(\tilde{S}=(\left|\Phi^{+}\right\rangle,\left\{\mathcal{X},\mathcal{Z}\right\},\left\{\mathcal{H},\mathcal{G},\mathcal{M}\right\})\)._
1. _The correlation_ \(\tilde{p}\) _generated by_ \(\tilde{S}\) _is extreme (in the quantum set of correlation)._
2. \(\tilde{p}\) _pure full-rank self-tests_ \(\tilde{S}\)_._
Proof.: Consider a pure full-rank strategy \(S=(\left|\psi\right\rangle,\left\{\mathcal{A}_{0},\mathcal{A}_{1}\right\}, \left\{\mathcal{B}_{0},\mathcal{B}_{1},\mathcal{B}_{2}\right\})\) that generates \(\tilde{p}\). Let \(\mathcal{A}_{i}=\left\{A_{i}^{+},A_{i}^{-}\right\}\), \(\mathcal{B}_{i}=\left\{B_{i}^{+},B_{i}^{-}\right\}\) for \(i=0,1\) where \(A_{i}^{+},A_{i}^{-},B_{i}^{+},B_{i}^{-}\) are POVM elements. Define observables \(A_{i}:=A_{i}^{+}-A_{i}^{-}\) and \(B_{i}:=B_{i}^{+}-B_{i}^{-}\). Let \(\mathcal{B}_{2}=\left\{F_{0},F_{1},F_{2}\right\}\). Define the following two functionals:
\[\beta_{0} :=\left\langle\psi|A_{0}\otimes B_{0}+A_{0}\otimes B_{1}+A_{1} \otimes B_{0}-A_{1}\otimes B_{1}|\psi\right\rangle,\] \[\beta_{1} :=\left\langle\psi|A_{0}\otimes F_{0}-\frac{1}{2}A_{0}\otimes F_ {1}+\frac{\sqrt{3}}{2}A_{1}\otimes F_{1}-\frac{1}{2}A_{0}\otimes F_{2}-\frac{ \sqrt{3}}{2}A_{1}\otimes F_{2}|\psi\right\rangle.\]
And, from direct calculation, one can see that \(\tilde{S}\) satisfies \(\beta_{0}=2\sqrt{2}\) and \(\beta_{1}=1\).
To prove (a), we will show that the correlation satisfying \(\beta_{0}=2\sqrt{2}\) and \(\beta_{1}=1\) is unique in the quantum set.
Since the CHSH inequality is a full-rank self-test, achieving \(\beta_{0}=2\sqrt{2}\) implies that there exist unitary \(U_{A},U_{B}\) such that
\[U_{A}A_{0}U_{A}^{*} =Z\otimes\mathbbm{1}_{A^{\prime}},\] \[U_{A}A_{1}U_{A}^{*} =X\otimes\mathbbm{1}_{A^{\prime}},\] \[U_{A}\otimes U_{B}\ket{\psi} =\ket{\Phi^{+}}_{AB}\otimes\ket{\text{aux}}_{A^{\prime}B^{\prime }}.\]
Let us now consider the three-outcome measurement and define operators:
\[G_{j}:=\text{Tr}_{B^{\prime}}\left[(\mathbbm{1}_{B}\otimes\sigma_{B^{\prime}}^{ 1/2})U_{B}^{*}F_{j}U_{B}(\mathbbm{1}_{B}\otimes\sigma_{B^{\prime}}^{1/2}) \right],\]
where \(\sigma_{B^{\prime}}=\text{Tr}_{A^{\prime}}[\ket{\text{aux}}\bra{\text{aux}}]\). It is easy to see that the effective operators \(G_{j}\) fully determine the correlation, since the observables of Alice completely ignore the \(A^{\prime}\) system.
Let us also define \(\{T_{j}\}_{j=0}^{2}\) and note that they can be computed explicitly:
\[T_{0} :=\text{Tr}_{AA^{\prime}B^{\prime}}\left[U_{A}^{*}A_{0}U_{A} \otimes\mathbbm{1}_{BB^{\prime}}\ket{\psi}\bra{\psi}\right]=\frac{1}{2}Z,\] \[T_{1} :=\text{Tr}_{AA^{\prime}B^{\prime}}\left[U_{A}^{*}\big{(}-\frac{ 1}{2}A_{0}+\frac{\sqrt{3}}{2}A_{1}\big{)}U_{A}\otimes\mathbbm{1}_{BB^{\prime}} \ket{\psi}\bra{\psi}\right]=\frac{1}{2}\Big{(}-\frac{1}{2}Z+\frac{\sqrt{3}}{2} X\Big{)},\] \[T_{2} :=\text{Tr}_{AA^{\prime}B^{\prime}}\left[U_{A}^{*}\big{(}-\frac{ 1}{2}A_{0}-\frac{\sqrt{3}}{2}A_{1}\big{)}U_{A}\otimes\mathbbm{1}_{BB^{\prime}} \ket{\psi}\bra{\psi}\right]=\frac{1}{2}\Big{(}-\frac{1}{2}Z-\frac{\sqrt{3}}{2} X\Big{)}.\]
It is easy to verify that the functional \(\beta_{1}\) can be rewritten as:
\[\beta_{1}=\sum_{j}\text{Tr}(T_{j}G_{j}).\]
Each term can be upper-bounded using Holder's inequality:
\[\beta_{1}\leq\sum_{j}\lVert T_{j}\rVert_{\infty}\lVert G_{j}\rVert_{1}=\frac{ 1}{2}\sum_{j}\text{Tr}\,G_{j}=1,\]
where we used the fact that \(\lVert T_{j}\rVert_{\infty}=\frac{1}{2}\). It is easy to determine the conditions under which these inequalities hold as equalities: since for every \(T_{j}\) the positive part is one-dimensional, the \(G_{j}\) operator must be proportional to these rank-1 projectors. The completeness condition allows us to deduce the proportionality constants, and finally we conclude that:
\[G_{0} =\frac{1}{3}(\mathbbm{1}+Z)=M_{0},\] \[G_{1} =\frac{1}{3}(\mathbbm{1}-\frac{1}{2}Z+\frac{\sqrt{3}}{2}X)=M_{1},\] \[G_{2} =\frac{1}{3}(\mathbbm{1}-\frac{1}{2}Z-\frac{\sqrt{3}}{2}X)=M_{2}.\]
This allows us to fully compute the statistics, which means that it is indeed the unique correlation satisfying \(\beta_{0}=2\sqrt{2}\) and \(\beta_{1}=1\). Therefore, this point is an exposed point of the \(\beta_{0}=2\sqrt{2}\) face of the quantum set, and it must be (at least) extreme within the entire quantum set.
To prove (b), consider
\[H_{j}:=(\mathbbm{1}\otimes\sigma_{B^{\prime}}^{1/2})U_{B}^{*}F_{j}U_{B}( \mathbbm{1}\otimes\sigma_{B^{\prime}}^{1/2})\]
and note that \(G_{j}=\operatorname{Tr}_{B^{\prime}}H_{j}\). Since \(G_{j}\) are rank-1 PSD operators, we must have
\[H_{j}=G_{j}\otimes K_{j},\]
for some \(K_{j}\geq 0\) satisfying \(\operatorname{Tr}K_{j}=1\). Now, if \(\sigma_{B^{\prime}}\) is full-rank we can actually reconstruct the original measurement operators:
\[F_{j}=G_{j}\otimes(\sigma_{B^{\prime}}^{-1/2}K_{j}\sigma_{B^{\prime}}^{-1/2}).\]
Using the completeness relation \(\sum_{j}F_{j}=1\) and the fact that the \(G_{j}\) operators correspond to an extremal three-outcome measurement on a qubit, we find that the only solution is \(K_{j}=\sigma_{B^{\prime}}\). Then \(F_{j}=U_{B}^{*}(M_{j}\otimes\mathbbm{1}_{B^{\prime}})U_{B}\). So \(\tilde{S}\) is a full-rank self-tested.
On the other hand, from Theorem 4.1 (part (c)), \(\tilde{S}\) cannot be assumption-free self-tested since it is not \(0\)-projective. So we conclude that \(\tilde{S}\) gives an example where a pure full-rank self-test does not imply a pure self-test.
**Corollary 6.2**.: _There exists a correlation that is a pure full-rank self-test, but not a pure self-test._
Interestingly, this result provides the first instance where a quantum correlation has no pure full-rank PVM realization.
### Separating pure PVM self-tests and pure self-tests
Here, we show that pure PVM self-tests do not necessarily imply pure self-tests with a non-full-rank canonical \(\tilde{S}\). Specifically, we will employ the correlation given in Proposition 6.1. Also, we will need the concept of _minimal_ Naimark dilation [1]. A Naimark dilation \(\{P_{i}\in\mathcal{B}(\mathcal{H}^{\prime})\}_{i=1}^{m}\) of POVM \(\{R_{i}\in\mathcal{B}(\mathcal{H})\}_{i=1}^{m}\) is minimal if and only if \(\mathcal{H}^{\prime}=\operatorname{span}\{P_{i}V\left|\psi\right\rangle:\left| \psi\right\rangle\in\mathcal{H},i\in[1,m]\}\). One important fact about minimal Naimark dilation is that it is unique up to unitary.
**Theorem 6.3** ([1], Theorem 2.22).: _Let \((\{P_{i}\}_{i=1}^{m},V)\), \((\{P_{i}^{\prime}\}_{i=1}^{m},V^{\prime})\) be two minimal Naimark dilations of \(\{R_{i}\}_{i=1}^{m}\). Then there exists unitary \(U\) such that \(V^{\prime}=UV\) and \(UP_{i}U^{*}=P_{i}^{\prime}\)._
We generalise the concept of minimal Naimark dilation in the context of nonlocal strategies.
**Definition 6.4**.: _Let \(\{R_{ij}\}_{j=1}^{m_{i}}\), \(1\leq i\leq n\) be a family of POVMs. A Naimark dilation \((\{P_{ij}\}_{j=1}^{m_{i}},V)\) of \(\{R_{ij}\}\) is minimal if, for at least one \(i_{0}\in[1,n]\), \((\{P_{i_{0}j}\}_{j},V)\) is a minimal Naimark dilation of \(\{R_{i_{0}j}\}_{j}\)._
_Let \(S=\left(\left|\psi\right\rangle,\{A_{sa}\},\{B_{tb}\}\right)\) be a pure strategy. A pure PVM strategy \(S^{\prime}=(V_{A}\otimes V_{B}\left|\psi\right\rangle,\{P_{sa}\},\{Q_{tb}\})\) is a minimal Naimark dilation of \(S\), if \((\{P_{sa}\},V_{A})\) is a minimal Naimark dilation of \(\{A_{sa}\}\), and \((\{Q_{tb}\},V_{B})\) is a minimal Naimark dilation of \(\{B_{tb}\}\)._
Minimal Naimark dilation of nonlocal strategies always exists, but is not unique (up to local unitary) in general, since those PVM which are non-minimal could be very different outside the support of the state. Nevertheless, we can show that in a special case, the minimal Naimark dilations of \(S\) are equivalent up to local dilatio
**Lemma 6.5**.: _Let \(\{R_{ij}\}\) be a family of POVMs on \(\mathcal{H}\) with at most one non-projective measurement. Then for any two minimal Naimark dilations \((\{P_{ij}\},V)\), \((\{P^{\prime}_{ij}\},V^{\prime})\), there exist unitary \(U\) such that_
\[UV=V^{\prime}\] \[UP_{ij}V\left|\psi\right\rangle=P_{ij}V^{\prime}\left|\psi\right\rangle,\forall\left|\psi\right\rangle\in\mathcal{H}.\]
Proof.: The case where all \(\{R_{ij}\}\) are projections is trivial, because \(\{R_{ij}\}\) is the minimal Naimark dilation of itself. Without loss of generality, we assume \(\{R_{1j}\}_{j}\) to be the non-projective measurement. By definition, \(\{P_{1j}\}\) and \(\{P^{\prime}_{1j}\}\) are two minimal Naimark dilations of \(\{R_{1j}\}\). So, by Theorem 6.3 there exist unitary \(U\) such that \(UV=V^{\prime}\), and \(UP_{1j}U^{*}=P^{\prime}_{1j}\). Also note that \(R^{2}_{ij}=R_{ij}\) for all \(i\neq 1\), so
\[\left\|[VV^{*},P_{ij}]V\left|\psi\right\rangle\right\|^{2}\] \[= \left\langle\psi|V^{*}P_{ij}V|\psi\right\rangle-\left\langle\psi |V^{*}P_{ij}VV^{*}P_{ij}V|\psi\right\rangle\] \[= \left\langle\psi|(R_{ij}-R^{2}_{ij})|\psi\right\rangle=0.\]
So \(P_{ij}V\left|\psi\right\rangle=P_{ij}VV^{*}V\left|\psi\right\rangle=VV^{*}P_{ ij}V\left|\psi\right\rangle=VR_{ij}\left|\psi\right\rangle\). Similarly, \(P^{\prime}_{ij}V^{\prime}\left|\psi\right\rangle=V^{\prime}R_{ij}\left|\psi\right\rangle\).
Then the following holds:
\[UP_{1j}V\left|\psi\right\rangle= UP_{1j}U^{*}UV\left|\psi\right\rangle=P^{\prime}_{1j}V^{\prime}\left|\psi\right\rangle\] \[UP_{ij}V\left|\psi\right\rangle= UVR_{ij}\left|\psi\right\rangle\] \[= V^{\prime}R_{ij}\left|\psi\right\rangle\] \[= P^{\prime}_{ij}V^{\prime}\left|\psi\right\rangle,\ \forall i\neq 1,\]
as required.
For the case of single POVM, any Naimark dilation of \(\{R_{i}\}\) is a Naimark dilation of some minimal Naimark dilation of \(\{R_{i}\}\). It is not true in the case of multiple POVMs or for non-local strategies. Nevertheless, we prove the following:
**Lemma 6.6**.: _Let \(\{R_{ij}\in\mathcal{B}(\mathcal{H})\}_{j=1}^{m_{i}}\), \(1\leq i\leq n\), be a family of POVMs with at most one non-projective measurement, and let \((\{P_{ij}\in\mathcal{B}(\mathcal{H}^{\prime})\},V)\) be a Naimark dilation of \(\{R_{ij}\}\). Then there exists a minimal Naimark dilation \((\{P^{\min}_{ij}\in\mathcal{B}(\mathcal{H}^{\min})\},V^{\min})\) of \(\{R_{ij}\}\) and an isometry \(V^{\prime}:\mathcal{H}^{\min}\rightarrow\mathcal{H}^{\prime}\) such that_
\[V^{\prime}V^{\min}=V,\] \[V^{\prime}P^{\min}_{ij}V^{\min}\left|\psi\right\rangle=P_{ij}V \left|\psi\right\rangle,\forall\left|\psi\right\rangle\in\mathcal{H}.\]
Proof.: The case where all \(\{R_{ij}\}\) are projections is trivial. We assume \(\{R_{1j}\}_{j}\) to be the non-projective measurement. Consider the subspace
\[\mathcal{H}^{\min}:=\bigoplus_{j\in[1,m_{1}]}\mathcal{H}^{\min}_{j}\ \text{of}\ \mathcal{H}^{\prime}\,\ \text{where}\ \mathcal{H}^{\min}_{j}:=\text{span}\{P_{1j}V\left|\psi\right\rangle:\left|\psi \right\rangle\in\mathcal{H}\}.\]
Here \(\bigoplus\) refers to the internal direct sum. It is clear that \(V\mathcal{H}\subseteq\mathcal{H}^{\min}\subseteq\mathcal{H}^{\prime}\). Let \(V^{\prime\min}\) be the canonical embedding from \(V\mathcal{H}\) to \(\mathcal{H}^{\min}\), and \(V^{\prime}\) be the canonical embedding from \(\mathcal{H}^{\min}\) to \(\mathcal{H}^{\prime}\). Let \(U\) be the unitary from \(\mathcal{H}\) to \(V\mathcal{H}\). Let \(V^{\min}:=V^{\prime\min}U\).
We construct
\[P_{1j}^{\min}:=(V^{\prime})^{*}P_{1j}V^{\prime},\]
\[P_{i1}^{\min}:=V^{\min}R_{i1}(V^{\min})^{*}+(I-V^{\min}(V^{\min})^{*}),i\neq 1\]
\[P_{ij}^{\min}:=V^{\min}R_{ij}(V^{\min})^{*},i\neq 1,j\neq 1.\]
It is clear that \(P_{ij}^{\min}\) are projections for \(i\neq 1\). For \(P_{1j}^{\min}\), note that \(\mathcal{H}_{j}^{\min}\subseteq\mathrm{Range}(P_{1j})\), so \(P_{1j}\) commutes with \((V^{\prime})^{*}V^{\prime}\). Then \((P_{1j}^{\min})^{2}=P_{1j}^{\min}\). Also note that \(\mathcal{H}^{\min}=\mathrm{span}\{P_{1j}V\left|\psi\right\rangle:\left|\psi \right\rangle\in\mathcal{H},j\in[1,m_{1}]\}\), so \(\{P_{ij}^{\min}\}\) is a minimal Naimark dilation of \(\{R_{ij}\}\). The following holds:
\[V^{\prime}P_{1j}^{\min}V^{\min}\left|\psi\right\rangle= V^{\prime}(V^{\prime})^{*}P_{1j}V^{\prime}V^{\min}\left|\psi\right\rangle\] \[= P_{1j}V^{\prime}(V^{\prime})^{*}V^{\prime}V^{\min}\left|\psi \right\rangle=P_{1j}V\left|\psi\right\rangle,\] \[V^{\prime}P_{ij}^{\min}V^{\min}\left|\psi\right\rangle= V^{\prime}V^{\min}R_{ij}\left|\psi\right\rangle=VR_{ij}V^{*}V\left| \psi\right\rangle=P_{ij}V\left|\psi\right\rangle,\forall i\neq 1.\]
So we conclude that \((\{P_{ij}^{\min}\},V^{\min})\) satisfies the required property.
Applying Lemma 6.5 and 6.6 in the context of non-local strategies, we have the following:
**Proposition 6.7**.: _Let \(\tilde{S}\) be a pure full-rank strategy with at most one non-projective measurement on each side. Then any Naimark dilations of \(\tilde{S}\) are local-dilations of each other._
Proof.: Consider two Naimark dilations \(S_{1}\) and \(S_{2}\) of \(\tilde{S}\). By Lemma 6.6, there exists minimal Naimark dilations \(\tilde{S}_{1}^{\min}\) and \(\tilde{S}_{2}^{\min}\) of \(\tilde{S}\) such that \(\tilde{S}_{1}^{\min}\hookrightarrow S_{1}\), \(\tilde{S}_{2}^{\min}\hookrightarrow S_{2}\). Then from Proposition 3.1
\[S_{1}\hookrightarrow\tilde{S}_{1}^{\min},S_{2}\hookrightarrow\tilde{S}_{2}^{ \min}.\]
Also, from Lemma 6.5 we know that \(\tilde{S}_{1}^{\min}\) and \(\tilde{S}_{2}^{\min}\) are local dilations of each other. So we conclude that \(S_{1}\hookrightarrow S_{2}\) and \(S_{2}\hookrightarrow S_{1}\).
**Theorem 6.8**.: _Let \(\tilde{S}\) be a pure full-rank strategy with at most one non-projective measurement on each side. Then if \(p\) (or \(G\)) full-rank self-tests \(\tilde{S}\), \(p\) (or \(G\)) also PVM self-tests any Naimark dilation of \(\tilde{S}\)._
Proof.: Consider a pure PVM strategy \(S_{\mathrm{PVM}}\) that generates the same correlation as \(\tilde{S}=(\left|\tilde{\psi}\right\rangle,\left\{\tilde{A}_{sa}\right\},\{ \tilde{B}_{tb}\})\). From full-rank self-test, the restriction of \(S_{\mathrm{PVM}}\) is equivalent to \(\tilde{S}\) attached with some auxiliary state up to local unitary. In other words, \(S_{\mathrm{PVM}}\) is a Naimark dilation of \(\tilde{S}\otimes\left|\mathrm{aux}\right\rangle=(\left|\tilde{\psi}\right\rangle \left|\mathrm{aux}\right\rangle,\left\{\tilde{A}_{sa}\otimes\mathbb{1}_{ \mathrm{aux},A}\right\},\{\tilde{B}_{tb}\otimes\mathbb{1}_{\mathrm{aux},B}\})\). Note that \(\left|\tilde{\psi}\right\rangle\left|\mathrm{aux}\right\rangle\) is also full-rank, then from Proposition 6.7 and the transitivity of local dilation,
\[S_{\mathrm{PVM}}\hookrightarrow\tilde{S}_{\mathrm{Naimark}}\otimes\left| \mathrm{aux}\right\rangle\hookrightarrow\tilde{S}_{\mathrm{Naimark}}\]
for any Naimark dilation \(\tilde{S}_{\mathrm{Naimark}}\) of \(\tilde{S}\).
**Corollary 6.9**.: _Let \(\tilde{p}\) be the correlation generated by pure non-projective strategy \(\tilde{S}=(\left|\Phi^{+}\right\rangle,\left\{\mathcal{X},\mathcal{Z}\right\},\left\{\mathcal{H},\mathcal{G},\mathcal{M}\right\})\). Then \(\tilde{p}\) is a PVM self-test for any Naimark dilation of \(\tilde{S}\), but not a pure self-test._
Now we present a minimal Naimark dilation for \(\tilde{S}\). Since the measurements for Alice are projective, they are minimal themselves. For Bob, let \(V\) be the canonical embedding \(\mathbb{C}^{2}\rightarrow\mathbb{C}^{3}\) (that is, in the computational basis \(V=\mathbb{1}_{3\times 2}\)). Then for \(\mathcal{M}\), let rank-1 projections \(M^{\prime}_{i}=|e_{i}\rangle\langle e_{i}|\) for \(i=0,1,2\), where
\[|e_{0}\rangle =\frac{1}{\sqrt{3}}\left(\begin{array}{c}\sqrt{2}\\ 0\\ 1\end{array}\right), \tag{37}\] \[|e_{1}\rangle =\frac{1}{\sqrt{6}}\left(\begin{array}{c}-1\\ -\sqrt{3}\\ \sqrt{2}\end{array}\right),\] (38) \[|e_{2}\rangle =\frac{1}{\sqrt{6}}\left(\begin{array}{c}-1\\ \sqrt{3}\\ \sqrt{2}\end{array}\right). \tag{39}\]
And let \(\mathcal{M}^{\prime}=\{M^{\prime}_{0},M^{\prime}_{1},M^{\prime}_{2}\}\). According to [1], \((\mathcal{M}^{\prime},V)\) is a minimal Naimark dilation of \(\mathcal{M}\). For \(\mathcal{G}\) and \(\mathcal{H}\), since they are projective themselves, we just need to ensure their projectiveness outside the range of \(V\) when we extend them. To do this, we let \(H_{\pm}\), \(G_{\pm}\) be the \(\pm 1\)-eigenspace projection of \(H,G\), respectively. Define \(H^{\prime}_{+}=VH_{+}V^{*}+\mathbb{1}-VV^{*}\), \(H^{\prime}_{-}=VH_{-}V^{*}\) (that is, \(H^{\prime}_{+}=H_{+}\oplus 1,H^{\prime}_{-}=H_{-}\oplus 0\)), and \(G^{\prime}_{+}=VG_{+}V^{*}+\mathbb{1}-VV^{*}\), \(G^{\prime}_{-}=VG_{-}V^{*}\). Then \((\mathcal{H}^{\prime}=\{H^{\prime}_{+},H^{\prime}_{-}\},V)\), \((\mathcal{G}^{\prime}=\{G^{\prime}_{+},G^{\prime}_{-}\},V)\) are Naimark dilations if \(\mathcal{H}\) and \(\mathcal{G}\), respectively. So we conclude that \(\tilde{S}_{PVM}=\left(\mathbb{1}\otimes V\left|\Phi^{+}\right\rangle,\{ \mathcal{X},\mathcal{Z}\},\{\mathcal{H}^{\prime},\mathcal{G}^{\prime}, \mathcal{M}^{\prime}\}\right)\) is a minimal Naimark dilation of \(\tilde{S}\).
### Separating (standard) self-tests and abstract state self-tests
We show that Corollary 6.9 also answers the open question raised in [10], separating abstract state self-testing defined therein and (standard) self-testing in a case where there is no full-rank strategy in a certain class of strategies (namely, the class of all pure PVM strategies). Recall that, in an abstract state self-test the higher order moments are the same for all strategy inducing the correlation \(\tilde{p}\).
**Definition 6.10** ([10]).: _Let \(t\subseteq\left\{\text{pure},\text{full-rank},\text{PVM}\right\}\). A correlation \(\tilde{p}\) is an abstract state \(t\) self-test if for every \(k,l\geq 1\), \(a_{1},\ldots,a_{k}\in\mathcal{A}\),\(s_{1},\ldots,s_{k}\in\mathcal{S}\),\(b_{1},\ldots,b_{l}\in\mathcal{B}\),\(t_{1},\ldots,t_{l}\in\mathcal{T}\), the value_
\[\langle\psi|A_{s_{1}a_{1}}\cdots A_{s_{k}a_{k}}\otimes B_{t_{1}b_{1}}\cdots B_ {t_{l}b_{l}}|\psi\rangle\]
_is the same across all \(t\) strategies inducing the correlation \(\tilde{p}\)._
**Proposition 6.11**.: _Let \(\tilde{p}\) be the correlation generated by the pure non-projective strategy \(\tilde{S}=\left(\left|\Phi^{+}\right\rangle,\{\mathcal{X},\mathcal{Z}\},\{ \mathcal{H},\mathcal{G},\mathcal{M}\}\right)\). Then \(\tilde{p}\) is not an abstract state PVM self-test._
Proof.: According to the definition of abstract state self-testing, it suffices to find two pure PVM strategies for \(p\) which give different higher-order moments.
Define \(S^{1}_{PVM}=\left(\mathbb{1}\otimes V\left|\Phi^{+}\right\rangle,\{\mathcal{X},\mathcal{Z}\},\{\mathcal{H}^{\prime},\mathcal{G}^{\prime},\mathcal{M}^{ \prime}\}\right)\) as in the previous subsection. Now consider another dilation \(\mathcal{H}^{\prime\prime}\) of \(\mathcal{H}\), namely, \(H^{\prime\prime}_{+}=VH_{+}V^{*}\), \(H^{\prime\prime}_{-}=VH_{-}V^{*}+\mathbb{1}-VV^{*}\) (that is, \(H^{\prime\prime}_{+}=H_{+}\oplus 0,H^{\prime\prime}_{-}=H_{-}\oplus 1\)). Let \(S^{2}_{PVM}=\left(\mathbb{1}\otimes V\left|\Phi^{+}\right\rangle,\{\mathcal{X},\mathcal{Z}\},\{\mathcal{H}^{\prime\prime},\mathcal{G}^{\prime},\mathcal{M}^{ \prime}\}\right)\).
Then direct calculation shows that
\[\langle\Phi^{+}|(1\otimes V^{*})(1\otimes(M_{0}^{\prime}H_{+}^{\prime}M_{0}^{ \prime}))(1\otimes V)|\Phi^{+}\rangle=\frac{4-\sqrt{2}}{18},\]
\[\langle\Phi^{+}|(1\otimes V^{*})(1\otimes(M_{0}^{\prime}H_{+}^{\prime\prime}M_{ 0}^{\prime}))(1\otimes V)|\Phi^{+}\rangle=\frac{2-\sqrt{2}}{18}.\]
So \(S^{1}_{\text{PVM}}\) and \(S^{2}_{\text{PVM}}\) are of different higher order moments.
Note that by to [13, Theorem 3.5], abstract state self-testing is equivalent to (standard) self-testing under the condition that \(\tilde{p}\) is extreme and there exists a full-rank \(t\) strategies inducing the correlation \(\tilde{p}\). Therefore, our results indicates that the condition of [13, Theorem 3.5] is crucial: there exists extreme correlation \(\tilde{p}\) such that, the class of PVM strategies admits no full-rank strategy for \(\tilde{p}\), where \(\tilde{p}\) is a (standard) PVM-self-test but not an abstract state PVM-self-test.
## 7 Acknowledgements
This work is funded by the European Union under the Grant Agreement No 101078107, QInteract and Grant Agreement No 101017733, VERIQTAS as well as VILLUM FONDEN via the QMATH Centre of Excellence (Grant No 10059) and Villum Young Investigator grant (No 37532). S. S. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2092 CASA - 390781972. He furthermore has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101030346. P. B. acknowledges the support from CNPq. J. K. is supported by the HOMING grant from the Foundation for Polish Science. We thank Jurij Volcic for valuable discussion on Naimark dilation.
|
2305.07967 | Structured Low-Rank Tensor Learning | We consider the problem of learning low-rank tensors from partial
observations with structural constraints, and propose a novel factorization of
such tensors, which leads to a simpler optimization problem. The resulting
problem is an optimization problem on manifolds. We develop first-order and
second-order Riemannian optimization algorithms to solve it. The duality gap
for the resulting problem is derived, and we experimentally verify the
correctness of the proposed algorithm. We demonstrate the algorithm on
nonnegative constraints and Hankel constraints. | Jayadev Naram, Tanmay Kumar Sinha, Pawan Kumar | 2023-05-13T17:04:54Z | http://arxiv.org/abs/2305.07967v1 | # Structured Low-Rank Tensor Learning
###### Abstract
We consider the problem of learning low-rank tensors from partial observations with structural constraints, and propose a novel factorization of such tensors, which leads to a simpler optimization problem. The resulting problem is an optimization problem on manifolds. We develop first-order and second-order Riemannian optimization algorithms to solve it. The duality gap for the resulting problem is derived, and we experimentally verify the correctness of the proposed algorithm. We demonstrate the algorithm on nonnegative constraints and Hankel constraints.
## 1 Introduction
With the rise in the availability of multidimensional data such as colour images, video sequences, and hyperspectral images, tensor-based techniques have started gaining attention as traditional matrix-based methods cannot exploit the underlying structure present in higher-dimensional data. Many recent applications of tensor reconstruction techniques also enforce structural constraints such as nonnegativity ([31], [41]) or a Hankel structure ([38], [39]).
We propose a framework for dealing with tensor completion problems with general structural constraints. In particular, we consider the structured low-rank tensor learning problem of the following form:
\[\min_{\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}} C\|\mathcal{W}_{\Omega}-\mathcal{Y}_{\Omega}\|^{2}+R(\mathcal{W})\] (1) subject to \[A(\mathcal{W})\geq 0,\]
where \(\mathcal{Y}_{\Omega}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\) is a partially observed tensor for indices given in the set \(\Omega\), \((\mathcal{W}_{\Omega})_{i_{1},\ldots,i_{K}}=w_{i_{1},\ldots,i_{K}}\) for \((i_{1},\ldots,i_{K})\in\Omega\), \(C>0\) denotes the cost parameter, \(R:\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\rightarrow\mathbb{R}\) is a regularizer that induces low-rank constraint on the tensor, and \(A:\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\rightarrow\mathbb{R}^{n}\) is a linear map that induces structural constraint on the tensor.
Following [26], we learn the tensor as a sum of \(K\) tensors, \(\mathcal{W}=\mathcal{W}^{(1)}+\cdots+\mathcal{W}^{(K)}\), and use the following low-rank regularizer:
\[R(\mathcal{W})=\sum_{k=1}^{K}\frac{1}{\lambda_{k}}\left\|W_{k}^{(k)}\right\|_{* }^{2}. \tag{2}\]
The main contributions of the paper are given below.
* We propose a novel factorization for modeling structured low-rank tensors through a partial dual problem of (1).
* We develop first-order and second-order Riemannian optimization algorithms that exploit proposed factorization's inherent geometric structure.
* We compute the expression for the duality gap and verify the correctness of the proposed algorithm through experiments.
* We apply the proposed algorithm to the nonnegative constraint and the Hankel constraint.
## 2 Notation
We follow [2] for our tensor notation. We present a few important notions here. Tensors are denoted by uppercase calligraphic letters, e.g., \(\mathcal{W}\). Matrices are denoted by uppercase letters, e.g., \(X\). For a square matrix \(X\in\mathbb{R}^{n\times n}\), we denote its trace by \(tr(X)\). \(\mathbb{R}_{+}\) denotes the interval \([0,\infty)\). The inner product of two tensors is defined as follows:
\[\langle\mathcal{W},\mathcal{U}\rangle=\sum_{i_{1}=1}^{n_{1}}\sum_{i_{2}=1}^{n_{ 2}}\cdots\sum_{i_{K}=1}^{n_{K}}w_{i_{1},\ldots,i_{K}}u_{i_{1},\ldots,i_{K}}.\]
A mode-\(k\) fiber of a tensor \(\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\), denoted by \(w_{i_{1},\ldots,i_{k-1},\cdot,i_{k+1},\ldots,i_{K}}\), is a vector obtained by fixing all but \(k\)-th index of \(W\). The mode-\(k\) unfolding of a tensor \(\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\) is a matrix \(unfold_{k}(\mathcal{W})=W_{k}\in\mathbb{R}^{n_{k}\times n_{1}\cdots n_{k-1}n_ {k+1}\cdots n_{K}}\) formed by arranging the mode-\(k\) fibers to be the columns of the resulting matrix. Similarly, we can define the \(k\)-mode folding operation (\(fold_{k}\)) as the inverse of the unfolding operation - it converts a given matrix to a tensor of a suitable order. The \(k\)-mode product of a tensor \(\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\) with a matrix \(U\in\mathbb{R}^{m\times n_{k}}\) is defined as follows:
\[(\mathcal{W}\times_{k}U)_{i_{1},\ldots,i_{k-1},j,i_{k+1},\ldots,i_{K}}=\sum_{ i_{k}}^{n_{k}}w_{i_{1},\ldots,i_{K}}u_{j,i_{k}},\ \text{i.e.,}\ \mathcal{X}=\mathcal{W}\times_{k}U\iff X_{k}=UW_{k}.\]
## 3 Related Work
There are a number of well-known methods for tensor completion using different approaches for enforcing the low-rank constraint. Many methods use a generalization of the matrix trace norm regularizer to the tensor case. Two well-known regularizers are the overlapped trace norm regularizer ([28], [33] and [32]) and latent trace norm regularizer ([36]). The overlapped trace norm regularizer uses the regularizer \(R(\mathcal{W})=\sum_{k=1}^{K}\|W_{k}\|_{*}\). In the latent trace norm regularizer model, the tensor is modelled as the sum of \(K\)-tensors, \(\mathcal{W}=\sum_{k=1}^{K}\mathcal{W}^{(k)}\in\mathbb{R}^{n_{1}\times\cdots \times n_{K}}\), and the regularizer is defined as
\[R(\mathcal{W})=\inf_{\begin{subarray}{c}\mathcal{W}^{(k)},\\ k\in\{1,\ldots,K\}\end{subarray}}\sum_{k=1}^{K}\left\|W^{(k)}_{k}\right\|_{*}.\]
Other methods for tensor completion use tensor decompositions like the Tucker decomposition and CP decomposition to learn the tensors. [29] and [30] formulate the tensor completion problem as an optimization problem on the Riemannian manifold of fixed multi-linear rank tensors.
A general formulation for low-rank matrix completion problems with structural constraints was developed in [34]. It provided a unified framework for dealing with general linear inequality and equality constraints.
Recently, tensor completion problems with structural constraints have started attracting attention. [31] and [41] develop nonnegative tensor completion algorithms for image and video reconstruction tasks. [35] falls under the current framework which considers the special case of nonnegative constraints in detail with vast experimentation. [38] models image completion with missing slices as a higher-order Hankel tensor completion problem. [40] uses low-rank Hankel tensor model for estimating traffic states from partial observations.
## 4 Dual Framework
Following [26], we construct a partial dual problem to the primal problem (1), incorporating the structural constraint into the formulation. We do this using the approach outlined in [34]. We generalize Theorem 1 of [26] to the case of tensors with structural constraints. The following lemma [27] is used in the development of the dual formulation.
**Lemma 1**: _For a matrix \(X\in\mathbb{R}^{d\times T}\), the nuclear norm of \(X\) satisfies the following relation:_
\[\|X\|_{*}^{2}=\min_{\Theta\in P^{d},\,\text{range}(X)\subseteq\text{range}( \Theta)}\langle\Theta^{\dagger}X,X\rangle,\]
_where \(P^{d}=\{S\in\mathbb{R}^{d\times d}:S\succeq 0,\text{tr}(S)=1\}\), \(\text{range}(\Theta)=\{\Theta z:z\in\mathbb{R}^{d}\}\), \(\Theta^{\dagger}\) denotes the pseudo-inverse of \(\Theta\). For a given \(X\), the minimizer is \(\bar{\Theta}=\sqrt{XX^{T}}/\text{tr}(\sqrt{XX^{T}})\)._
Using the above lemma, we can write (1) as
\[\begin{array}{ll}\min_{\begin{subarray}{c}\Theta_{k}\in P^{n_{k}},\\ k\in\{1,\ldots,K\}\end{subarray}}\min_{\begin{subarray}{c}\mathcal{W}^{(k)},\\ k\in\{1,\ldots,K\}\end{subarray}}&C\left\|\bigg{(}\sum_{k=1}^{K}\mathcal{W}^{( k)}\bigg{)}_{\Omega}-\mathcal{Y}_{\Omega}\right\|^{2}+\sum_{k=1}^{K}\frac{1}{2 \lambda_{k}}\langle\Theta_{k}^{\dagger}W_{k}^{(k)},W_{k}^{(k)}\rangle\\ \text{subject to}&A(\mathcal{W})\geq 0.\end{array} \tag{3}\]
**Theorem 2**: _The following minimax problem is equivalent to the problem (3):_
\[\min_{\begin{subarray}{c}\Theta_{k}\in P^{n_{k}},\\ k\in\{1,\ldots,K\}\end{subarray}}\max_{\begin{subarray}{c}\mathcal{Z}\in \mathcal{C},\,s\in\mathbb{R}^{n}\\ k\in\{1,\ldots,K\}\end{subarray}}\langle\mathcal{Z},\mathcal{Y}_{\Omega} \rangle-\frac{1}{4C}\|\mathcal{Z}\|^{2}-\sum_{k=1}^{K}\frac{\lambda_{k}}{2} \langle Z_{k}+(A^{*}(s))_{k},\Theta_{k}[Z_{k}+(A^{*}(s))_{k}]\rangle, \tag{4}\]
_where \(\mathcal{C}=\{\mathcal{Z}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}:\, \mathcal{Z}=\mathcal{Z}_{\Omega}\}\). The optimal solution \(\bar{\mathcal{W}}\) of (3) is related to the optimal solution \(\{\bar{\Theta}_{1},\ldots,\bar{\Theta}_{K},\bar{\mathcal{Z}},\bar{s}\}\) of (4) by_
\[\bar{\mathcal{W}}=\sum_{k=1}^{K}\lambda_{k}(\bar{\mathcal{Z}}+A^{*}(\bar{s})) \times_{k}\bar{\Theta}_{k}. \tag{5}\]
For proof, see Appendix B.
From (5) it can be seen that the structured and low-rank constraints on \(\mathcal{W}\) can be decomposed into structured constraints on \(s\) and low-rank constraints on \(\Theta_{k}\), which leads to a simpler optimization method.
## 5 Proposed Algorithm
Due to the low-rank constraint on \(\mathcal{W}\), each \(\Theta_{k}\) has a low rank. Therefore, a fixed-rank parameterized problem can be given by writing \(\Theta_{k}=U_{k}U_{k}^{T}\). The problem (4) can then be written as follows.
\[\min_{U\in S_{r_{1}}^{n_{1}}\times\cdots\times S_{r_{K}}^{n_{K}}}g(U), \tag{6}\]
where \(U=(U_{1},\ldots,U_{K})\), \(S_{r}^{n}=\{U\in\mathbb{R}^{n\times r}:\,\|U\|_{F}=1\}\), and
\[g(U)=\max_{\mathcal{Z}\in\mathcal{C},s\in\mathbb{R}^{n}}\left\langle\mathcal{Z}, \mathcal{Y}_{\Omega}\right\rangle-\frac{1}{4C}\|\mathcal{Z}\|^{2}-\sum_{k=1}^{ K}\frac{\lambda_{k}}{2}\left\|U_{k}^{T}(Z_{k}+(A^{*}(s))_{k})\right\|^{2}. \tag{7}\]
The optimization problem in (7) is strongly convex for a given \(U\), while problem (6) is a non-convex problem in \(U\).
The set \(S_{r_{1}}^{n_{1}}\times\cdots\times S_{r_{K}}^{n_{K}}\) is a Riemannian manifold ([25], [26]), and thus problem (6) is an optimization problem on a manifold. We solve it using either Riemannian conjugate gradient or Riemannian Trust-Region algorithm, depending on the structural constraint. The proposed algorithm is shown in Algo. 1. For more details on optimization on general manifolds, we refer the reader to [1] and [3].
The Riemannian optimization algorithm in Algo. 1 requires computing the Euclidean gradient and its directional derivative, which are given in the following lemma.
**Lemma 3**: _Let \(\{\hat{\mathcal{Z}},\hat{s}\}\) be the maximizer of the convex problem (7) at \(U\). Then, the Euclidean gradient \(\nabla g(U)\) is given by_
\[\nabla g(U)=-(\lambda_{1}P_{1},\ldots,\lambda_{K}P_{K}),\]
_where \(P_{k}=(\hat{Z}_{k}+(A^{*}(\hat{s}))_{k})(\hat{Z}_{k}+(A^{*}(\hat{s}))_{k})^{T} U_{k}\). Let \(V\in\mathbb{R}^{n_{1}\times r_{1}}\times\cdots\times\mathbb{R}^{n_{K}\times r_{K}}\) and \(\dot{Z}_{k},\dot{s}\) denote the directional derivatives of \(Z_{k}\) and \(s\) along \(V\) respectively. Then, the directional derivative of \(\nabla g\) at \(U\) along \(V\) is_
\[D\nabla g(U)[V]=-(\lambda_{1}Q_{1},\ldots,\lambda_{K}Q_{K}),\]
_where \(Q_{k}=(\hat{Z}_{k}+(A^{*}(\hat{s}))_{k})(\hat{Z}_{k}+(A^{*}(\hat{s}))_{k})^{T} V_{k}+2\,\text{sym}((\hat{Z}_{k}+(A^{*}(\hat{s}))_{k})(\hat{Z}_{k}+(A^{*}( \hat{s}))_{k})^{T}))U_{k}\) and \(\text{sym}(X)=(X+X^{T})/2\)._
**Remark 4**: _It can be seen that computing \(D\nabla g(U)[V]\) requires the terms \(\hat{\mathcal{Z}}\) and \(\dot{s}\). These terms can be computed by applying directional derivative along \(V\) on the first-order optimality conditions of the problem (7) at \(\{\hat{\mathcal{Z}},\hat{s}\}\)._
```
Data:\(\mathcal{Y}_{\Omega}\), rank=\((r_{1},\ldots,r_{K})\), \(\varepsilon\), \((\lambda_{1},\ldots,\lambda_{K})\) Result:\(\hat{\mathcal{W}}=\sum_{k=1}^{K}\lambda_{k}(\hat{\mathcal{Z}}+A^{*}(\hat{s})) \times_{k}(U_{k}U_{k}^{T})\) for\(t=1,2,\cdots\)do Check Termination: if \(\|\nabla g(U^{(t)})\|\leq\varepsilon\) then break; Solve for \(\hat{\mathcal{Z}}^{(t)}\) and \(\hat{s}^{(t)}\) in (7); Compute cost \(g(U^{(t)})\), gradient \(\nabla g(U^{(t)})\) and the directional derivative of \(\nabla g(U^{(t)})\); Update \(U\): \(U^{(t+1)}\) = RiemannianCG-update(\(U^{(t)}\)) or \(U^{(t+1)}\) = RiemannianTR-update(\(U^{(t)}\)); end
```
**Algorithm 1**Proposed Algorithm for Structured Low-Rank Tensor Completion
We have the following result regarding the optimality of the proposed algorithm. It is a generalization of Theorem 3 in [34].
**Theorem 5**: _Let \(\hat{U}=(U_{1},\ldots,U_{K})\) be a feasible solution of problem (6) and \(\left\{\hat{\mathcal{Z}},\hat{s}\right\}\) be the maximizer of the convex problem (7) at \(U=\hat{U}\). Let \(\hat{\mathcal{A}}=A^{*}(\hat{s})\), and \(\sigma_{\hat{k}}\) be the maximum singular value of \(\hat{Z}_{k}+\hat{A}_{k}\). Let \(\hat{\Theta}=(\hat{\Theta}_{1},\ldots,\hat{\Theta}_{K})\), where \(\hat{\Theta}_{k}=\hat{U}_{k}\hat{U}_{k}^{T}\). Then, \(\left\{\hat{\Theta},\hat{\mathcal{Z}},\hat{s}\right\}\) is a candidate solution for the partial dual problem (4) and we have the following expression for the duality gap \(\Delta\):_
\[\Delta=\sum_{k=1}^{K}\frac{\lambda_{k}}{2}\left(\sigma_{k}^{2}-\|\hat{U}_{k}^{ T}(\hat{Z}_{k}+\hat{A}_{k})\|^{2}\right). \tag{8}\]
For proof see Appendix C.
## 6 Applications
We consider several popular applications for our proposed method. See [26] for the case where there are no structure constraints.
### Nonnegative Tensor completion
The nonnegative tensor completion problem is
\[\begin{split}\min_{\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots \times n_{K}}}&\qquad C\|\mathcal{W}_{\Omega}-\mathcal{Y}_{ \Omega}\|^{2}+\sum_{k=1}^{K}\frac{1}{\lambda_{k}}\left\|W_{k}^{(k)}\right\|_{ *}^{2}\\ \text{subject to}&\qquad\mathcal{W}\geq 0.\end{split} \tag{9}\]
The fixed-rank dual problem of (9) is
\[\min_{U\in S_{r_{1}}^{n_{1}\times\cdots\times S_{r_{K}}^{n_{K}}}}g(U), \tag{10}\]
\(g(U)\) is given by
\[g(U)=\max_{\mathcal{S}\in\mathbb{R}_{+}^{n_{1}\times\cdots\times n_{K}}}\biggl{(} \max_{\mathcal{Z}\in\mathcal{C}}\left\langle\mathcal{Z},\mathcal{Y}_{\Omega} \right\rangle-\frac{1}{4C}\|\mathcal{Z}\|^{2}-\sum_{k=1}^{K}\frac{\lambda_{k} }{2}\left\|U_{k}^{T}Z_{k}+U_{k}^{T}S_{k}\right\|^{2}\biggr{)}. \tag{11}\]
where \(U=(U_{1},\ldots,U_{K})\) and \(\mathbb{R}_{+}^{n_{1}\times\cdots\times n_{K}}\) is the set of all tensors of size \(n_{1}\times\cdots n_{K}\) with non-negative entries.
The problem (11) can be solved alternatively for \(\mathcal{Z}\) and \(\mathcal{S}\). Then the resulting problem in \(\mathcal{Z}\) is unconstrained least-squares problem, which can be solved using linear conjugate gradient algorithm (For various preconditioned CG approaches, see [4, 5, 6, 7, 8, 9, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]), and over \(\mathcal{S}\) it is non-negative least-squares problem which can be solved using [37]. The cost \(g(U)\) and the gradient \(\nabla g(U)\) can be easily computed following Lemma 3, hence, we employ Riemannian conjugate gradient to solve the outer optimization problem over \(U\). The per-iteration complexity of the proposed algorithm is \(O\biggl{(}T|\Omega|\sum_{k=1}^{K}r_{k}+\sum_{k=1}^{K}n_{k}r_{k}^{2}+\sum_{k=1 }^{K}r_{k}^{3}\biggr{)}\), where \(T\) is the sum of iterations of CG and NNLS solver for solving \(\mathcal{Z}\) and \(\mathcal{S}\) respectively.
### Hankel Tensor Completion
The primal problem is given by
\[\min_{\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}}C\|\mathcal{W}_{ \Omega}-\mathcal{Y}_{\Omega}\|^{2}+\sum_{k=1}^{K}\frac{1}{\lambda_{k}}\left\| \mathcal{H}_{2k}(\mathcal{W}^{(k)})\right\|_{*}^{2}, \tag{12}\]
where \(\mathcal{H}_{k}=\textit{unfold}_{k}\circ\mathcal{H}\) and \(\mathcal{H}\) is the \(k\)-th order Hankel transform such that for \(\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\), \(\mathcal{H}(\mathcal{W})\in\mathbb{R}^{\tau_{1}\times n_{1}-\tau_{1}+1\times \cdots\times\tau_{K}\times n_{K}-\tau_{K}+1}\) is a \(2K\) order tensor for some \(\tau=(\tau_{1},\ldots,\tau_{K}),\) which is the duplication parameter of the Hankel transform (see [38]). The fixed-rank dual problem of (12) is
\[\min_{U\in S_{r_{1}}^{n_{1}-\tau_{1}+1}\times\cdots\times S_{r_{K}}^{n_{K}- \tau_{K}+1}}g(U), \tag{13}\]
where \(U=(U_{1},\ldots,U_{K}),\) and
\[g(U)=\max_{\mathcal{Z}\in\mathcal{C},\mathcal{S}}\langle\mathcal{Z},\mathcal{Y }_{\Omega}\rangle-\frac{1}{4C}\|\mathcal{Z}\|^{2}-\sum_{k=1}^{K}\frac{\lambda _{k}}{2}\left\|U_{k}^{T}S_{k}\right\|^{2},\quad\text{subject to}\quad\mathcal{ H}^{*}(\mathcal{S})=\mathcal{Z}. \tag{14}\]
The problem (14) can be solved using linear conjugate gradient algorithm over \(\mathcal{Z}\) and \(\mathcal{S}\), and the equality constraint is ensured at each step by performing a projection step. The cost \(g(U)\), the gradient \(\nabla g(U),\) and the directional derivative \(D\nabla g(U)[V]\) can be easily computed following Lemma 3 and Remark 4 above. We employ Riemannian trust-region algorithm to solve (13). The per-iteration complexity of the proposed algorithm is \(O\bigg{(}TI|\Omega|\sum_{k=1}^{K}r_{k}+\sum_{k=1}^{K}n_{k}r_{k}^{2}+\sum_{k=1 }^{K}r_{k}^{3}\bigg{)}\), where \(T\) is the iterations of CG and \(I|\Omega|\) is the number of non-zero entries of \(\mathcal{S}\).
## 7 Toy Experiment
We consider a toy problem of size \(100\times 100\times 3\) with 10% train data for both Nonnegative and Hankel Tensor Completion problems. The ranks \((r_{1},r_{2},r_{3})\) are chosen as \((10,10,3)\) for both the experiments, and the regularization constants \(\lambda_{k}\)'s are chosen according to [26]. The maximum iterations for Riemannian optimization problem was set to 200. The duplication parameter \(\tau\) in the Hankel transform was set to \((10,10,1)\). The plots of the variation in gradient norm and relative duality gap with iterations for both problems is shown in Appendix A in Figures 1 and 2. We observe that in both the cases the plots of gradient norm, and the relative duality gap decreases rapidly with iterations.
|
2303.05226 | On thick subcategories of the category of projective presentations | We study thick subcategories of the category of 2-term complexes of
projective modules over an associative algebra. We show that those thick
subcategories that have enough injectives are in explicit bijection with 2-term
silting complexes and complete cotorsion pairs. We also provide a bijection
with left finite wide subcategories of the module category and prove that all
these maps are compatible with previously known correspondences. We discuss
possible applications to stability conditions. | Monica Garcia | 2023-03-09T13:03:49Z | http://arxiv.org/abs/2303.05226v3 | # On thick subcategories of the category of projective presentations
###### Abstract.
We study thick subcategories of the category of 2-term complexes of projective modules over an associative algebra. We show that those thick subcategories that have enough injectives are in explicit bijection with 2-term silting complexes and complete cotorsion pairs. We also provide a bijection with left finite wide subcategories of the module category and prove that all these maps are compatible with previously known correspondences. We discuss possible applications to stability conditions.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Extriangulated categories
* 2.2 Cotorsion pairs and \(\tau\)-tilting theory
* 3 Main results
* 3.1 Thick subcategories and cotorsion pairs
* 3.2 Linking thick and wide subcategories
* 4 Geometric interpretation
* 4.1 Geometric Invariant Theory
* 4.2 Determinantal invariants
* 4.3 Semistability in \(\operatorname{mod}\Lambda\)
* 4.4 Towards a notion of semistability in \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)
## 1. Introduction
In their seminal paper on \(\tau\)-tilting theory [1], T. Adachi, O. Iyama and I. Reiten studied the relationship between several classes of objects, namely, support \(\tau\)-tilting modules, 2-term silting complexes, and functorially finite torsion classes. Since then, driven by applications to cluster theory ([1, 1]) and stability conditions ([1, 2]), among others, many classes of objects have been added to this list. In the category \(\operatorname{mod}\Lambda\) of finitely generated modules over an associative algebra \(\Lambda\) (see Section 2 for our assumptions), this list includes
* support \(\tau\)-tilting objects ([1, 2, 3], \(\operatorname{DIR}^{+}17\)],...),
* the lattice of torsion pairs ([1, 2, 3],...),
* the poset of wide subcategories ([1, 2, 3],...),
to name a few. These classes of objects are known to be intimately related to each other, see for instance [10, 11, 12]. Another important category associated to an algebra \(\Lambda\) is the category of \(2\)-term complexes of finitely generated projective modules, which we will denote by \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). This category plays an essential role, for instance, in the categorification of g-vectors of cluster algebras ([1, 10]), and appears naturally as the extended co-heart of particular co-t-structures on the homotopy category \(\mathcal{K}^{b}(\operatorname{proj}\Lambda)\) ([10, 11]). Many of the objects in \(\operatorname{mod}\Lambda\) we have alluded to have a "mirror" analog in \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\), namely
* \(2\)-term silting complexes, as the analog of support \(\tau\)-tilting modules ([12, 13, 14],...);
* cotorsion pairs, as the analog of torsion pairs ([15, 16, 17],...).
Like their counterparts in \(\operatorname{mod}\Lambda\), they have been shown be in one-to-one correspondence, see [17, 18]. In this paper, we introduce to this last list the class of thick subcategories of \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) and claim that they form the "mirror" analog of wide subcategories in \(\operatorname{mod}\Lambda\). We explicitly relate them to cotorsion pairs and \(2\)-term silting complexes. Our work is motivated by the possible extension of the theory of stability conditions to \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) (see Section 4 for further discussion). We use as inspiration the description of certain wide subcategories of \(\operatorname{mod}\Lambda\) as subcategories of semistable modules for a particular collection of stability conditions ([15, 16, 17, 18]). Before stating our results, let us recall the explicit bijections between some of the classes of objects listed above.
**Theorem A**.: _[_1_, Theorem 0.5]__[_1_, Theorem 30]__[_1_, Theorem 1.2]__[_1_, Theorem 1.1]__Let \(\Lambda\) be a finite-dimensional algebra over a field. There are explicit bijections between the sets of_
1. _Isomorphism classes of basic support_ \(\tau\)_-tilting modules in_ \(\operatorname{mod}\Lambda\)_._
2. _Functorially finite torsion pairs in_ \(\operatorname{mod}\Lambda\)_._
3. _Left finite wide subcategories of_ \(\operatorname{mod}\Lambda\)_._
4. _Left finite semistable subcategories of_ \(\operatorname{mod}\Lambda\)_._
The bijection from (1) to (2) takes a support \(\tau\)-tilting module \((M,P)\) to the torsion pair \(\vartheta(M)=(\operatorname{Fac}(M),M^{\perp})\). Here, \(\operatorname{Fac}(M)=\{N\in\operatorname{mod}\Lambda\mid\exists\ M^{\prime} \to N\to 0\text{ s.e.s. with }M^{\prime}\in\operatorname{add}(M)\}={}^{\perp}(\tau M)\cap P^{\perp}\). The bijection from (2) to (3) is given by the map \(\alpha(\mathcal{T})=\{M\in\mathcal{T}\mid\forall(g:N\to M)\in\mathcal{T},\ \ker(g)\in \mathcal{T}\}\) for any torsion pair \((\mathcal{T},\mathcal{F})\). Finally, the bijection from (1) to (4) is obtained by proving that \(\alpha(\operatorname{Fac}(M))=M^{\perp}_{\rho}\cap\operatorname{Fac}(M)= \mathscr{W}_{g^{M_{\rho}-g^{P}}}\), where the later is the semistable subcategory associated to the _g-vector_ of \(M_{\rho}\) minus the g-vector of \(P\). Here, \(M_{\rho}\) is the basic module such that \(\operatorname{add}(M_{\rho})=\operatorname{add}(M_{1})\) for \(M_{1}\) satisfying that
\[\Lambda\to M_{0}\to M_{1}\to 0 \tag{1.1}\]
is a minimal left \(M\)-approximation of \(\Lambda\).
Both, support \(\tau\)-tilting modules and torsion pairs, turn out to have "mirror" analogs in the extriangulated category \(\mathcal{K}_{\Lambda}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\).
**Theorem B**.: _[_1_, Theorem 3.2]_ _Let \(\Lambda\) be a finite-dimensional algebra over a field. There exists an explicit bijection between_
1. _Isomorphism classes of basic support_ \(\tau\)_-tilting modules in_ \(\operatorname{mod}\Lambda\)_._
2. _Isomorphism classes of basic silting objects in_ \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)
This bijection takes any \(2\)-term silting object \(U\) and sends it to \(H^{0}(U)\). Its inverse sends a support \(\tau\)-tilting module \((M,P)\) to \(\underset{\downarrow(f,0)}{\bigcup(f,0)}\,,\) where \(\underset{\downarrow f}{\bigcup f}\) is a minimal projective presentation of \(M\).
Let \(U\in\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) be a basic silting object. Inside of \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) there is a conflation
\[\Lambda\xrightarrow{f}U_{0}\to U_{1}\overset{g}{\dashrightarrow}\Lambda[1] \tag{1.2}\]
where \(f\) is a minimal left \(U\)-approximation of \(\Lambda\) and \(g\) a minimal right \(U\)-approximation of \(\Lambda[1]\). We will denote \(U_{\lambda}\) and \(U_{\rho}\) the direct summands of \(U\) satisfying that \(U=U_{\lambda}\oplus U_{\rho}\), \(U_{0}\in\operatorname{add}(U_{\lambda})\) and \(U_{1}\in\operatorname{add}(U_{\rho})\), respectively. We note that the short exact sequence (1.1) can be obtained by applying the cohomological functor \(H^{*}(-)\) to the triangle (1.2).
The following is an application of a more general theorem concerning cotorsion pairs in triangulated categories:
**Theorem C**.: _[_20_, Theorem 3.6]_ _There is a well defined map_
\[\Phi:\operatorname{cotor}\mathcal{K}_{\Lambda}\to\operatorname{tors}\Lambda\]
_between the set \(\operatorname{cotor}\mathcal{K}_{\Lambda}\) of cotorsion pairs in \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) and the set \(\operatorname{tors}\Lambda\) of torsion pairs in \(\operatorname{mod}\Lambda\). This map restricts to a bijection between_
1. _Functorially finite torsion pairs in_ \(\operatorname{mod}\Lambda\)_._
2. _Complete cotorsion pairs in_ \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_._
In [1], the authors constructed an explicit bijection \(\Psi\) between cotorsion pairs in a general extriangulated category \(\mathcal{K}\) and its silting subcategories. When \(\mathcal{K}=\mathcal{K}_{\Lambda}\), this gives
**Theorem D**.: _[_1_, Theorem 5.7]_ _The following sets are in one-to-one correspondence:_
1. _Isomorphism classes of basic silting objects in_ \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_._
2. _Complete cotorsion pairs in_ \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_._
_This correspondance takes a complete cotorsion pair \((\mathcal{X},\mathcal{Y})\) and sends it to \(\Psi((\mathcal{X},\mathcal{Y}))=U\), where \(U\) is a basic additive generator of the silting category \(\mathcal{X}\cap\mathcal{Y}\)._
The notion of thick subcategory of an extriangulated category was first introduced in [21] in order to generalize the notion of localization of both exact and triangulated categories. In this paper, we complete the "mirror" version of Theorem A in \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) using thick subcategories. This is done in such a way that all the bijections appearing in the previous theorems commute with each other.
**Theorem (3.1)**.: _Let \(\Lambda\) be a finite-dimensional algebra over a field, and let \(\mathcal{K}_{\Lambda}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). There exist well defined maps_
\[\operatorname{cotor}\mathcal{K}_{\Lambda}\xleftarrow{\beta}\text{thick}\, \mathcal{K}_{\Lambda}\]
_such that when restricted to the set \(\operatorname{c-cotor}\mathcal{K}_{\Lambda}\) of complete cotorsion pairs and the set \(\operatorname{inj-thick}\mathcal{K}_{\Lambda}\) of thick subcategories with enough injectives, they are inverse of each other._
**Theorem (3.12)**.: _Let \(\Lambda\) be a finite-dimensional algebra over a field and take \(\mathcal{K}_{\Lambda}\) as before. There exist inclusion-reversing maps_
_such that, when restricted to thick subcategories with enough injectives and the set \(\text{\rm f-wide}\,\Lambda\) of left finite wide subcategories, they make the following diagram commute_
_In particular, \(\mathscr{W}\) and \(U\in\operatorname{silt}\mathcal{K}_{\Lambda}\mapsto\operatorname{thick}(U_{ \rho})\in\operatorname{inj\text{-thick}}\mathcal{K}_{\Lambda}\) are bijective._
Putting all previous theorems together we get the following result.
**Corollary 1.1**.: _There are explicit bijections between_
1. _Isomorphism classes of basic silting objects in_ \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_._
2. _Complete cotorsion pairs in_ \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_._
3. _Thick subcategories in_ \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) _with enough injectives._
_These bijections are compatible with those in Theorems A, B, C, and D. In other words, the following diagram commutes_
## Acknowledgments
I am grateful to P.-G. Plamondon for the many constructive discussions, his availability, and his support throughout the development of my Ph.D. thesis, from which this article emanates. I would also like to thank Yann Palu for his helpful
comments, in particular about Proposition 3.9. Lastly, I thank the ISM Discovery School on Mutations organizers, where I acquired crucial tools for the progress of this work.
## 2. Preliminaries
We fix \(\Bbbk\) an algebraically closed field of characteristic zero. Let \(\Lambda\) be a finite-dimensional \(\Bbbk\)-algebra. We will mostly consider the case where \(\Lambda\cong\Bbbk Q/I\) with \(Q=(Q_{0},Q_{1})\) a finite quiver and \(I\) an admissible ideal of the path algebra \(\Bbbk Q\). We write \(S_{i}\) for the \(i\)-th simple module associated to a vertex \(i\in Q_{0}\) with corresponding projective cover \(P_{i}\twoheadrightarrow S_{i}\). Recall that we can associate to \(\Lambda\) the triangulated category \(\mathcal{D}^{b}(\operatorname{mod}\Lambda)\) of bounded complexes of finite-dimensional modules, where \(X[1]\) will denote the shift of any \(X\in\mathcal{D}^{b}(\operatorname{mod}\Lambda)\). We consider as well the category of bounded complexes of projective modules \(\mathcal{K}^{b}(\operatorname{mod}\Lambda)\), and we will denote by \(\mathcal{K}^{[a,b]}(\operatorname{proj}\Lambda)\) with \(a\leq b\in\mathbb{Z}\) the extension-closed subcategory of \(\mathcal{K}^{b}(\operatorname{mod}\Lambda)\) of complexes concentrated between degrees \(a\) and \(b\). In particular, we will study the category \(\mathcal{K}_{\Lambda}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\), which we view as the category of morphisms between projective modules up to homotopy.
Let \(n=|Q_{0}|\). Denote by \(K_{0}(\operatorname{mod}\Lambda)\cong K_{0}(\mathcal{D}^{b}(\operatorname{ mod}\Lambda))\) the Grothendieck group of \(\operatorname{mod}\Lambda\), which is canonically isomorphic to the lattice \(\bigoplus_{i=1}^{n}\mathbb{Z}[S_{i}]\). Similarly, we consider its dual \(K_{0}(\operatorname{proj}\Lambda)\simeq K_{0}(\mathcal{K}^{b}(\operatorname{ mod}\Lambda))\cong K_{0}(\mathcal{K}_{\Lambda})\cong\bigoplus_{i=1}^{n} \mathbb{Z}[P_{i}]\). The Euler form associated to these two groups is given by
\[\langle-,-\rangle:K_{0}(\operatorname{proj}\Lambda)\times K_{0}( \operatorname{mod}\Lambda)\longrightarrow\mathbb{Z}\] \[([P_{j}],[S_{i}])\mapsto\langle[P_{j}],[S_{i}]\rangle=\begin{cases} 1&i=j\\ 0&i\neq j\end{cases}\]
In particular, for every \(M\in\operatorname{mod}\Lambda\) and \(X=\begin{matrix}X^{-1}\\ \downarrow_{x}\end{matrix}\in\mathcal{K}_{\Lambda}\), this pairing is given by
\[\langle[X],[M]\rangle=\langle[X^{0}]-[X^{-1}],[M]\rangle=\dim_{\Bbbk}( \operatorname{Hom}_{\Lambda}(X^{0},M))-\dim_{\Bbbk}(\operatorname{Hom}_{ \Lambda}(X^{-1},M)).\]
### Extriangulated categories
Extriangulated categories were introduced by Nakaoka and Palu in [19] as a way to generalize both triangulated and exact categories. Since \(\mathcal{K}_{\Lambda}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) is an extension-closed subcategory of the triangulated category \(\mathcal{D}^{b}(\operatorname{mod}\Lambda)\), then it is extriangulated. In this setting, the bifunctor associated to \(\mathcal{K}_{\Lambda}\) that gives its extriangulated structure is given by \(\mathbb{E}_{\mathcal{K}_{\Lambda}}(X,Y)=\operatorname{Hom}_{\mathcal{D}^{b}( \operatorname{mod}\Lambda)}(X,Y[1])\). A **conflation** in an extriangulated category is the generalization of what would be a triangle in a triangulated category, or an exact sequence in an exact category. We restrict this notion to our setting.
**Definition 2.1**.: A sequence of morphisms \(X\xmapsto{f}Y\xmapsto{g}Z\) in \(\mathcal{K}_{\Lambda}\) is a **conflation** if \(Z\cong\operatorname{Cone}(f)\), or equivalently, if there exists a map \(h:Z\to X[1]\) such that \((f,g,h)\) is a triangle in \(\mathcal{D}^{b}(\operatorname{mod}\Lambda)\). In this scenario, \(f\) is said to be an **inflation** and \(g\) a **deflation**.
**Remark 2.2**.: Recall that the category \(\mathcal{C}_{\Lambda}=\mathcal{C}^{[-1,0]}(\operatorname{proj}\Lambda)\) of morphism between projective modules is exact, thus, extriangulated. Indeed, its set of conflations is given by those sequences \(A\xmapsto{u}B\xmapsto{v}C\) of objects in \(\mathcal{C}_{\Lambda}\) such that the sequences \(0\to A^{i}\xrightarrow{u^{i}}B^{i}\xrightarrow{v^{i}}C^{i}\to 0\) are exact in \(\operatorname{mod}\Lambda\) for \(i=-1,0\). Let \(X\xmapsto{f}Y\xmapsto{g}Z\)
be a conflation in \(\mathcal{K}_{\Lambda}\) and fix representatives \(x,y,z\) of the differentials of \(X,Y,Z\) respectively. Chose as well \(h:Z^{-1}\to X^{0}\), a representative of the morphism \(Z\dashrightarrow X[1]\) associated to our conflation. Then we must have an isomorphism in \(\mathcal{K}_{\Lambda}\)
\[Y\cong\operatorname{Cocone}(Z^{-1}\dashrightarrow X[1])[-1] \cong\begin{array}{c}X^{-1}\oplus Z^{-1}\\ \left\downarrow\left(\begin{smallmatrix}x&h\\ 0&z\end{smallmatrix}\right).\end{array}\] \[X^{0}\oplus Z^{0}\]
Since \(\mathcal{K}_{\Lambda}\) is equivalent to the (extriangulated) quotient (see [20]) \(\mathcal{C}_{\Lambda}/\{P\xrightarrow{f}Q\mid f\text{ isomorphism}\}\), if we choose a minimal representative \(y\) of \(Y\), that is, such that it satisfies that \(y\not\cong\begin{pmatrix}y^{\prime}&0\\ 0&\operatorname{Id}_{Q}\end{pmatrix}\) for all \(0\neq Q\in\operatorname{proj}\Lambda\), then there exists \(P\in\operatorname{proj}\Lambda\) and a diagram
\[\begin{array}{c}Y^{-1}\oplus P\xrightarrow{\simeq}X^{-1}\oplus Z^{-1}\\ \left\downarrow\left(\begin{smallmatrix}y&0\\ 0&\operatorname{Id}_{P}\end{smallmatrix}\right)\right.\end{array}\] \[Y^{0}\oplus P\xrightarrow{\simeq}X^{0}\oplus Z^{0}\]
that is commutative inside \(\operatorname{mod}\Lambda\). That is, the obtained sequence \(X\mapsto Y\oplus\begin{smallmatrix}P\\ {}_{\shortstack{\shortstack{\shortstack{\shortstack{\shortstack{ \shortstack{\shortstack{\shortstack{\shortstack{\shortstack{\shortstack{ \shortstackstack{\shortstackstack{\shortstackstack{\shortstackstackstack{ \shortstackstackstackstack{ \shortstackstackstackstackstack{ \shortstackstackstackstackstackstack{ \shortstackstackstackstackstackstack{ }}}{ {\shortstackstackstackstackstackstack{{\shortstackstackstackstack{ }}{{\shortstackstackstackstack{\shortstackstackstack{ }}{\shortstackstackstackstack{\shortstackstack{ }}{\shortstackstack{\shortstackstack{\stackstackstackstack{ }}{\shortstackstackstack{\shortstackstack{\stackstackstackstack{ }}{\shortstackstack{\stackstackstack{\stackstackstackstack{ }}{\shortstackstackstack{\stackstackstack{\stackstackstackstack{ }stackstackstackstackstack{\stackstackstackstack{ }stackstackstackstack{stackstackstackstack{\stackstackstackstackstack{ }stackstackstackstackstack{stackstackstackstack{ }stackstackstackstackstack{stackstackstackstack{stackstackstackstackstack{ }stackstackstackstackstackstackstack{stackstackstackstackstack{ }stackstackstackstackstack{stackstackstackstack{stackstackstackstackstack{ }stackstackstackstackstackstack{stackstackstackstackstack{ }stackstackstackstack{stackstackstackstackstack{stackstackstackstackstack{ stackstackstackstackstackstackstackstackstackstackstackstackstackstack{
**Definition 2.4**.: [20] Let \(\mathcal{K}\) be an extriangulated category. We say that a subcategory \(\mathcal{T}\subset\mathcal{K}\) is **thick**, if it is closed under direct summands and if for every conflation
\[X\mapsto Y\twoheadrightarrow Z\]
if two of the terms lie in \(\mathcal{K}\), then the third does as well. That is, \(\mathcal{K}\) is closed under extensions, cones and cocones. For all \(\mathcal{C}\subset\mathcal{K}\), we denote by \(\operatorname{thick}(\mathcal{C})\) the smallest thick subcategory that contains \(\mathcal{C}\), and we write \(\operatorname{thick}\mathcal{K}\) for the set of thick subcategories of \(\mathcal{K}\).
**Definition 2.5**.: Let \(\mathcal{K}\) be an extriangulated category. We say that a subcategory \(\mathcal{U}\subset\mathcal{K}\) is **presilting** if it is closed under direct sums and summands and \(\operatorname{\mathbb{E}}^{i}(\mathcal{U},\mathcal{U})=0\) for all \(i>0\). We say that \(\mathcal{U}\) is **silting** if \(\operatorname{thick}(\mathcal{U})=\mathcal{K}\). An object \(U\in\mathcal{K}\) is (pre)silting if the category \(\operatorname{add}(U)\) is. We denote \(\operatorname{silt}\mathcal{K}\) the set of isomorphism classes of basic silting objects in \(\mathcal{K}\).
**Proposition 2.1**.: [1, Lemma 5.3] _Let \(\mathcal{V}\subset\mathcal{K}\) be a silting subcategory. If \(\mathcal{U}\) is a presilting subcategory with \(\mathcal{V}\subset\mathcal{U}\), then \(\mathcal{U}=\mathcal{V}\)._
**Proposition 2.2**.: [1, Proposition 5.4] _Let \(\mathcal{K}\) be an extriangulated category that contains a silting object. Then each silting category admits an additive generator. Moreover, if \(\mathcal{K}\) is a Krull-Schmidt category, then \(U\mapsto\operatorname{add}U\) gives a bijection between \(\operatorname{silt}\mathcal{K}\) and the set of silting subcategories._
**Proposition 2.3**.: _[_1_, Proposition 4.10]_ _Let \(\mathcal{U}\) be a presilting subcategory of \(\mathcal{K}\). Then the following statements hold._
1. \(\mathcal{U}^{\vee}\) _is the smallest subcategory containing_ \(U\) _and closed under extensions, cocones and direct summands. Moreover, if_ \(\mathcal{U}\) _is closed under cones, then_ \(\mathcal{U}^{\vee}=\operatorname{thick}(\mathcal{U})\)_._
2. \(\mathcal{U}^{\wedge}\) _is the smallest subcategory containing_ \(U\) _and closed under extensions, cones and direct summands. Moreover, if_ \(\mathcal{U}\) _is closed under cocones, then_ \(\mathcal{U}^{\wedge}=\operatorname{thick}(\mathcal{U})\)_._
**Definition 2.6**.: [1, Definition 1.7] Let \(\mathcal{K}\) be an extriangulated category. We say that a pair of subcategories \((\mathcal{X},\mathcal{Y})\) is a **cotorsion pair** if they are both full and additive and they satisfy
1. \(\operatorname{\mathbb{E}}(X,\mathcal{Y})=0\) if and only if \(X\in\mathcal{X}\).
2. \(\operatorname{\mathbb{E}}(\mathcal{X},Y)=0\) if and only if \(Y\in\mathcal{Y}\).
In other words \(\mathcal{Y}=\mathcal{X}^{\perp_{1}}=\{Y\in\mathcal{K}\mid\operatorname{ \mathbb{E}}(X,Y)=0\ \forall\ X\in\mathcal{X}\}\) and \(\mathcal{X}={}^{\perp_{1}}\mathcal{Y}=\{X\in\mathcal{K}\mid\operatorname{ \mathbb{E}}(X,Y)=0\ \forall\ Y\in\mathcal{Y}\}\). We denote by \(\operatorname{cotor}\mathcal{K}\) the set of all cotorsion pairs in \(\mathcal{K}\). We say that \((\mathcal{X},\mathcal{Y})\) is **complete** ([1, Definition 4.1]), if additionally \(\mathcal{K}=\operatorname{Cone}(\mathcal{Y},\mathcal{X})=\operatorname{Cocone }(\mathcal{Y},\mathcal{X})\). We denote by \(\operatorname{c-cotor}\mathcal{K}\subset\operatorname{cotor}\mathcal{K}\) the subset of complete cotorsion pairs of \(\mathcal{K}\).
**Remark 2.7**.: As we have noted before, when \(\mathcal{K}=\mathcal{K}_{\Lambda}\) it is always true that \(\operatorname{\mathbb{E}}^{2}(X,Y)=0\) for all \(X,Y\in\mathcal{K}\). In particular, \(\operatorname{\mathbb{E}}^{2}(\mathcal{X},\mathcal{Y})=0\) for all cotorsion pairs \((\mathcal{X},\mathcal{Y})\). We say that a cotorsion pair is **hereditary** ([1, Definition 4.1]) when it satisfies this property. We remark as well that all projective objects must lie in \(\mathcal{X}\), all injective objects belong to \(\mathcal{Y}\) and since \(\mathcal{K}_{\Lambda}=\operatorname{Cone}(\operatorname{proj}\mathcal{K}_{ \Lambda},\operatorname{proj}\mathcal{K}_{\Lambda})=\operatorname{Cocone}( \operatorname{inj}\ \mathcal{K}_{\Lambda},\operatorname{inj}\ \mathcal{K}_{\Lambda})\) we have that \(\mathcal{K}_{\Lambda}=\mathcal{X}^{\wedge}=\mathcal{Y}^{\vee}\). In a general extriangulated category \(\mathcal{K}\), we say that \((\mathcal{X},\mathcal{Y})\) is **bounded** ([1]) if it satisfies that \(\mathcal{K}=\mathcal{X}^{\wedge}=\mathcal{Y}^{\vee}\).
Recall that a **torsion pair** in \(\operatorname{mod}\Lambda\) is a pair of subcategories \((\mathcal{T},\mathcal{F})\) closed under extensions such that
1. \(\mathcal{T}\) is closed under factor modules.
2. \(\mathcal{F}\) is closed under submodules.
3. \(\mathcal{F}=\mathcal{T}^{\perp}\) and \(\mathcal{T}={}^{\perp}\mathcal{F}\).
We say that a torsion pair is **functorially finite** if every \(\Lambda\)-module admits both a right and a left \(\mathcal{T}\)-approximation. We denote by \(\operatorname{f-tors}\Lambda\subset\operatorname{tors}\Lambda\) the subset of functorially finite torsion classes in \(\operatorname{mod}\Lambda\).
**Theorem 2.4**.: _[_20_, Theorem 3.6]_ _Let \(\mathcal{K}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). There are well defined maps:_
\[\begin{CD}\operatorname{cotor}\mathcal{K}@>{\Phi}>{}>\operatorname{tors} \Lambda\\ @V{\cup\mid}V{}V@V{\cup\mid}V{}V\\ \operatorname{c-cotor}\mathcal{K}@>{}>{\Theta}>\operatorname{f-tors}\Lambda \end{CD}\]
_given by_
\[\Phi((\mathcal{X},\mathcal{Y}))=(H^{0}(\mathcal{Y}),H^{0}(\mathcal{Y})^{\perp})\]
_and_
\[\Theta(\mathcal{T},\mathcal{F})=({}^{\perp_{1}}\mathcal{Z},\mathcal{Z})\]
_where \(\mathcal{Z}=\left(H^{0}\right)^{-1}(\mathcal{T})\). Moreover, \(\Theta\) and \(\Phi\) are inverse to each other when restricted to \(\operatorname{c-cotor}\mathcal{K}\) and \(\operatorname{f-tors}\Lambda\)._
The following is an application of Theorem 5.7 in [1] to the extriangulated category \(\mathcal{K}_{\Lambda}\). The statement follows from the fact that, in \(\mathcal{K}_{\Lambda}\), all complete cotorsion pairs are hereditary and bounded. The original theorem gives a bijection between \(\operatorname{c-cotor}\mathcal{K}_{\Lambda}\) and the set of its silting subcategories. Since \(\mathcal{K}_{\Lambda}\) is Krull-Schmidt, Proposition 2.1 allows us to state the bijection in terms of \(\operatorname{silt}\mathcal{K}_{\Lambda}\).
**Theorem 2.5**.: _[_1_, Theorem 5.7]_ _There is a one-to-one correspondence_
\[\operatorname{c-cotor}\mathcal{K}_{\Lambda}@<{\Psi}<{}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s}<\operatorname{s}<\operatorname{s}< \operatorname{s}<\operatorname{s
\(X_{P}\twoheadrightarrow P\). But since \(P\) is projective, this conflation must split and \(P\in\mathcal{X}\), as \(\mathcal{X}\) is closed under direct summands.
Suppose now that \(\mathcal{K}=\mathcal{K}_{\Lambda}\) and let \(\mathcal{X}\) be a resolving and contravariantly finite subcategory of \(\mathcal{K}_{\Lambda}\). Then for every object \(C\in\mathcal{K}_{\Lambda}\) a right \(\mathcal{X}\)-approximation of \(C\) must be a deflation. Indeed, Let \(X\overset{c}{\hookrightarrow}C\) be an \(\mathcal{X}\)-approximation of \(C\). Since \(\mathcal{X}\) is resolving, there exists a conflation \(Y\overset{f}{\longrightarrow}X^{\prime}\overset{g}{\rightharpoonup}C\) such that \(Y\in\mathcal{K}\) and \(X^{\prime}\in\mathcal{X}\). Thus, there exists a map \(X^{\prime}\overset{x}{\longrightarrow}X\) such that \(g=c\cdot x\). By the octahedral axiom, we have a triangle \(\operatorname{Cone}(x)\to Y[1]\to\operatorname{Cone}(c)\) such that the following diagram commutes
But since \(\operatorname{Cone}(x)\in\mathcal{K}^{[-2,0]}(\operatorname{proj}\Lambda)\) and \(Y[1]\in\mathcal{K}^{[-2,-1]}(\operatorname{proj}\Lambda)\), \(\operatorname{Cone}(c)\) must be in \(\mathcal{K}^{[-2,0]}(\operatorname{proj}\Lambda)\cap\mathcal{K}^{[-3,-1]}( \operatorname{proj}\Lambda)=\mathcal{K}^{[-2,-1]}(\operatorname{proj}\Lambda)\). Then the triangle \(\operatorname{Cone}(c)[-1]\to X\overset{c}{\rightharpoonup}C\) lies ins \(\mathcal{K}_{\Lambda}\) and \(c\) is a deflation. Since \(\mathcal{K}_{\Lambda}\) satisfies WIC, is Krull-Schmidt and Hom-finite, we can also find an approximation that is minimal.
**Proposition 2.6**.: _[_1_, _Dual of Lemma 3.1]_ _Assume that \(\mathcal{K}\) is Krull-Schmidt, Hom-finite and satisfies WIC. Let \(\mathcal{C}\subset\mathcal{K}\) be an extension-closed subcategory of \(\mathcal{K}\). If we have a conflation \(X\mapsto C\twoheadrightarrow Y\) where the corresponding deflation is a minimal right \(\mathcal{C}\)-approximation of \(Y\), then \(X\in\mathcal{C}^{\perp_{1}}\)._
**Proposition 2.7**.: _[_1_, Proposition 5.15]_ _Let \(\mathcal{K}\) be a Krull-Schmidt, Hom-finite extriangulated category satisfying WIC and having enough projectives and injectives. If \((\mathcal{X},\mathcal{Y})\) is a hereditary complete cotorsion pair, then \(\mathcal{X}\) is a contravariantly finite resolving subcategory of \(\mathcal{K}\). Reciprocally, if \(\mathcal{X}\in\operatorname{f-res}\mathcal{K}\), then \((\mathcal{X},\mathcal{X}^{\perp_{1}})\) is a complete cotorsion pair._
Recall that there is a one-to-one correspondence between the set \(\operatorname{silt}\mathcal{K}_{\Lambda}\) of isomorphism classes of basic silting objects in \(\mathcal{K}_{\Lambda}\) and the set \(\operatorname{s\tau}\)-\(\operatorname{tilt}\Lambda\) of support \(\tau\)-tilting basic modules in \(\operatorname{mod}\Lambda\) given by the map \(H^{0}:\mathcal{K}_{\Lambda}\to\operatorname{mod}\Lambda\) (Theorem B). Moreover, the map \(M\mapsto\vartheta(M)=(\operatorname{Fac}(M),M^{\perp})\) gives a correspondence between the sets \(\operatorname{s\tau}\)-\(\operatorname{tilt}\operatorname{mod}\Lambda\) and \(\operatorname{f-tors}\Lambda\) (Theorem A). The following result shows that these bijections are compatible to the ones described in Theorem 2.4 and Theorem 2.5.
**Proposition 2.8**.: _Let \(\Lambda\) be a finite-dimensional \(\Bbbk\)-algebra and consider the bijections \(\Phi:\operatorname{c-cotor}\mathcal{K}_{\Lambda}\to\operatorname{f-tors}\Lambda\) of Theorem 2.4 as well as \(\Psi:\operatorname{c-cotor}\mathcal{K}_{\Lambda}\to\operatorname{silt} \mathcal{K}_{\Lambda}\) of Theorem 2.5. The following diagram_
_commutes._
Proof.: Let \((\mathcal{X},\mathcal{Y})\) be a complete cotorsion pair in \(\mathcal{K}_{\Lambda}\). By Theorem 2.5, since \(\mathcal{X}\) is contravariantly finite and resolving, the complex \(\Lambda[1]=\underset{\downarrow}{\Lambda}\) admits a conflation
(2.1)
where the corresponding deflation is a minimal right \(\mathcal{X}\)-approximation and \(U_{\mathcal{Y}}\in\mathcal{Y}\) by Proposition 2.6. Since \(\mathcal{Y}\) is closed by extensions and \(\Lambda[1]\in\operatorname{inj}\,\mathcal{K}_{\Lambda}\subset\mathcal{Y}\), we get that \(U_{\mathcal{X}}\in\mathcal{X}\cap\mathcal{Y}\). Moreover, since the sequence \(\Lambda\mathop{\mathchoice{\vbox{\hbox{$ i$}}}{\vbox{\hbox{$ i$}}}{\vbox{\hbox{$ i$}}}{\vbox{\hbox{$ i$}}}{\vbox{\hbox{$ i$}}}}\nolimits_{\mathcal{Y}}U_{\mathcal{Y}}\mathop{\mathchoice{ \vbox{\hbox{$ i$}}}{\vbox{\hbox{$ i$}}}{\vbox{\hbox{$ i$}}}{\vbox{\hbox{$ i$}}}}\nolimits_{\mathcal{Y}}U_{\mathcal{X}}\) is also a conflation, \(\mathcal{X}\) is closed under extensions and \(\Lambda\in\operatorname{proj}\mathcal{K}_{\Lambda}\subset\mathcal{X}\), then \(U_{\mathcal{Y}}\in\mathcal{X}\cap\mathcal{Y}\). This implies that \(\operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})\subset\mathcal{X} \cap\mathcal{Y}\), and since \(\Lambda\in\operatorname{thick}(\operatorname{add}(U_{\mathcal{X}}\oplus U_{ \mathcal{X}}))\), we obtain that \(\operatorname{thick}(\operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{X} }))=\mathcal{K}_{\Lambda}\). By Proposition 2.1, we have that \(\mathcal{X}\cap\mathcal{Y}=\operatorname{add}(U_{\mathcal{X}}\oplus U_{ \mathcal{Y}})\), which gives \(\Psi((\mathcal{X},\mathcal{Y}))=U_{\mathcal{X}\cap\mathcal{Y}}\), where \(U_{\mathcal{X}\cap\mathcal{Y}}\) is the basic object such that \(\operatorname{add}(U_{\mathcal{X}\cap\mathcal{Y}})=\operatorname{add}(U_{ \mathcal{X}}\oplus U_{\mathcal{X}})\).
Let \(\mathcal{T}=H^{0}(\mathcal{Y})\) be the torsion class associated to \(\Phi((\mathcal{X},\mathcal{Y}))\). Applying \(H^{*}\) to the conflation (2.1), we get the exact sequence \(H^{0}(U_{\mathcal{Y}})\to H^{0}(U_{\mathcal{X}})\to 0\). Since \(\mathcal{T}\) is closed under quotients, then \(\operatorname{Fac}(H^{0}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}}))\subset \mathcal{T}\). On the other hand, by Theorem 2.5, we know that \(\mathcal{Y}=\operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})^{\wedge}\), in particular, \(\forall\ Y\in\mathcal{Y}\) there exists a conflation \(Y^{\prime}\rightharpoonup U\twoheadrightarrow Y\) where \(Y^{\prime}\in\mathcal{Y}\) and \(U\in\operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})\). Applying again the functor \(H^{*}\), we get the exact sequence
\[H^{0}(Y^{\prime})\to H^{0}(U)\to H^{0}(Y)\to H^{1}(Y^{\prime})=0\]
which implies that \(H^{0}(Y)\in\operatorname{Fac}H^{0}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})\) for all \(Y\in\mathcal{Y}\). Then \(\mathcal{T}=\operatorname{Fac}H^{0}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})= \operatorname{Fac}H^{0}(U_{\mathcal{X}\cap\mathcal{Y}})\), which gives the result.
**Corollary 2.9**.: _For any complete cotorsion pair \((\mathcal{X},\mathcal{Y})\) in \(K^{[-1,0]}(\operatorname{proj}\Lambda)\), there exists conflations_
_where_
1. \(U_{\mathcal{X}}\in\mathcal{X}\) _and_ \(U_{\mathcal{Y}}\in\mathcal{Y}\)_;_
2. \(U_{\mathcal{X}}\oplus U_{\mathcal{Y}}\) _is a silting object such that_ \(\mathcal{X}\cap\mathcal{Y}=\operatorname{add}(U_{\mathcal{X}}\oplus U_{ \mathcal{Y}})\)_;_
3. \(\pi_{\mathcal{X}}\) _is a minimal right_ \(\mathcal{X}\)_-approximation of_ \(\Lambda[1]\)_;_
4. \(i_{\mathcal{Y}}\) _is a minimal left_ \(\mathcal{Y}\)_-approximation of_ \(\Lambda\)_._
**Remark 2.10**.: When \(\mathcal{X}=(\operatorname{add}U)^{\vee}\) with \(U\in\operatorname{silt}\mathcal{K}_{\Lambda}\), then \(U_{1}\twoheadrightarrow\Lambda[1]\) is a minimal \(U\)-right approximation if and only if it is a minimal \(\mathcal{X}\)-right approximation. Indeed, by the proof of Proposition 2.8, we know that \(\operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})=\mathcal{X}\cap \mathcal{Y}=\operatorname{add}U\)
which implies that \(\pi_{\mathcal{X}}:U_{\mathcal{X}}\twoheadrightarrow\Lambda[1]\) is a minimal \(U\)-right approximation since \(\operatorname{add}U\subset\mathcal{X}\). Consider now \(\pi:U_{1}\twoheadrightarrow\Lambda[1]\) a minimal \(U\)-right approximation. Since \(U_{1}\in\operatorname{add}U\subset\mathcal{X}\), there exists a map \(f:U_{1}\to U_{\mathcal{X}}\) such that \(\pi=\pi_{\mathcal{X}}\circ f\). But \(U_{\mathcal{X}}\in\operatorname{add}U\), so there is \(g:U_{\mathcal{X}}\to U_{1}\) such that \(\pi_{\mathcal{X}}=\pi\circ g\). Since \(\pi=\pi\circ(g\circ f)\) and \(\pi\) is minimal, \(g\circ f\) must be an isomorphism. Using that \(\pi_{\mathcal{X}}\) is minimal as well, \(f\circ g\) is also an isomorphism such that \(\pi_{\mathcal{X}}=\pi_{\mathcal{X}}\circ(f\circ g)\). We conclude that \(U_{\mathcal{X}}\) and \(U_{1}\) are isomorphic.
## 3. Main results
### Thick subcategories and cotorsion pairs
The goal of this section is to prove the following result:
**Theorem 3.1**.: _Let \(\Lambda\) be a finite-dimensional \(\Bbbk\)-algebra and let \(\mathcal{K}_{\Lambda}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). There exist maps_
\[\operatorname{cotor}\mathcal{K}_{\Lambda}\xleftarrow{\beta}\text{thick}\, \mathcal{K}_{\Lambda}\]
_such that when restricted to complete cotorsion pairs and thick subcategories with enough injectives, they are inverse of each other._
Proof.: It follows from Proposition 3.2, Lemma 3.5, and Lemma 3.10.
**Proposition 3.2**.: _Let \(\Lambda\) be a finite-dimensional \(\Bbbk\)-algebra. There exist a well defined map_
\[\operatorname{res}\mathcal{K}_{\Lambda}\xlongrightarrow{\beta}\operatorname{ thick}\mathcal{K}_{\Lambda}\]
_which takes any \(\mathcal{X}\in\operatorname{res}\mathcal{K}_{\Lambda}\) and sends it to_
\[\beta(\mathcal{X})=\{X\in\mathcal{X}\ |\ \forall\text{ conflation }X\rightsquigarrow X^{\prime} \twoheadrightarrow X^{\prime\prime}\text{ such that }X^{\prime}\in\mathcal{X},\text{ then }X^{\prime\prime}\in\mathcal{X}\}.\]
Proof.: Let \(\mathcal{X}\) be a resolving subcategory of \(\mathcal{K}_{\Lambda}\). First, we prove that \(\beta(\mathcal{X})\) is closed under direct summands. Suppose \(X=X^{\prime}\oplus X^{\prime\prime}\in\beta(\mathcal{X})\subset\mathcal{X}\), then \(X^{\prime}\) and \(X^{\prime\prime}\) are in \(\mathcal{X}\) since \(\mathcal{X}\) is closed under direct summands. Let \(X^{\prime}\xlongrightarrow{a}A\xlongrightarrow{B}\) be a conflation with \(A\in\mathcal{X}\), then \(X^{\prime}\oplus X^{\prime\prime}\xlongrightarrow{a}A\oplus X^{\prime\prime}\xlongrightarrow{B}\) is also a conflation with \(A\oplus X^{\prime\prime}\in\mathcal{X}\), which implies that \(B\in\mathcal{X}\) since \(X\in\beta(\mathcal{X})\). Thus, \(X^{\prime}\in\beta(\mathcal{X})\).
Next, we prove that \(\beta(\mathcal{X})\) is closed under extensions. Consider a conflation \(X\mapsto X^{\prime}\twoheadrightarrow X^{\prime\prime}\) in \(\mathcal{K}_{\Lambda}\) such that \(X,X^{\prime\prime}\in\beta(\mathcal{X})\). Since \(\mathcal{X}\) is closed under extensions, \(X^{\prime}\) is in \(\mathcal{X}\). Take \(X^{\prime}\mapsto A\twoheadrightarrow B\) with \(A\in\mathcal{X}\). By the octahedral axiom, there exists \(C\in\mathcal{K}_{\Lambda}\) such that the diagram
commutes and such that the last column and the middle row are conflations. Since \(X\in\beta(\mathcal{X})\), \(C\) must be in \(\mathcal{X}\). But \(X^{\prime\prime}\in\beta(\mathcal{X})\) as well, so \(B\in\mathcal{X}\). This implies that \(X^{\prime}\in\beta(\mathcal{X})\) and \(\beta(\mathcal{X})\) is closed under extensions.
We now prove that \(\beta(\mathcal{X})\) is closed under cones. Let \(X\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}X^{\prime}\mathrel{ \mathop{\kern 0.0pt\rightharpoonup}}X^{\prime\prime}\) be a conflation with \(X,X^{\prime}\in\beta(\mathcal{X})\). In particular, \(X^{\prime}\in\mathcal{X}\), so \(X^{\prime\prime}\in\mathcal{X}\) by definition of \(\beta(\mathcal{X})\). Consider a conflation \(X^{\prime\prime}\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}A\mathrel{ \mathop{\kern 0.0pt\rightharpoonup}}B\) with \(A\in\mathcal{X}\). Since \(\operatorname{Hom}_{\mathcal{D}^{b}(\operatorname{mod}\Lambda)}(B[-1],X[1])=0\), there exists \(h:B[-1]\to X^{\prime}\) and \(C\in\mathcal{K}_{\Lambda}\), such that
is a commutative diagram where the second row and column are conflations. Since \(\mathcal{X}\) is closed under extensions and \(X,A\in\mathcal{X}\), we have that \(C\in\mathcal{X}\). Likewise, \(B\) must be in \(\mathcal{X}\), since \(X^{\prime}\in\beta(\mathcal{X})\), proving that \(\beta(\mathcal{X})\) is closed under cones. In order to prove that \(\beta(\mathcal{X})\) is closed under cocones, take now \(X\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}X^{\prime}\mathrel{ \mathop{\kern 0.0pt\rightharpoonup}}X^{\prime\prime}\) a conflation such that \(X^{\prime},X^{\prime\prime}\in\beta(\mathcal{X})\). Since \(\mathcal{X}\) is resolving, it is closed under cocones and thus, \(X\in\mathcal{X}\). Take \(X\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}A\mathrel{ \mathop{\kern 0.0pt\rightharpoonup}}B\) a conflation in \(\mathcal{K}_{\Lambda}\) with \(A\in\mathcal{X}\). Using the octahedral axiom, we get the commutative diagram
Since both \(\mathcal{K}\) and \(\mathcal{X}\) are closed by extensions, \(C\in\mathcal{X}\). Using that \(X^{\prime}\in\beta(\mathcal{X})\), we get that \(B\in\mathcal{X}\), which gives that \(X\in\beta(\mathcal{X})\), so it is closed under cocones.
**Proposition 3.3**.: _Let \(\mathcal{C}\subset\mathcal{K}_{\Lambda}\) be an extension-closed subcategory of \(\mathcal{K}_{\Lambda}\) that contains the zero object. Then_
\[\iota(\mathcal{C})=\{X\in\mathcal{K}_{\Lambda}\ |\ \exists\text{ an inflation }X\mathrel{ \mathop{\kern 0.0pt\rightharpoonup}}C\text{ with }C\in\mathcal{C}\}\]
_is a resolving subcategory. Moreover, if \(\mathcal{C}^{\prime}\subset\mathcal{C}\) is also closed under extensions, then \(\iota(\mathcal{C}^{\prime})\subset\iota(\mathcal{C})\)._
Proof.: Let \(\mathcal{C}\) be a extension-closed subcategory of \(\mathcal{K}_{\Lambda}\) containing the zero object. Note that for \(P\in\operatorname{proj}\Lambda\), \(P\to 0\to P[1]\) is always a conflation. Since \(0\in\mathcal{C}\), we have that \(\operatorname{proj}\Lambda\subset\iota(\mathcal{C})\). That \(\iota(\mathcal{C})\) is closed under cocones and direct summands follows directly from the definition of \(\iota\). Take \(X\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}X^{\prime}\mathrel{ \mathop{\kern 0.0pt\rightharpoonup}}X^{\prime\prime}\) a conflation where \(X,X^{\prime\prime}\in\iota(\mathcal{C})\). In particular, there is a conflation \(X^{\prime\prime}\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}C\mathrel{ \mathop{\kern 0.0pt\rightharpoonup}}W\) where \(C\in\mathcal{C}\). Using that \(\operatorname{Hom}_{\mathcal{D}^{b}(\operatorname{mod}\Lambda)}(W[-1],M[1])=0\) and the octahedral axiom, we get the commutative diagram
But \(X\in\iota(\mathcal{C})\) as well, so there exists a conflation \(X\mapsto C^{\prime}\twoheadrightarrow W^{\prime}\) with \(C^{\prime}\in\mathcal{C}\). Using the octahedral axiom once more, we can construct the commutative diagram
and since \(\mathcal{C}\) is closed under extensions, \(B\in\mathcal{C}\). Composing the inflations \(X^{\prime}\mapsto A\mapsto B\), we get that \(X^{\prime}\in\iota(\mathcal{C})\).
**Lemma 3.4**.: _Let \(\mathcal{X}\) be a contravariantly resolving subcategory of \(\mathcal{K}_{\Lambda}\). Then \(\,\iota(\mathcal{X})\subset\mathcal{X}\)._
Proof.: Let \(\mathcal{X}\) be a contravariantly finite resolving subcategory of \(\mathcal{K}_{\Lambda}\) and let \(X\in\iota(\mathcal{X})\). Take a conflation \(X\rightarrowtail T\twoheadrightarrow Y\) such that \(T\in\mathcal{X}\). Since \(\mathcal{X}\) is contravariantly finite, by Proposition 2.6 there exists \(X^{\prime}\in\mathcal{X}\), \(Y^{\prime}\in\mathcal{X}^{\perp_{1}}\) and a conflation \(Y^{\prime}\mapsto X^{\prime}\twoheadrightarrow X\) such that the corresponding deflation \(X^{\prime}\twoheadrightarrow X\) is a minimal \(\mathcal{X}\)-right approximation. By the octahedral axiom, there exists \(C\in\mathcal{K}\) such that the following diagram is commutative
But \(\mathbb{E}(T,Y^{\prime})=0\), since \(T\in\mathcal{X}\) and \(Y^{\prime}\in\mathcal{X}^{\perp_{1}}\). In particular, \(C\simeq T\oplus Y^{\prime}\). This implies that \(X^{\prime}\simeq X\oplus Y^{\prime}\), and therefore \(X\in\mathcal{X}\) because \(\mathcal{X}\) is closed under direct summands.
**Lemma 3.5**.: _Let \(\mathcal{X}\) be a contravariantly finite resolving category of \(\mathcal{K}_{\Lambda}\), then_
\[\iota(\beta(\mathcal{X}))=\mathcal{X}.\]
Proof.: Since \(\beta(\mathcal{X})\subset\mathcal{X}\), the previous lemma shows that \(\iota(\beta(\mathcal{X}))\subset\iota(\mathcal{X})\subset\mathcal{X}\). Consider now \(U_{\mathcal{X}}\) as in Corollary 2.9. We will show that \(U_{\mathcal{X}}\in\beta(\mathcal{X})\). Let
\(U_{\mathcal{X}}\xrightarrow{x}X\xrightarrow{\ \ }X^{\prime}\) be a conflation with \(X\in\mathcal{X}\). By the octahedral axiom, there exists \(W\in\mathcal{K}_{\Lambda}\) and a commutative diagram
such that the second line is a conflation. Since \(\pi_{\mathcal{X}}\) is a minimal \(\mathcal{X}\)-approximation, there exists \(x^{\prime}:X\to U_{\mathcal{X}}\) such that \(\pi^{\prime}_{\mathcal{X}}=\pi_{\mathcal{X}}\circ x^{\prime}\), which implies that \(\pi_{\mathcal{X}}\circ(x^{\prime}\circ x)=\pi^{\prime}_{\mathcal{X}}\circ x= \pi_{\mathcal{X}}\). Since \(\pi_{\mathcal{X}}\) is minimal, we get that \(x^{\prime}\circ x\) is an isomorphism. In particular, \(x\) is a section, which implies that \(X^{\prime}\) is a direct summand of \(X\in\mathcal{X}\). This gives that \(X^{\prime}\in\mathcal{X}\) and \(U_{\mathcal{X}}\in\beta(\mathcal{X})\).
Since we have an inflation \(U_{\mathcal{Y}}\mapsto U_{\mathcal{X}}\), \(U_{\mathcal{Y}}\in\iota(\beta(\mathcal{X}))\) and \(\operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})\subset\iota(\beta( \mathcal{X}))\). But \(\iota(\beta(\mathcal{X}))\) is closed under cocones, so by Proposition 2.3, \(\mathcal{X}=\operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})^{ \vee}\subset\iota(\beta(\mathcal{X}))\).
Lemma 3.5 tell us that, when restricted to contravariantly finite resolving categories, the map \(\beta\) is injective. The following results will allow us to explicitly describe the image of \(\beta\).
**Proposition 3.6**.: _Let \((\mathcal{X},\mathcal{Y})\) be a complete cotorsion pair in \(\mathcal{K}_{\Lambda}\), then_
\[\mathcal{X}=\operatorname{Cocone}(\mathcal{X}\cap\mathcal{Y},\mathcal{X}\cap \mathcal{Y}).\]
Proof.: Recall that \(\mathcal{U}=\mathcal{X}\cap\mathcal{Y}=\operatorname{add}(U_{\mathcal{X}} \oplus U_{\mathcal{Y}})\) and that \(\mathcal{X}=\mathcal{U}^{\vee}\) by Corollary 2.9. Let \(X\in\mathcal{X}\), there must exist \(m\in\mathbb{Z}_{\geq 0}\) such that \(X\in\mathcal{U}_{m}^{\vee}\), that is, we can find conflations
\[X\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}U_{0}\twoheadrightarrow X_{1}\dashrightarrow X[1] \tag{3.1}\]
\[X_{1}\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}U_{1}\twoheadrightarrow X_{2}\dashrightarrow X_{1}[1] \tag{3.2}\]
with \(X_{i}\in\mathcal{U}_{m-i}^{\vee}\subset\mathcal{X}\) for \(i=1,2\), and \(U_{0},U_{1}\in\mathcal{U}\). Shifting and rotating triangles (3.1) and (3.2), and using that \(\operatorname{Hom}_{\mathcal{D}^{b}(\operatorname{mod}\Lambda)}(X_{2},X[2])=0\), as well as the octahedral axiom, we get a commutative diagram
where the last row is a triangle. Then \(U_{0}\to U_{1}\to X_{2}\oplus X[1]\dashrightarrow U_{0}[1]\) is a triangle as well. Since \(U_{0}\in\mathcal{Y}\), \(\mathbb{E}(X_{2},U_{0})=0\), the morphism \(X_{2}\oplus X[1]\dashrightarrow U_{0}[1]\) must be of the form \(X_{2}\oplus X[1]\xrightarrow{\ \ (0,f)\ }U_{0}[1]\). This in turn implies that \(U_{1}\simeq X_{2}\oplus\operatorname{Cone}(f)[-1]\), thus \(U^{\prime}=\operatorname{Cone}(f[-1])=\operatorname{Cone}(f[-1])\) belongs to \(\mathcal{U}\) since \(\mathcal{U}\) is closed under direct summands. Remark that, by the commutativity of the previous diagram, \(f[-1]\) is exactly the inflation \(X\mathrel{\mathop{\kern 0.0pt\rightharpoonup}}U_{0}\). We get that \(U^{\prime}\simeq X_{0}\), and so, \(X\in\operatorname{Cocone}(\mathcal{U},\mathcal{U})\).
**Lemma 3.7**.: _Let \((\mathcal{X},\mathcal{Y})\) be a complete cotorsion pair in \(\mathcal{K}_{\Lambda}\) and consider the conflation \(U_{\mathcal{Y}}\mapsto U_{\mathcal{X}}\twoheadrightarrow\Lambda[1]\) as in Corollary 2.9. Then_
\[\beta(\mathcal{X})\cap\mathcal{Y}=\operatorname{add}(U_{\mathcal{X}}).\]
Proof.: By the proof of Lemma 3.5, we know that \(U_{\mathcal{X}}\in\beta(\mathcal{X})\cap\mathcal{Y}\). Since both \(\beta(\mathcal{X})\) and \(\mathcal{Y}\) are additive subcategories, we get that \(\operatorname{add}U_{\mathcal{X}}\subset\beta(\mathcal{X})\cap\mathcal{Y}\). Now take \(Y\in\beta(\mathcal{X})\cap\mathcal{Y}\subset\mathcal{X}\cap\mathcal{Y}= \operatorname{add}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})\) and suppose that \(Y\) is indecomposable. Recall that \(U_{\mathcal{X}}\) and \(U_{\mathcal{Y}}\) share no non-zero direct summands ([1, Lemma 2.25]), so we can also suppose that \(Y\) is a direct summand of \(U_{\mathcal{Y}}=Y\oplus Y^{\prime}\). Using the octahedral axiom, we have the commutative diagram
Since \(Y\in\beta(\mathcal{X})\), the complex \(C\) must be in \(\mathcal{X}\), which implies that \(\mathbb{E}(C,Y)=0\). That is, \(U_{\mathcal{X}}\simeq Y\oplus C\) and \(Y\in\operatorname{add}(U_{\mathcal{X}})\cap\operatorname{add}(U_{\mathcal{Y}} )=\{0\}\). Thus \(\beta(\mathcal{X})\cap\mathcal{Y}=\operatorname{add}(U_{\mathcal{X}})\).
**Lemma 3.8**.: _Let \(\mathcal{X}\) be a contravariantly finite resolving subcategory of \(\mathcal{K}_{\Lambda}\) and let \(U_{\mathcal{X}}\) be as in Corollary 2.9. Then_
\[\beta(\mathcal{X})=\operatorname{Cocone}(\operatorname{add}(U_{\mathcal{X}}), \operatorname{add}(U_{\mathcal{X}}))=\operatorname{thick}(U_{\mathcal{X}}).\]
_In particular, \(\beta(\mathcal{X})\) is a thick subcategory with enough injectives. All the injectives objects of \(\beta(\mathcal{X})\) lie in \(\operatorname{add}U_{\mathcal{X}}\) and all objects in \(\beta(\mathcal{X})\) have injective dimension \(\leq 1\)._
Proof.: Let \(\mathcal{U}_{\mathcal{X}}=\operatorname{add}(U_{\mathcal{X}})\) and take \(U,U^{\prime}\in\mathcal{U}_{\mathcal{X}}\subset\mathcal{Y}\). For every conflation \(U\to U^{\prime}\twoheadrightarrow U^{\prime\prime}\), we have that \(U^{\prime\prime}\in\mathcal{Y}\) since \(\mathcal{Y}\) is closed under cones. Moreover, \(U,U^{\prime}\in\beta(\mathcal{X})\) which is thick, so \(U^{\prime\prime}\in\beta(\mathcal{X})\cap\mathcal{Y}\), the later being equal to \(\mathcal{U}_{\mathcal{X}}\) by Lemma 3.7. Thus, \(\mathcal{U}_{\mathcal{X}}\) is a presilting subcategory that is closed under cones. By Proposition 2.3 we get that
\[\operatorname{thick}(U_{\mathcal{X}})=\mathcal{U}_{\mathcal{X}}^{\vee}.\]
On the other hand, Proposition 3.6 tells us that \(\mathcal{X}=\operatorname{Cocone}(\mathcal{U},\mathcal{U})\). So for every \(X\in\beta(\mathcal{X})\subset\mathcal{X}\) there exists a conflation
\[X\rightarrowtail U_{0}\twoheadrightarrow U_{1}\]
where \(U_{i}\in\mathcal{U}\) for \(i=0,1\). We know that there exists \(U_{0}^{\mathcal{X}}\in\mathcal{U}_{\mathcal{X}}\) and \(U_{0}^{\mathcal{Y}}\in\mathcal{U}_{\mathcal{Y}}\) such that \(U_{0}\simeq U_{0}^{\mathcal{X}}\oplus U_{0}^{\mathcal{Y}}\). Since \(U_{0}^{\mathcal{Y}}\) is in \(\operatorname{add}(\mathcal{U}_{\mathcal{Y}})\), there exists \(V\in\operatorname{add}(\mathcal{U}_{\mathcal{Y}})\) and \(m\in\mathbb{Z}_{\geq 0}\) such that \(U_{0}^{\mathcal{Y}}\oplus V\simeq U_{\mathcal{Y}}^{\oplus m}\). We get a conflation \(X\totail U_{0}^{\mathcal{X}}\oplus U_{\mathcal{Y}}^{\oplus m}\twoheadrightarrow U _{1}\oplus V\). Applying the octahedral axiom, we get the commutative diagram
\(\begin{CD}X@>{}>{}>U_{0}^{\mathcal{X}}\oplus U_{\mathcal{Y}}^{\oplus m}@>{}>{}>U_{1} \oplus V\\ @V{}V{\left(\begin{smallmatrix}\operatorname{Id}_{v_{\delta}^{\mathcal{X}}}&0 \\ 0&u^{\oplus m}\end{smallmatrix}\right)}V\\ X@>{}>{}>U_{0}^{\mathcal{X}}\oplus U_{\mathcal{X}}^{\oplus m}@>{}>{}>C\end{CD}\)
Since \(X,U_{0}^{\mathcal{X}}\oplus U_{\mathcal{X}}^{\oplus m}\in\beta(X)\), the complex \(C\) must lie in \(\beta(\mathcal{X})\), since it is a thick subcategory of \(\mathcal{K}_{\Lambda}\). Moreover, \(C\in\mathcal{Y}\), because \(\mathcal{Y}\) is closed under extensions and contains \(U,V\) and \(\Lambda[1]\). This implies that \(C\in\beta(\mathcal{X})\cap\mathcal{Y}=\mathcal{U}_{\mathcal{X}}\). In particular, \(X\in\operatorname{Cocone}(\mathcal{U}_{\mathcal{X}},\mathcal{U}_{\mathcal{X}})\). We conclude that \(\beta(\mathcal{X})\subset\operatorname{Cocone}(\mathcal{U}_{\mathcal{X}}, \mathcal{U}_{\mathcal{X}})\subset\mathcal{U}_{\mathcal{X}}^{\vee}=\operatorname {thick}(U_{\mathcal{X}})\). Since \(\operatorname{thick}(U_{\mathcal{X}})\) is the smallest thick subcategory containing \(U_{\mathcal{X}}\), \(\beta(\mathcal{X})=\operatorname{Cocone}(\mathcal{U}_{\mathcal{X}},\mathcal{U }_{\mathcal{X}})=\operatorname{thick}(U_{\mathcal{X}})\). We know show that any \(U\in\mathcal{U}_{\mathcal{X}}\) is an injective object in \(\beta(\mathcal{X})\). Consider a conflation \(U\rightarrowtail Y\twoheadrightarrow X\) with \(Y,X\in\beta(\mathcal{X})\), then there must exists a conflation \(X\rightarrowtail U^{\prime}\twoheadrightarrow U^{\prime\prime}\) with \(U^{\prime},U^{\prime\prime}\in\mathcal{U}_{\mathcal{X}}\). We can find \(A\in\mathcal{K}_{\Lambda}\) such that the follow diagram
commutes. Since \(\mathbb{E}(U,U^{\prime})=0\), the second line splits and \(A\in\mathcal{U}_{\mathcal{X}}\). That is, there exists \(h:A\to U\) such that \(h\circ f^{\prime}=\operatorname{Id}_{U}\), which in turn implies that \((h\circ g)\circ f=h\circ(f\circ g)=h\circ f^{\prime}=\operatorname{Id}_{U}\). We conclude that \(f\) is a section, so \(U\rightarrowtail Y\twoheadrightarrow X\) splits, and \(U\) must be injective. That all injective objects are in \(\mathcal{U}_{\mathcal{X}}\) follows directly from the fact that \(\beta(\mathcal{X})=\operatorname{Cocone}(\mathcal{U}_{\mathcal{X}},\mathcal{U }_{\mathcal{X}})\). This finishes the proof.
**Proposition 3.9**.: _Let \(\mathcal{T}\subset\mathcal{K}\) be a thick subcategory of an hereditary extriangulated category. Then, \(\mathcal{T}\) has enough injectives if and only if there exist a presilting subcategory \(\mathcal{U}\subset\mathcal{K}\) such that \(\mathcal{U}\) is closed under cones and \(\mathcal{T}=\operatorname{thick}(\mathcal{U})\)._
Proof.: Suppose \(\mathcal{T}\) has enough injectives and let \(\mathcal{U}=\operatorname{inj}\mathcal{K}\), then \(\mathcal{T}=\mathcal{U}^{\vee}\). Since \(\mathbb{E}(\mathcal{T},\mathcal{U})=0\), in particular we have that \(\mathbb{E}(\mathcal{U},\mathcal{U})=0\), so \(\mathcal{U}\) is presilting. For any conflation \(U\rightarrowtail U^{\prime}\to X\) with \(U,U^{\prime}\in\mathcal{U}\), we must have that \(X\in\mathcal{T}\) since \(\mathcal{T}\) is thick. Moreover, \(U\) is injective, so the conflation must split and \(X\in\mathcal{U}\), which in turn implies that \(\mathcal{U}\) is closed under cones. We conclude that \(\mathcal{T}=\mathcal{U}^{\vee}=\operatorname{thick}(\mathcal{U})\).
Conversely, suppose that \(\mathcal{T}=\operatorname{thick}(\mathcal{U})\), where \(\mathcal{U}\) is presilting and closed under cones. To prove the result, it suffices to show that every \(U\in\mathcal{U}\) is injective. Indeed, since \(\mathcal{U}\) is closed under cones, we have that \(\mathcal{U}^{\vee}=\operatorname{thick}(\mathcal{U})=\mathcal{T}\), so any object in \(\mathcal{T}\) can be approximated by objects in \(\mathcal{U}\). Let \(U\in\mathcal{U}\) and take a conflation \(U\rightarrowtail X\twoheadrightarrow Y\). Since \(\mathcal{T}=\mathcal{U}^{\vee}\), there exist a conflation \(X\rightarrowtail U^{\prime}\twoheadrightarrow X^{\prime}\) with \(U^{\prime}\in\mathcal{U}\) and \(X^{\prime}\in\mathcal{T}\). Then, there exist \(A\in\mathcal{T}\) and a commutative diagram
where the second line is a conflation. But \(\mathcal{U}\) is closed under cones, so \(A\in\mathcal{U}\). Moreover, \(\mathcal{U}\) is presilting, so the second line must split, which implies that \(U\to X\twoheadrightarrow Y\) does as well. We conclude that \(U\) is injective.
**Lemma 3.10**.: _Let \(\mathcal{T}\) be a thick subcategory of \(\mathcal{K}_{\Lambda}\) with enough incentives, then_
\[\beta(\iota(\mathcal{T}))=\mathcal{T}.\]
Proof.: Let \(\mathcal{T}\) a thick subcategory of \(\mathcal{K}_{\Lambda}\) with enough incentives, by Proposition 3.9 we know that there exists a basic presilting object \(U\) such that \(\mathcal{U}=\operatorname{add}(U)\) is closed under cones and \(\mathcal{T}=\operatorname{thick}(\mathcal{U})\). Consider now its Bongartz completion ([1]) \(\overline{U}=U^{\prime}\oplus V\) given by the conflation
\[V_{0}\to U_{0}\twoheadrightarrow\Lambda[1] \tag{3.3}\]
where the deflation \(U_{0}\twoheadrightarrow\Lambda[1]\) is a minimal \(U\)-right approximation of \(\Lambda[1]\), \(\operatorname{add}U_{0}=\operatorname{add}U^{\prime}\subset\operatorname{ add}U\) and \(\operatorname{add}V_{0}=\operatorname{add}V\) with \(V\) and \(U^{\prime}\) basic. Recall that \(\overline{U}\) is silting and that \(U\) is a direct summand of \(\overline{U}\). By construction, \(U_{0}\twoheadrightarrow\Lambda[1]\) is also a minimal \(\overline{U}\)-right approximation and \(\operatorname{add}U^{\prime}\cap\operatorname{add}V=\{0\}\). Now let \(W\in\operatorname{add}U/\operatorname{add}U^{\prime}\), such that \(W\) is indecomposable. Since \(W\in\operatorname{add}\overline{U}=\operatorname{add}U^{\prime}\sqcup \operatorname{add}V\), there exists \(W^{\prime}\in\operatorname{add}\overline{U}\) such that \(V_{0}=W\oplus W^{\prime}\). We can find \(A\in\mathcal{K}_{\Lambda}\) and a commutative diagram
such that the second line is a conflation. But \(W\) and \(U_{0}\) lie in \(\operatorname{add}U\) wich is closed under cones, so \(A\in\operatorname{add}U\). Since \(U\) is presilting, the second line must split, in particular \(W\in\operatorname{add}U^{\prime}\cap\operatorname{add}V=\{0\}\). We conclude that \(\operatorname{add}U^{\prime}=\operatorname{add}U\).
Now, let \(\mathcal{X}=(\operatorname{add}\overline{U})^{\vee}\), by Theorem 2.5 and Remark 2.10, we know that \((\mathcal{X},\mathcal{X}^{\perp_{1}})\) is a cotorsion pair and that the deflation in the conflation (3.3) is a minimal \(\mathcal{X}\)-right approximation of \(\Lambda[1]\). Since \(\operatorname{add}U_{0}=\operatorname{add}U\), we have that
\[\beta(\mathcal{X})=\operatorname{Cocone}(U,U)=\operatorname{thick}(U)= \mathcal{T}. \tag{3.4}\]
Finally, we know that \(\iota(\mathcal{T})\) is resolving, that is closed under extensions, direct summands and cocones. Since \(V\in\iota(\mathcal{T})\), then \(\mathcal{X}=(\operatorname{add}\overline{U})^{\vee}\subset\iota(\mathcal{T})\). Moreover, \(\mathcal{T}=\operatorname{Cocone}(U,U)\subset(\operatorname{add}\overline{U}) ^{\vee}\) and both subcategories are extension-closed, so Proposition 3.3 and Lemma 3.4 imply that \(\iota(\mathcal{T})\subset\iota((\operatorname{add}\overline{U})^{\vee})= \iota(\mathcal{X})\subset\mathcal{X}\). This implies that
\[\mathcal{X}=(\operatorname{add}\overline{U})^{\vee}=\iota(\mathcal{T}) \tag{3.5}\]
Putting (3.4) and (3.5) together, we get that
\[\beta(\iota(\mathcal{T}))=\beta(\mathcal{X})=\mathcal{T}\]
which gives the result.
### Linking thick and wide subcategories
The connections between \(\tau\)-tilting theory and stability conditions have been studied by a vast number of authors in the last two decades, resulting in a direct bridge between \(\mathrm{s}\tau\)-tilting modules, torsion classes and semistable subcategories. In this section we propose a notion of semistability for objects in \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) that will allow us to construct a bridge between the bijections of Theorem A and those of Theorem 3.1.
**Definition 3.1** (\(M\)**-semistability**).: Let \(M\in\operatorname{mod}\Lambda\) and \(X=\underset{\mathcal{X}^{0}}{X^{-1}}\in\mathcal{K}^{[-1,0]}(\operatorname{ proj}\Lambda)\). We say that \(X\) is \(M\)**-semistable** if the map \(\operatorname{Hom}(X^{-1},M)\xrightarrow{x^{*}}\operatorname{Hom}(X^{0},M)\) is an isomorphism of \(\Bbbk\)-vector spaces. In particular, since \(\langle[X],[M]\rangle=\dim_{\Bbbk}(\operatorname{Hom}(X^{0},M))-\dim_{ \Bbbk}(\operatorname{Hom}(X^{-1},M))\), if \(X\) is \(M\)-semistable, \(\langle[X],[M]\rangle=0\).
Note that this definition does not depend on the choice of representative of \(X\) in its isomorphism class inside \(\mathcal{K}_{\Lambda}\) thanks to Remark 2.2. In Section 4 we will discuss the geometric origin of this notion, but for now, let us proceed to the proof of the main theorem of this section.
**Definition 3.2**.: Let \(\mathcal{H}\) be a subcategory of \(\operatorname{mod}\Lambda\). We define \(\mathscr{T}(\mathcal{H})\) to be the full subcategory of \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) whose objects are all complexes \(X\) such that \(X\) is \(N\)-semistable \(\forall\ N\in\mathcal{H}\). Similarly, if \(\mathcal{C}\subset\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\), we define
\[\mathscr{W}(\mathcal{C})=\{N\in\operatorname{mod}\Lambda\ |\ X\text{ is $N$- semistable $\forall\ X\in\mathcal{C}$}\}.\]
**Proposition 3.11**.: _Let \(\Lambda\) be a finite-dimensional \(\Bbbk\)- algebra, then_
1. \(\forall\ \mathcal{H}\subset\operatorname{mod}\Lambda\)_,_ \(\mathscr{T}(\mathcal{H})\subset\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) _is a thick subcategory._
2. \(\forall\mathcal{C}\subset\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_,_ \(\mathscr{W}(\mathcal{C})\subset\operatorname{mod}\Lambda\) _is a wide subcategory._
3. _For any subcategories_ \(\mathcal{H}\subset\operatorname{mod}\Lambda\) _and_ \(\mathcal{C}\subset\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_,_ \[\mathcal{H}\subset\mathscr{W}(\mathscr{T}(\mathcal{H})),\] \[\mathcal{C}\subset\mathscr{T}(\mathscr{W}(\mathcal{C})).\]
4. _For any subcategories_ \(\mathcal{H}\subset\operatorname{mod}\Lambda\) _and_ \(\mathcal{C}\subset\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\)_._ \[\mathscr{T}(\mathcal{H})=\mathscr{T}(\operatorname{wide}( \mathcal{H}))\] \[\mathscr{W}(\mathcal{C})=\mathscr{W}(\operatorname{thick}( \mathcal{C}))\]
Proof.: We only prove (1) for \(\mathcal{H}=\{M\}\) with \(M\in\operatorname{mod}\Lambda\). The result follows noting that \(\mathscr{T}(\mathcal{H})=\bigcap_{M\in\mathcal{H}}\mathscr{T}(M)\). The statement in (2) follows using similar arguments.
_Closure under extensions:_ Let \(X\rightarrowtail Y\twoheadrightarrow Z\) be a conflation and suppose \(X\) and \(Z\) are in \(\mathscr{T}(M)\). As we have seen in Remark 2.2, we can find \(P\in\operatorname{proj}\Lambda\) such that \(X\rightarrowtail Y\oplus\underset{\mathcal{U}}{P}\twoheadrightarrow Z\) is a conflation in \(\mathcal{C}^{[-1,0]}(\operatorname{proj}\Lambda)\). Applying \(\operatorname{Hom}(-,M)\) to \(P\)
the associated exact sequences, we get the commutative diagram
\[\begin{CD}0@>{}>{}>{}>\operatorname{Hom}(Z^{0},M)@>{}>{}>{}>\operatorname{Hom}(Y^{0} \oplus P,M)@>{}>{}>\operatorname{Hom}(X^{0},M)@>{}>{}>0\\ @V{}V{z^{*}}V@V{(y\oplus\operatorname{Id}_{P})^{*}}V{x^{*}}V\\ 0@>{}>{}>\operatorname{Hom}(Z^{-1},M)@>{}>{}>\operatorname{Hom}(Y^{-1}\oplus P,M)@>{}>{}>\operatorname{Hom}(X^{-1},M)@>{}>{}>0\end{CD} \tag{3.6}\]
where \(z^{*}\) and \(x^{*}\) are isomorphisms by hypothesis. Since \((y\oplus\operatorname{Id}_{P})^{*}=y^{*}\oplus\operatorname{Id}_{\operatorname{ Hom}(P,M)}\), \(y^{*}\) is an isomorphism as well.
_Closure by cones and cocones:_ Take a conflation as before and suppose that \(X\) and \(Y\) are \(M\)-semistable. In particular, \(0=\langle[Y],[M]\rangle=\langle[X],[M]\rangle+\langle[Z],[M]\rangle=0+\langle [Z],[M]\rangle\). As before, there exists \(P\in\operatorname{proj}\Lambda\) and a commutative diagram like (3.6). Since \((y\oplus\operatorname{Id}_{P})^{*}\) and \(x^{*}\) are isomorphisms we can deduce that \(z^{*}\) is injective. Moreover, \(z^{*}\) is a linear map between vector spaces of same dimension, so it must be bijective. The proof of \(\mathscr{T}(M)\) being closed by cocones is dual.
_Closure under direct summands:_ Let \(X\in\mathscr{T}(M)\) such that \(X\simeq X^{\prime}\oplus X^{\prime\prime}\). Since we have inflations \(X^{\prime}\to X\) and \(X^{\prime\prime}\to X\), then \(\langle[X^{\prime}],[M]\rangle,\langle[X^{\prime\prime}],[M]\rangle\geq 0\) (see Theorem 4.6 for more details). But \(0=\langle[X],[M]\rangle=\langle[X^{\prime}],[M]\rangle+\langle[X^{\prime\prime }],[M]\rangle\), so both terms must be equal to \(0\). Take now the conflation \(X^{\prime\prime}\to X\twoheadrightarrow X^{\prime}\) and \(P\in\operatorname{proj}\Lambda\) such that we have a commutative diagram like (3.6). This time around, \(x^{*}\oplus\operatorname{Id}_{P}^{*}\) is an isomorphism, which implies that \((x^{\prime})^{*}\) is bijective since it is injective and \(\langle[X^{\prime}],[M]\rangle=0\).
We proceed to prove (3). Since for any subcategory \(\mathcal{H}\subset\operatorname{mod}\Lambda\), \(\mathscr{T}(\mathcal{H})=\bigcap_{M\in\mathcal{H}}\mathscr{T}(M)\), the map \(\mathscr{T}\) reverses inclusions. Take \(M\in\mathcal{H}\), then all \(X\in\mathscr{T}(\mathcal{H})\) satisfy that \(X\) is \(M\)-semistable, that is, \(M\in\mathscr{W}(\mathscr{T}(\mathcal{H}))\). The rest of the statement follows from similar arguments.
Lastly, consider \(\mathcal{C}\subset\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). Since \(\mathscr{W}\) reverses inclusions and \(\mathcal{C}\subset\operatorname{thick}(\mathcal{C})\), then \(\mathscr{W}(\operatorname{thick}(\mathcal{C}))\subset\mathscr{W}(\mathcal{C})\). We have seen that \(\mathscr{T}(\mathscr{W}(\mathcal{C}))\) is a thick subcategory that contains \(\mathcal{C}\), so \(\operatorname{thick}(\mathcal{C})\subset\mathscr{T}(\mathscr{W}(\mathcal{C}))\). Applying (3) to \(\mathcal{H}=\mathscr{W}(C)\), we have the following inclusions :
\[\mathscr{W}(\mathcal{C})\subset\mathscr{W}(\mathscr{T}(\mathscr{W}(\mathcal{C })))\subset\mathscr{W}(\operatorname{thick}(\mathcal{C}))\subset\mathscr{W}( \mathcal{C}).\]
Thus \(\mathscr{W}(\mathcal{C})=\mathscr{W}(\operatorname{thick}(\mathcal{C}))\). That \(\mathscr{T}(\mathcal{H})=\mathscr{T}(\operatorname{wide}(\mathcal{H}))\) for any subcategory \(\mathcal{H}\) of \(\operatorname{mod}\Lambda\) follows from the same argument. This proves (4).
For any extriangulated category \(\mathcal{K}\), we will denote by \(\operatorname{inj-thick}\mathcal{K}\) the set of all thick subcategories of \(\mathcal{K}\) that have enough injectives. We are now ready to state the main theorem of this section.
**Theorem 3.12**.: _Let \(\Lambda\) be a finite-dimensional \(\Bbbk\)-algebra and take \(\mathcal{K}_{\Lambda}\) as before. There exist well defined maps_
_such that, when restricted to thick subcategories with enough injectives and left finite wide subcategories, they make the following diagram commute_
_In particular, \(\mathscr{W}\) and \(U\in\operatorname{silt}\mathcal{K}_{\Lambda}\mapsto\operatorname{thick}(U_{\rho}) \in\operatorname{inj-thick}\mathcal{K}_{\Lambda}\) are bijective._
Proof.: The first part of the statement follows from Proposition 3.11. We show that the center square of the diagram is commutative, that the upper triangle is as well follows from Lemma 3.8. Let \((\mathcal{X},\mathcal{Y})\) be a complete cotorsion pair. We prove that \(\mathscr{W}(\beta(\mathcal{X}))=\alpha(H^{0}(\mathcal{Y}))\). Proposition 2.8 implies that \(H^{0}(\mathcal{Y})=\operatorname{Fac}(H^{0}(U_{\mathcal{X}}\oplus U_{ \mathcal{Y}}))\). By [17, Lemma 3.8] and [20, Lemma 3.5] we have that \(\alpha(\operatorname{Fac}(H^{0}(U_{\mathcal{X}}\oplus U_{\mathcal{Y}})))= \mathscr{W}(U_{\mathcal{X}})\). Moreover, \(\mathscr{W}(\beta(\mathcal{X}))=\mathscr{W}(\operatorname{thick}(U_{\mathcal{ X}}))\) by Lemma 3.8, and \(\mathscr{W}(U_{\mathcal{X}})=\mathscr{W}(\operatorname{thick}(U_{\mathcal{X}}))\) by Proposition 3.11(4). Putting all these equalities together we get that
\[\mathscr{W}(\beta(\mathcal{X}))=\mathscr{W}(\operatorname{thick}(U_{\mathcal{ X}}))=\mathscr{W}(U_{\mathcal{X}})=\alpha(\operatorname{Fac}(H^{0}(U_{\mathcal{X}} \oplus U_{\mathcal{Y}})))=\alpha(H^{0}(\mathcal{Y}))\]
which gives us the result.
**Example 3.3**.: Let \(Q\) be the quiver \(1\xrightarrow{\alpha}2\xrightarrow{\beta}3\). Then the Auslander-Reiten quiver of \(\operatorname{mod}\Bbbk Q\) is the following
All minimal projective presentations of indecomposable modules are indecomposable objects in \(\mathcal{K}_{\Bbbk Q}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Bbbk Q)\), as are the objects \(P\to 0\) where \(P\) is an indecomposable projective module. Then, the AR quiver of \(\mathcal{K}_{\Bbbk Q}\) is given by
In Table 1, we show all silting objects, their respective cotorsion pairs, thick subcategories, wide subcategory and torsion class given by the bijections in Corollary 1.1. The dots correspond to the objects depicted in the AR quiver of \(\mathcal{K}_{\Bbbk Q}\) (or \(\operatorname{mod}\Bbbk Q\)), and the shaded areas correspond to the subcategory additively generated
by the dots they contain. In the second column, the blue shaded area in each figure depict the subcategory \(\mathcal{X}\), while the orange shaded area plays the role of \(\mathcal{Y}\) for the cotorsion pairs \((\mathcal{X},\mathcal{Y})\) they illustrate.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\operatorname{silt}\mathcal{K}\) & \(\operatorname{cotor}\mathcal{K}\) & \(\operatorname{thick}\mathcal{K}\) & \(\operatorname{wide}\Lambda\) & \(\operatorname{tors}\Lambda\) \\ \hline \(\operatorname{silt}\mathcal{K}\) & \(\operatorname{cotor}\mathcal{
**Example 3.4** (\(\mathscr{W}\) is not a bijection in general).: Consider now the Kronecker quiver
\[Q=\ 1\xrightarrow[\beta]{\alpha}2\.\]
Let \(\mathcal{C}\) be thick subcategory of \(\mathcal{K}_{\Bbbk Q}\) whose objects are the projective presentations of regular modules. In particular, any object \(X\in\mathcal{C}\) satisfies that \([X]=n[P_{1}]-n[P_{2}]=(n,-n)\) for some \(n\in\mathbb{Z}_{>0}\). Let \(M\neq 0\) be an indecomposable module in \(\mathscr{W}(\mathcal{C})\subset\operatorname{mod}\Bbbk Q\) with minimal projective resolution \(X_{M}\in\mathcal{K}_{\Bbbk Q}\). Since \(\langle[X],M\rangle=0\) for all \(X\in\mathcal{C}\), \(M\) cannot be pre-projetive or pre-injective. If \(M\) is regular, then \(X_{M}\in\mathcal{C}\), but \(M\) cannot be \(X_{M}\)-semistable. We conclude that \(\mathscr{W}(\mathcal{C})=\{0\}=\mathscr{W}(\mathcal{K}_{\Bbbk Q})\), and \(\mathscr{W}\) is not a bijection in general.
## 4. Geometric interpretation
The goal of this section is to give a geometric interpretation to Definition 3.1 and relate it to the theory of stability conditions. Throughout this section we fix \(\theta^{-1},\theta^{0}\in\mathbb{Z}_{\geq 0}^{n}\) as well as \(X^{-1}=\bigoplus_{i=0}^{n}P_{i}^{\theta_{i}^{-1}}\) and \(X^{0}=\bigoplus_{i=0}^{n}P_{i}^{\theta_{i}^{0}}\) two projective modules in \(\operatorname{mod}\Lambda\). Let \(R(X^{-1},X^{0})=\operatorname{Hom}_{\Lambda}(X^{-1},X^{0})\), \(A(X^{-1},X^{0})=\operatorname{End}(X^{-1})\times\operatorname{End}(X^{0})^{op}\), and \(G(X^{-1},X^{0})=\operatorname{Aut}(X^{-1})^{op}\times\operatorname{Aut}(X^{0})\subset A (X^{-1},X^{0})\). The group \(G(X^{-1},X^{0})\) acts on the affine space \(R(X^{-1},X^{0})\) via simultaneous multiplication: let \(g=(g_{-1},g_{0})\in G(X^{-1},X^{0})\), where \(g_{-1}\in\operatorname{Aut}(X^{-1})\), \(g_{0}\in\operatorname{Aut}(X^{0})\) and \(x:X^{-1}\to X^{0}\), then \(g\cdot x=g_{0}\cdot x\cdot g_{-1}\). Note that this is a well defined action since we chose to work in \(\operatorname{Aut}(X^{-1})^{op}\). If we were to consider \(\operatorname{Aut}(X^{-1})\) instead, some sign conventions would have to be adjusted. When the context allows it, we will write \(R\) instead of \(R(X^{-1},X^{0})\) and do the same for the groups \(A\) and \(G\).
Since we consider modules over a finite-dimensional \(\Bbbk\)-algebra \(\Lambda\), \(A=A(X^{-1},X^{0})\) is also finite-dimensional and thus it's radical \(N\) coincides with its nil-radical. By the Wedderburn-Artin theorem, we have that
\[A/N\cong\bigoplus_{i=1}^{n}\left(M_{\theta_{i}^{-1}}(D_{i})\times M_{\theta_{i }^{0}}(D_{i})\right)\]
where \(N=\operatorname{rad}(\operatorname{End}(X^{-1}))\times\operatorname{rad}( \operatorname{End}(X^{0}))\) and \(D_{i}=\operatorname{End}(P_{i})/\operatorname{rad}(\operatorname{End}(P_{i})) \simeq\Bbbk\). Using the fact that \(f\in A\) is invertible if and only if its image in \(A/N\) is invertible, we get that
\[G=G(X^{-1},X^{0})=(1_{A}+N)\rtimes\left(\prod_{i=1}^{n}\left(GL_{\theta_{i}^{ -1}}(D_{i})\times GL_{\theta_{i}^{0}}(D_{i})\right)\right).\]
Let \(U=1_{A}+N\), then \(U\) is a normal subgroup of \(G\) and, by definition, all its elements are unipotent. Since \(\Lambda\) is finite-dimensional, there exists \(m\in\mathbb{Z}_{>0}\) and a series of subgroups
\[0\unlhd 1_{A}+N^{m}\unlhd\cdots\unlhd 1_{A}+N^{2}\unlhd 1_{A}+N=U\]
where \(1+N^{i}\) is a normal subgroup of \(1+N^{i-1}\), and such that every quotient is abelian. This shows that \(U\) is solvable and hence, is the unipotent radical of \(G\), since it is closed and connected. Any algebraic group \(G\) over \(\mathbb{C}\) satisfies that \(G=U\rtimes G_{red}\) where \(U\) is its unipotent radical and \(G_{red}\) is reductive. If \(\Bbbk=\mathbb{C}\), we
then get that \(G(X^{-1},X^{0})_{red}\cong\prod_{i=1}^{n}\left(GL_{\theta_{i}^{-1}}(D_{i})\times GL _{\theta_{i}^{0}}(D_{i})\right)\), where \(D_{i}\cong\mathbb{C}\). From now on, we suppose that \(\Bbbk\simeq\mathbb{C}\).
A **character** of \(G\) is a morphism of algebraic groups \(\chi:G\to\mathbb{C}^{*}\). If \(G\) is unipotent, every character is trivial. In particular, when \(G=U\rtimes G_{red}\), the set of characters of \(G\) can be identified with that of \(G_{red}\). In our case, every character \(\chi:\prod_{i=1}^{n}\left(GL_{\theta_{i}^{-1}}(\mathbb{C})\times GL_{\theta_{i }^{0}}(\mathbb{C})\right)\to\mathbb{C}^{*}\) is given by \(\chi\left((g_{-1}^{i},g_{0}^{i})_{1\leq i\leq n}\right)=\prod_{i=1}^{n} \det(g_{-1}^{i})^{d_{i}^{-1}}\cdot\det(g_{0}^{i})^{d_{i}^{0}}\), and thus, the group of characters of \(G\) is isomorphic to \(\mathbb{Z}^{2n}\). For \(\vec{d}=(d^{-1},d^{0})\in\mathbb{Z}^{2n}\) we will write \(\chi_{\bar{d}}\) for the character given by the previous formula.
Dually, a group morphism \(\lambda:\mathbb{C}^{*}\to G\) is called a **one-parameter subgroup** or a **co-character** of \(G\). They are all of the form \(u\hat{\lambda}u^{-1}\) for \(u\in U\) and \(\hat{\lambda}:\mathbb{C}^{*}\to G_{red}\). Indeed, let \(H=ker(\lambda)=\lambda^{-1}(1_{G})\). \(H\) is a closed subgroup of \(\mathbb{C}^{*}\), hence, \(\lambda(\mathbb{C}^{*})\cong\mathbb{C}^{*}/H\) is a reductive subgroup of \(G=U\rtimes G_{red}\) since \(\mathbb{C}^{*}\) is. As we are working in characteristic \(0\), by [12, Proposition 4.2], there exists \(u\in U\) such that \(u^{-1}\lambda(\mathbb{C}^{*})u\leq G_{red}\). Then \(\hat{\lambda}=u^{-1}\lambda u:\mathbb{C}^{*}\to G_{red}\) satisfies the property. Moreover, since \(G_{red}\) is a product of \(GL_{k}(\mathbb{C})\)'s, all one parameter subgroups are of the form \(\lambda=u\tilde{\lambda}u^{-1}\), where \(\tilde{\lambda}\) is a one-parameter subgroup with image in a maximal torus of \(G_{red}\).
The composition \(\chi\circ\lambda:\mathbb{C}^{*}\to\mathbb{C}^{*}\) gives us a paring \(\langle-,-\rangle\) between the set of one parameter subgroups and the character group. Indeed, since every algebraic group automorphism of \(\mathbb{C}^{*}\) is of the form \(t\mapsto t^{m}\) for some \(m\in\mathbb{Z}\), we define \(\langle\lambda,\chi\rangle\) to be the integer \(m\) such that \(\chi\circ\lambda(t)=t^{m}\).
### Geometric Invariant Theory
Geometric Invariant Theory (GIT) was developed by D. Mumford as a method for constructing quotients by group actions on algebraic varieties. One of the key tools in this theory is the notion of semi-invariant and semistability. In this section we study those tools for the action of \(G(X^{-1},X^{0})\) over the vector space \(R(X^{-1},X^{0})\). For more on the generalities of GIT, see [10].
Let \(R=R(X^{-1},X^{0})\) and \(G=G(X^{-1},X^{0})\). A **semi-invariant**\(f\in\mathbb{C}[\mathbb{R}]\) of weight \(\chi\in\operatorname{Hom}(G,\mathbb{C}^{*})\) is a regular function such that \(f(g\cdot x)=\chi(g)f(x)\) for all \(g\in G\) and all \(x\in R\). For a non-trivial character \(\chi\), define the graded ring
\[SI(R)^{G,\chi}=\bigoplus_{i\geq 0}\mathbb{C}[R]^{G,\chi^{i}},\]
where the \(\mathbb{C}[R]^{G,\chi^{i}}\) denote the set of semi-invariant functions over \(R\) of weight \(\chi^{i}\).
**Definition 4.1**.: Let \(x\in R\) and \(\chi\) a character of \(G\). We say that \(x\) is \(\chi\)**-semistable** if there exists \(m\in\mathbb{Z}_{>0}\) and \(f\in\mathbb{C}[R]^{G,\chi^{m}}\) such that \(f(x)\neq 0\). We denote \(R^{\chi,ss}\) the open subset of semistable points.
The points of the scheme \(\operatorname{Proj}(SI(R)^{G,\chi})\) should correspond (up to GIT- equivalence) to orbits of \(\chi\)-semistable points. However, since in our setting \(G\) is not reductive in general, the ring \(SI(R)^{G,\chi}\) of semi-invariants is not necessarily finitely generated, and thus \(\operatorname{Proj}(SI(R)^{G,\chi})\) is not a variety in general, even as \(R\) is an affine space. Moreover, the Hilbert-Mumford numerical criterion to determine weather a point \(x\) is \(\chi\)-semistable does not necessarily hold. However, we seek to describe features of the characters for which semistable points exist.
Let \(G_{0}=\{g\ |\ g\cdot x=x\quad\forall x\in R\}\), and suppose that \(x\in R\) is \(\chi_{\bar{d}}\)-semistable for some \(\bar{d}\in\mathbb{Z}^{2n}\). Let \(f\in\mathbb{C}[R]^{G,\chi_{\bar{d}}^{m}}\) such that \(f(x)\neq 0\) with \(m\geq 1\), then \(f(g\cdot x)=\chi(g)_{\bar{d}}^{m}f(x)=f(x)\) for any \(g\in G_{0}\). That is \(\chi_{\bar{d}}^{m}(G_{0})\equiv 1\). In particular for \(\Delta=\left\{(t^{-1}\cdot\mathrm{Id}_{\theta_{i}^{-1}},t\cdot\mathrm{Id}_{ \theta_{i}^{0}})\ |\ t\in\mathbb{C}^{*}\right\}_{1\leq i\leq n}\cong\mathbb{C}^{*}\subseteq G_{0}\), we must have that
\[\chi^{m}(\Delta) =\left(\prod_{i=1}^{n}\det(t^{-1}\cdot\mathrm{Id}_{\theta_{i}^{-1 }})^{d_{i}^{-1}}\det(t\cdot\mathrm{Id}_{\theta_{i}^{0}})^{d_{i}^{0}}\right)^{ m}=\] \[\qquad t^{m\left(-\sum_{i=1}^{n}\theta_{i}^{-1}d_{i}^{-1}+\sum_{ i=1}^{n}\theta_{i}^{0}d_{i}^{0}\right)}=1\]
which in turn implies that
\[-\sum_{i=1}^{n}\theta_{i}^{-1}d_{i}^{-1}+\sum_{i=1}^{n}\theta_{i}^{0}d_{i}^{0} =\langle(-[X^{-1}],[X^{0}]),(d^{-1},d^{0})\rangle=\langle(-[X^{-1}],[X^{0}]), \chi\rangle=0.\]
Let \(x\in R=R(X^{-1},X^{0})\) and consider its orbit \(G\cdot x\subset R\). Note that \(G\cdot x\) can be identified with the isomorphism class of \(x\) as an object in \(C^{[-1,0]}(\operatorname{proj}\Lambda)\). The following proposition will gives us a link between inflations in \(C^{[-1,0]}(\operatorname{proj}\Lambda)\) and semistability.
**Proposition 4.1**.: _Let \(x\in R\) with associated \(X=\underset{X^{0}}{\overset{X^{-1}}{\downarrow}}\in C^{[-1,0]}(\operatorname{ proj}\Lambda)\). If \(x\) if \(\chi\)-semistable, then_
1. \(\langle(-[X^{-1}],[X^{0}]),\chi\rangle=0\)_;_
2. _For any inflation_ \(Y\mapsto X\) _in_ \(C^{[-1,0]}(\operatorname{proj}\Lambda)\)_, we must have that_ \[\langle(-[Y^{-1}],[Y^{0}]),\chi\rangle\geq 0.\]
Proof.: Suppose \(x\) is \(\chi\)-semistable with respect to the \(G\)-action. By the previous discussion we have (1). Let \(f\) be a \(\chi^{m}\) a semi-invariant for \(m\geq 1\) such that \(f(x)\neq 0\). If \(\lambda\) is a one-parameter subgroup of \(G\), we must have that for every \(t\in\mathbb{C}^{*}\)
\[f(\lambda(t)\cdot x)=\chi^{m}(\lambda(t))f(x)=t^{m\langle\lambda,\chi\rangle}f (x)\]
Suppose that \(\lim_{t\to 0}\lambda(t)\cdot x\) exists and it is equal to \(x^{\prime}\in R\), then
\[f(x^{\prime})=\lim_{t\to 0}t^{n\langle\chi,\lambda\rangle}f(x).\]
Since \(f(x)\neq 0\), we must have that \(\langle\lambda,\chi\rangle\geq 0\). The statement in (2) will follow from noting that one-parameter subgroups such that \(\lim_{t\to 0}\lambda(t)\cdot x\) exists correspond to inflations of \(X\) in \(\mathcal{C}^{[-1,0]}(\operatorname{proj}\Lambda)\). As we have seen before, \(\lambda(t)=u\tilde{\lambda}(t)u^{-1}\) where \(\tilde{\lambda}\) is a one-parameter subgroup with image in a maximal torus of \(G\). Explicitly,
\[\lambda(t)=(\lambda_{-1}(t),\lambda_{0}(t))=(g_{-1}\ \tilde{\lambda}_{-1}(t)\ (g_{-1})^{-1},g_{0}\ \tilde{\lambda}_{0}(t)\ g_{0}^{-1})=\] \[=\left(\begin{array}{ccccc}\left(\begin{array}{ccccc}\left( \begin{array}{ccccc}t^{\lambda_{1,1}^{-1}}&...&0\\ \vdots&\ddots&\vdots\\ 0&...&t^{\lambda_{-1}^{-1}}\end{array}\right)&0&...&0&0\\ &\vdots&\ddots&\vdots&\ddots&\vdots&\ddots&\vdots\\ &&&0&...&\left(\begin{array}{ccccc}t^{\lambda_{1,1}^{-1}}&...&0\\ \vdots&\ddots&\vdots\\ 0&...&t^{\lambda_{-1}^{-1}},i\end{array}\right)&...&0&(g_{-1})^{-1},\\ &\vdots&\ddots&\vdots&\ddots&\vdots&\ddots&\vdots\\ &&&&&0&...&0&\left(\begin{array}{ccccc}t^{\lambda_{1,n}^{-1}}&...&0\\ \vdots&\ddots&\vdots\\ 0&...&t^{\lambda_{n-1}^{-1}},n\end{array}\right)\end{array}\right)\\ &\left(\begin{array}{ccccc}\left(\begin{array}{ccccc}t^{\lambda_{1,1}^{0}}&... &0\\ \vdots&\ddots&\vdots\\ 0&...&t^{\lambda_{n-1}^{0}},n\end{array}\right)&0&...&0&0&\\ &\vdots&\ddots&\vdots&\ddots&\vdots&\ddots&\vdots&\\ &&0&...&\left(\begin{array}{ccccc}t^{\lambda_{1,i}^{0}}&...&0\\ \vdots&\ddots&\vdots\\ 0&...&t^{\lambda_{n-1}^{0}},i\end{array}\right)&...&0&\left(g_{0}^{-1}\right) \\ &\vdots&\ddots&\vdots&\ddots&\vdots&\ddots&\vdots&\\ &0&0&...&0&0&\left(\begin{array}{ccccc}t^{\lambda_{1,n}^{0}}&...&0\\ \vdots&\ddots&\vdots\\ 0&...&t^{\lambda_{n-1}^{0}},n\end{array}\right)\end{array}\right)\end{array}\right)\]
where \(\lambda_{-1}(t),\ g_{-1}\in\mathrm{Aut}(X^{-1})\), and \(\lambda_{0}(t),\ g_{0}\in\mathrm{Aut}(X^{0})\) for every \(t\in\mathbb{C}^{*}\). Here, \(\lambda_{i,i}^{\varepsilon}\) is the weight corresponding to the \(l\)-th copy of the projective indecomposable \(P_{i}\) inside of \(X^{\varepsilon}\) with \(\varepsilon\in\{-1,0\}\), \(1\leq i\leq n\) and \(1\leq l\leq\theta_{i}^{\varepsilon}\). We get that, for \(\varepsilon\in\{-1,0\}\), \(X^{\varepsilon}=\bigoplus_{m\in\mathbb{Z}}X_{m}^{i}\) where each \(X_{m}^{i}\) is the direct sum of indecomposable projectives summands \(Q\) of \(X^{\varepsilon}\) such that \(\lambda_{\varepsilon}(t)(Q)=t^{m}Q\). So, for any \(m,n\in\mathbb{Z}\), we have the following commutative diagram :
Since the limit when \(t\to 0\) exists, then \(\pi_{m}^{0}\cdot x|_{X_{n}^{-1}}\) must be zero when \(n+m<0\). Let \(X_{\leq n}^{-1}=\bigoplus_{i\leq n}X_{-i}^{-1}\) and \(X_{\leq n}^{0}=\bigoplus_{i\leq n}X_{i}^{0}\). Then, for every \(n\in\mathbb{Z}\), \(x\) defines the
\(X_{\leq n}^{-1}\) objects \(X_{\leq n}:=\left\downarrow\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
**Remark 4.2**.: Note that if every inflation satisfies (2) from the previous proposition, then \(\langle\lambda,\chi\rangle\geq 0\) for every one-parameter subgroup \(\lambda\) such that the limit \(\lim_{t\to 0}\lambda(t)\cdot x\) exists. If \(G\) were reductive, this would imply that \(x\) is \(\chi\)-semistable as in [10].
### Determinantal invariants
To determine whether Proposition 4.1 has a partial converse, one could try to explicitly describe its ring of semi-invariants \(SI(R)^{G,\chi}\) for a given character \(\chi\).
**Definition 4.3** ([10]).: Let \(X^{-1},X^{0}\in\operatorname{proj}\Lambda\) and \(M\in\operatorname{mod}\Lambda\) such that \(\ \langle[X^{0}]-[X^{-1}],[M]\rangle=0\), that is, such that \(\dim_{\Bbbk}(\operatorname{Hom}(X^{-1},M))=\dim_{\Bbbk}(\operatorname{Hom }(X^{0},M)\). Let \(R(X^{-1},X^{0})\) be as defined in the beginning of Section 4. We denote by \(s(-,M)\) the regular function such that, for any \(x\in R(X^{-1},X^{0})\)
\[s(x,M)=\det\left(\operatorname{Hom}(X^{0},M)\xrightarrow{-\circ x=x^{*}} \operatorname{Hom}(X^{-1},M)\right).\]
We say that the map \(s(-,M)\) is a **determinantal semi-invariant**.
**Remark 4.4**.: Let \(M\in\operatorname{mod}\Lambda\) and let \(\dim M\) be its class in \(K_{0}(\operatorname{mod}\Lambda)\). Note that the value of the map \(s(-,M)\) depends on the choice of basis for the \(\operatorname{Hom}(X^{0},M)\) and \(\operatorname{Hom}(X^{-1},M)\) spaces. Once we fix a basis for \(\operatorname{Hom}(P_{i},S_{j})\) for all \(1\leq i,j\leq 0\), we know that for any \(P\in\operatorname{proj}\Lambda\) with a given decomposition \(P=\bigoplus_{i=1}^{n}P_{i}^{e_{i}}\in\operatorname{proj}\Lambda\) and \(M\in\operatorname{mod}\Lambda\), then
\[\operatorname{Hom}(P,M)=\bigoplus_{i=1}^{n}\bigoplus_{j=1}^{i}\operatorname{ Hom}(P_{i},S_{j})^{e_{i}(\dim M)_{j}}\]
and thus basis is given by that of the \(\operatorname{Hom}(P_{i},S_{j})\). However, we mostly care about the non-annihilation of these functions, and since base change doesn't affect this property, the choice of basis is mostly omitted.
**Proposition 4.2**.: [11, Proposition 5.13] _Let \(X^{-1},X^{0}\in\operatorname{proj}\Lambda\) and \(M\in\operatorname{mod}\Lambda\). The regular function \(s(-,M)\) is a \(G(X^{-1},X^{0})\) semi-invariant over \(R(X^{-1},X^{0})\) with associated character \(\chi_{([M],[M])}\)._
Proof.: By hypothesis \(\dim_{\Bbbk}\operatorname{Hom}(X^{0},M)=\dim_{\Bbbk}\operatorname{Hom}(X^{ -1},M)\), thus \(s(s,M)\) is well defined for any \(x\in R(X^{-1},X^{0})\). Let \(g=(g_{-1},g_{0})\in G(X^{-1},X^{0})=\operatorname{Aut}(X^{-1})\times \operatorname{Aut}(X^{0})\), then
\[s(g\cdot x,M)=\det\left(\operatorname{Hom}(X^{0},M)\xrightarrow{(g_{0}\cdot x \cdot g_{-1})^{*}}\operatorname{Hom}(X^{-1},M)\right)=\]
\[=\det(g_{0}^{*})\cdot s(s,M)\cdot\det(g_{-1}^{*}).\]
The regular function \(\chi(g)=\det(g_{0}^{*})\cdot\det(g_{-1}^{*})\) defines a character for the action of \(G(X^{-1},X^{0})\), and as such, it factors through \(G(X^{-1},X^{0})_{red}\). Recall \((g_{0})_{red}=(g_{0}^{i})\in\prod_{i=1}^{n}GL_{g_{0}^{i}}(D_{i})\). Since \(\operatorname{Hom}(P_{i},M)\cong M_{i}\ \forall\ 1\leq i\leq n\), where \(M_{i}\) is the vector space in the vertex \(i\) associated to \(M\), we have that \((g_{0})_{red}^{*}\) is a bloc-diagonal matrix in which the bloc corresponding to \(g_{0}^{i}\) appears \(\dim M_{i}\) times. Thus, \(\det(g_{0}^{*})_{red}=\prod_{i=1}^{n}\det(g_{0}^{i})^{d_{i}}\) where \(\dim M=(\dim M_{i})_{1\leq i\leq n}=(d_{i})_{1\leq i\leq n}\) is the dimension vector of \(M\). The same argument gives \(\det(g_{-1})_{red}=\prod_{i=1}^{n}\det(g_{-,i})^{d_{i}}\) and so \(s(g\cdot X,M)=\chi(g)\cdot s(X,M)\) where \(\chi\) is of weight \(([M],[M])\).
**Remark 4.5**.: Let \(X^{-1}\), \(X^{0}\) and \(M\) be as before. Consider now \(R(X^{-1}\oplus P,X^{0}\oplus P)\) for some \(0\neq P\in\operatorname{proj}\Lambda\). Since \(\langle\begin{bmatrix}X\oplus\begin{matrix}P\\ P\end{matrix},[M]\rangle=\langle[X],[M]\rangle=0\), there is a regular
function \(s^{\prime}(-,M)\) that is a semi-invariant for the action of \(G(X^{-1}\oplus P,X^{0}\oplus P)\) on \(R(X^{-1}\oplus P,X^{0}\oplus P)\) and satisfies that
\[s^{\prime}\left(\begin{pmatrix}-&0\\ 0&\operatorname{Id}_{P}\end{pmatrix}\right)=s(-,M)\]
for any \(x\in R(X^{-1},X^{0})\), where \(s(-,M)\) is as in Proposition 4.2. Let \(x^{\prime}\in R(X^{-1}\oplus P,X^{0}\oplus P)\) and suppose that it belongs to the orbit of the point \(\begin{pmatrix}x&0\\ 0&\operatorname{Id}_{P}\end{pmatrix}\) for some \(x\in R(X^{-1},X^{0})\). Then, there exists \(g\in G(X^{-1}\oplus P,X^{0}\oplus P)\) such that
\[s^{\prime}(x^{\prime},M)=s^{\prime}\left(g\cdot\begin{pmatrix}x&0\\ 0&\operatorname{Id}_{P}\end{pmatrix}\right)=\chi_{([M],[M])}(g)\cdot s^{ \prime}\left(\begin{pmatrix}x&0\\ 0&\operatorname{Id}_{P}\end{pmatrix}\right)=\]
Thus, \(s^{\prime}(x^{\prime},M)\neq 0\) if and only if \(s(x,M)\neq 0\).
We now shift our attention back to \(\mathcal{K}_{\Lambda}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). The last remark tells us that for any \(X\in\mathcal{K}_{\Lambda}\) and \(M\in\operatorname{mod}\Lambda\) such that \(\langle[X],[M]\rangle=0\), the non-annihilation of the \(s(-,M)\) does not depend on the representative of \(X\) in \(\mathcal{C}^{[-1,0]}(\operatorname{proj}\Lambda)\). We rephrase Definition 3.1 in terms of the \(s(-,M)\).
**Definition 4.6** (\(M\)**-semistability**).: Let \(X\in\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) and \(M\in\operatorname{mod}\Lambda\). We say that \(X\) is \(M\)**-semistable** if \(\langle[X],[M]\rangle=0\) and if there exist \(x\in R(X^{-1},X^{0})\) such that \(X\simeq\begin{array}{c}X^{-1}\\ \downarrow_{r}\end{array}\) and \(s(x,M)\neq 0\).
These semi-invariants and their links to cluster algebras where thoroughly studied by K. Igusa, K. Orr, G. Todorov and J. Weyman in [11, 12]. In their work, they define the ring of **virtual semi-invariants** for any \(\theta\in\mathcal{K}_{0}(\operatorname{proj}\Lambda)\) (see Section 4.4 for the definition), which can be interpreted as those semi-invariants that are well defined for objects in \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). Notably, they show that that when \(\Lambda\simeq\Bbbk Q\), where \(Q\) is a finite quiver without oriented cycles, then the ring of virtual semi-invariants is spaned by determinantal semi-invariants ([11, Theorem 6.4.1 (Virtual First Fundamental Theorem)]). Although we are interested in the question of whether this holds for a general finite-dimensional algebra \(\Lambda\), the goal of this paper is to find a new categorical significance to this semi-invariants, inspired by all the theory branching off semistability theory in \(\operatorname{mod}\Lambda\).
### Semistability in \(\operatorname{mod}\Lambda\)
Recall that that the notion of King's semistability on \(\operatorname{mod}\Lambda\), first introduced in [10], gives rise to a certain class of wide subcategories of \(\operatorname{mod}\Lambda\).
**Definition 4.7**.: [10] Let \(\theta\in K_{0}(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda))\), we say that \(M\in\operatorname{mod}\Lambda\) is \(\theta\)**- semistable** if
1. \(\langle\theta,[M]\rangle=0\);
2. For every submodule \(N\subset M\), \(\langle\theta,[N]\rangle\leq 0\).
We denote by \(\mathscr{W}_{\theta}\subset\operatorname{mod}\Lambda\) the wide subcategory whose objects are those who are \(\theta\) - semistable.
One of the key results in [10] is that, for a module \(M\) of dimension \(d\), the notion of being \(\theta\)-semistable is equivalent to the existence of a semi-invariant \(f\) of weight \(\theta\) over the variety of representations \(\operatorname{rep}(\Lambda,d)\) such that
\(0\). When \(\Lambda\) is a finite-dimensional algebra over an algebraic closed field, all of these semi-invariants are generated by the rational functions \(s(X,-)\) where \(X\in\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) such that \([X]=\theta\) ([12, 13, 14, 15]). That is, \(M\) is \(\theta\)-semistable if and only if there exists \(X\in\mathcal{K}_{\Lambda}\) such that \(s(X,M)\neq 0\). We rephrase this statement in terms of the subcategories \(\mathscr{W}(X)\subset\operatorname{mod}\Lambda\) defined in Section 3.2.
**Proposition 4.3**.: _[_12, 13, 14_]_ _Let \(\theta\in K_{0}(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda))\), then_
\[\mathscr{W}_{\theta}=\bigcup_{\begin{subarray}{c}X\in\mathcal{K}^{[-1,0]}( \operatorname{proj}\Lambda)\\ [X]=\theta\end{subarray}}\mathscr{W}(X)\]
Proof.: If \(M\) is \(\theta\)-semistable, there exists \(X=\underset{X^{0}}{\overset{}{\bigcup}}\in\mathcal{K}^{[-1,0]}( \operatorname{proj}\Lambda)\) such that \([X]=\theta\) and \(s(X,M)\neq 0\), which in turns translates to \(x^{*}\) being an isomorphism. We get that for every \(M\in\mathscr{W}_{\theta}\), there is \(X\) such that \(M\in\mathscr{W}(X)\), and so \(\mathscr{W}_{\theta}\subset\bigcup_{\begin{subarray}{c}X\in\mathcal{K}^{[-1,0] }(\operatorname{proj}\Lambda)\\ [X]=\theta\end{subarray}}\mathscr{W}(X)\). The other inclusion follows from the fact that \(s(X,-)\) is a semi-invariant of weight \([X]=\theta\).
The following result shows the relation between semistability in \(\operatorname{mod}\Lambda\) and \(\tau\)-tilting theory. It has been proved in full generality in [11, 1]. We include here a proof when \(\Lambda\) is an algebra over a algebraically closed field, to showcase the relevance of the \(s(X,-)\) invaraints, and why one could be brought to define semistability in their terms.
**Theorem 4.4**.: _[_11_, 12_]_ _Let \(U\in\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) be a presilting complex. Then \(\mathscr{W}_{[U]}={}^{\perp}H^{-1}(\nu U)\cap H^{0}(U)^{\perp}=\mathscr{W}(U)\) where \(\nu\) is the Nayakama endofuctor on \(\mathcal{D}^{b}(\operatorname{mod}\Lambda)\)._
Proof.: Let \(U\in K_{\Lambda}\) be a presilting complex. By Proposition 4.3, we have that \(\mathscr{W}(U)\subset\mathscr{W}_{[U]}\). Now consider \(X\in\mathcal{K}_{\Lambda}\) such that \([X]=[U]\) and let \(M\in\mathscr{W}(X)\). Then, there exist \(x\in R(X^{-1},X^{0})=R\) such that \(X\simeq\underset{X^{0}}{\overset{}{\bigcup}}\) and \(s(x,M)\neq 0\). Since this condition is open, the generic point \(\eta\in R\) must satisfy that \(s(\eta,M)\neq 0\). Because \(U\) is presilting, we can choose a representative \(\underset{\mathcal{J}^{u}}{\overset{}{\bigcup}}\), such that there are no non-zero common projective direct summands between \(U^{-1}\) and \(U^{0}\). In particular, we can suppose that there is \(P\in\operatorname{proj}\Lambda\) such that \(X^{i}=U^{i}\oplus P\) for \(i\in\{-1,0\}\). Let \(s^{\prime}(-,M)\) be the determinantal semi-invariant definided by \(M\) on \(R^{\prime}=R(U^{-1},R^{0})\), by [1, Corollary 6.2.2] we get that \(s^{\prime}(\eta^{\prime},M)\neq 0\) since \(s(\eta,M)\neq 0\). By a Dehy-Keller argument, we know that since \(U\) is a \(2\)-term presilting complex, the orbit \(\mathcal{O}_{u}\) inside \(R^{\prime}\) must be open and dense. In particular, \(\mathcal{X}\cap\mathcal{O}_{u}\neq\emptyset\), where \(\mathcal{X}=\{y\in R^{\prime}\ |\ s^{\prime}(y,M)\neq 0\}\). This implies that there must exist \(u^{\prime}\in\mathcal{O}_{u}\) such that \(s(u^{\prime},M)\neq 0\) and \(U\simeq\underset{\mathcal{J}^{u^{\prime}}}{\overset{}{\bigcup}}\); that is, \(M\) is \(U\)-semistable. We get that \(\mathscr{W}(X)\subset\mathscr{W}(U)\) for all \(X\) such that \([X]=[U]\), and thus, \(\mathscr{W}(U)=\mathscr{W}_{[U]}\). The rest of the proposition
follows from the fact that we have an exact sequence
(4.1) \[\operatorname{Hom}(M,H^{-1}(\nu U))\xrightarrow{\ \
who take any \(x\in R(P(\eta^{-1}),P(\eta^{0}))\) and sends it to \(\left(\begin{smallmatrix}x&0\\ 0&\operatorname{Id}_{P(\gamma)}\end{smallmatrix}\right)\). We reef to these as **stabilization maps**. We define the **virtual representation space** of \(\theta\) as the direct limit over \(PD(\theta)\)
\[R^{vir}(\theta)=\varinjlim_{(\eta^{-1},\eta^{0})\in PD(\theta)}R(P(\eta^{-1}), P(\eta^{0})).\]
Every \(R(\eta^{-1},\eta^{0})=R(P(\eta^{-1}),P(\eta^{0}))\) gives rise to a ring of semi-invariants for the action of the group \(G(\eta^{-1},\eta^{0})=G(P(\eta^{-1}),P(\eta^{0}))\). The restriction maps induced by the functions \(x\mapsto\left(\begin{smallmatrix}x&0\\ 0&\operatorname{Id}_{P}\end{smallmatrix}\right)\) described above define an inverse system over \(PD(\theta^{0}-\theta^{-1})\) of the rings \(SI(R(\theta^{-1},\theta^{0}))^{G(\theta^{-1},\theta^{0})}\). The ring **virtual semi-invariants** for \(\theta\in\mathbb{Z}^{n}\) is the inverse limit over \(PD(\theta)\)
\[SI^{vir}(\theta)=\varinjlim_{(\eta^{-1},\eta^{0})\in PD(\theta)}SI(R(\eta^{-1 },\eta^{0}))^{G(\eta^{-1},\eta^{0})}.\]
A **virtual semi-invariant** associated to \(X\) is element \(f\) in \(SI^{vir}([X])\). The following proposition tell us that, up to adding enough \(\begin{smallmatrix}P\\ 0\end{smallmatrix}\) summands to a given representative of an object \(X\in K_{\Lambda}\), virtual semi-invariants have weights given by \(\bar{d}=(d,d)\) for some \(d\in\mathbb{Z}^{n}\). We say that \(f\in SI^{vir}(\theta)\) has weight \(d\in\mathbb{Z}^{n}\) when this is the case.
**Proposition 4.5**.: _[_10_, Proposition 3.3.3]_ _Consider \(R(X^{-1},X^{0})\) where \(X^{-1}=\bigoplus_{i=0}^{n}P_{i}^{\theta_{i}^{-1}}\) and \(X^{0}=\bigoplus_{i=0}^{n}P_{i}^{\theta_{i}^{0}}\), with its usual \(G(X^{-1},X^{0})\) action. Suppose there is a non-zero \(f\in SI(R)^{G(X^{-1},X^{0}),\chi}\) with \(\chi=\chi_{\bar{d}}\) for some \(\bar{d}=(d^{-1},d^{0})\in\mathbb{Z}^{2n}\). If, for every \(1\leq i\leq n\), both \(d_{i}^{0}\neq 0\) and \(d_{i}^{-1}\neq 0\) then \(d_{i}^{-1}=d_{i}^{0}\)._
Remark 4.5 implies that the \(s(-,M)\) are virtual semi-invariants of weight \(\dim M\) for those \(\theta\in\mathbb{Z}^{n}\) such that \(\langle\theta,[M]\rangle=0\). Let \(X\in\mathcal{K}_{\Lambda}=\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). Suppose that the exists a virtual semi-invariant \(f\) of weight \(d\in K_{0}(\operatorname{mod}\Lambda)\) such that \(f(X)\neq 0\). In particular, there exists a representative \(X\simeq\begin{matrix}X^{-1}\\ \downarrow x\end{matrix}\) such that \(f\) defines a semi-invariant for the action of \(G(X^{-1},X^{0})\) over \(R(X^{-1},X^{0})\) with \(f(x)\neq 0\). By Proposition 4.1, we get that
* \(\langle(-[X^{-1}],[X^{0}]),(d,d)\rangle=0\)
* For any inflation \(Y\rightsquigarrow X\) in \(\mathcal{C}^{[-1,0]}(\operatorname{proj}\Lambda)\), we must have that \[\langle(-[Y^{-1}],[Y^{0}]),(d,d)\rangle\geq 0.\]
Recall that for any inflation \(Y\rightsquigarrow X\) in \(\mathcal{K}_{\Lambda}\) we can find representatives of \(Y\) and \(X\) that give an inflation in \(\mathcal{C}^{[-1,0]}(\operatorname{proj}\Lambda)\). The following definition is an attempt to summarize these facts.
**Definition 4.9** (**Numerical semistability**).: Let \(X\in\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) and \(d\in\mathcal{K}_{0}(\operatorname{mod}\Lambda)\). We say that \(X\) is \(d\)-semistable if
1. \(\langle[X],d\rangle=0\),
2. For every inflation \(Y\rightsquigarrow X\) we have \(\langle[Y],d\rangle\geq 0\).
The following theorem links both numerical semistability and \(M\)-semistability.
**Theorem 4.6**.: _Let \(X\in\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\) and \(M\in\operatorname{mod}\Lambda\) with corresponding class \([M]\) in \(\mathcal{K}_{0}(\operatorname{mod}\Lambda)\). Suppose \(X\) is \(M\)-semistable, then \(X\) is \([M]\)-semistable._
Proof.: Since \(X\) is \(M\)-semistable, \(s(X,M)\neq 0\). Let \(Y\rightarrowtail X\) be an inflation in \(K^{[-1,0]}(\operatorname{proj}\Lambda)\). By Remark 2.2, there exists \(P\in\operatorname{proj}\Lambda\) such that \(Y\rightarrowtail X\oplus\begin{subarray}{c}P\\ 0\end{subarray}\) is an inflation in \(\mathcal{C}^{[-1,0]}(\operatorname{proj}\Lambda)\). But \(X\oplus\begin{subarray}{c}P\\ 0\end{subarray}\) stills satisfies that the virtual semi-invariant \(P\) is non-zero. By Proposition 4.1, and noting that \(\langle(-[X^{-1}],[X^{0}]),([M],[M])\rangle=\langle[X],[M]\rangle\) we get the result.
Note that if \(Y\xrightarrow{f}X\) is a map inside \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\), in order for \(\operatorname{Cone}(f)\) to lie in \(\mathcal{K}\) -which implies that \(f\) is an inflation- the map \(Y^{-1}\xrightarrow{f^{-1}}X^{-1}\) must be a section. If \(X\) is \(M\)-semistable, we get a commutative square
This implies that the map \(y^{*}\) is an epimorphism, and thus
\[\langle[Y],[X]\rangle=\dim\operatorname{Hom}(Y^{0},M)-\dim\operatorname{Hom}( Y^{-1},M)\geq 0.\]
This gives us a proof of Theorem 4.6 that does not relay on geometric arguments.
We end this section with two examples. The first shows that numerical semistability does not necessarily imply geometric semistability in \(\mathcal{K}^{[-1,0]}(\operatorname{proj}\Lambda)\). The second tells us that the subcategory of objects that are \(d\)-semistable is not closed under extensions in general.
**Example 4.10** (**Numerical semistability does not imply geometric semistability**).: Consider \(\Lambda=\mathbb{C}Q/I\), where \(Q\) is the quiver with relations
and \(I=\langle\alpha\beta,\beta\alpha\rangle\). Consider as well the objects \(X_{1}=P_{1}\xrightarrow{\alpha}P_{2}\) and \(X_{2}=P_{2}\xrightarrow{\beta}P_{1}\). We have \(\langle[X_{1}],P_{2}\rangle=0\) and that \(\operatorname{Hom}_{\Lambda}(S_{2},P_{2})=0\), and so, the virtual semi-invariant \(s(X_{1},P_{2})=\det(\operatorname{Hom}(P_{2},P_{2})\xrightarrow{-\alpha} \operatorname{Hom}(P_{1},P_{2}))\) is non-zero, that is, \(X_{1}\) is \(P_{2}\)-semistable. Likewise, \(X_{2}\) is \(P_{1}\)-semistable since \(s(X_{2},P_{1})\neq 0\). Consider now \(X=X_{1}\oplus X_{2}\in\operatorname{Hom}(P_{1}\oplus P_{2},P_{1}\oplus P_{2})\) with representative \(x=\begin{pmatrix}0&\beta\\ \alpha&0\end{pmatrix}\in R(P_{1}\oplus P_{2},P_{1}\oplus P_{2})\). Note that \(X=\begin{subarray}{c}P_{1}\oplus P_{2}\\ \downarrow\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
the image of \(x\) by the stabilization map \(R(P_{1}\oplus P_{2},P_{1}\oplus P_{2})\to R(P_{1}^{n+1}\oplus P_{2}^{n+1},P_{1}^{n+ 1}\oplus P_{2}^{n+1})\) for \(n\gg 0\). Let
\[R_{n}=R(P_{1}^{n+1}\oplus P_{2}^{n+1},P_{1}^{n+1}\oplus P_{2}^{n+1})=\left( \begin{array}{cc}M_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{1}}&M_{n+1}( \mathbb{C})\cdot\beta\\ M_{n+1}(\mathbb{C})\cdot\alpha&M_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{2}} \end{array}\right),\]
\[G_{n}=G(P_{1}^{n+1}\oplus P_{2}^{n+1},P_{1}^{n+1}\oplus P_{2}^{n+1})=\]
\[\left(\begin{array}{cc}GL_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{1}}&M_{n+1}( \mathbb{C})\cdot\beta\\ M_{n+1}(\mathbb{C})\cdot\alpha&GL_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{2}} \end{array}\right)^{op}\times\left(\begin{array}{cc}GL_{n+1}(\mathbb{C}) \cdot\mathrm{Id}_{P_{1}}&M_{n+1}(\mathbb{C})\cdot\beta\\ M_{n+1}(\mathbb{C})\cdot\alpha&GL_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{2}} \end{array}\right),\]
\[U_{n}=\left(\begin{array}{cc}\mathrm{Id}_{n}\cdot\mathrm{Id}_{P_{1}}&M_{n+1 }(\mathbb{C})\cdot\beta\\ M_{n+1}(\mathbb{C})\cdot\alpha&\mathrm{Id}_{n}\cdot\mathrm{Id}_{P_{2}}\end{array} \right)^{op}\times\left(\begin{array}{cc}\mathrm{Id}_{n}\cdot\mathrm{Id}_{P _{1}}&M_{n+1}(\mathbb{C})\cdot\beta\\ M_{n+1}(\mathbb{C})\cdot\alpha&\mathrm{Id}_{n}\cdot\mathrm{Id}_{P_{2}}\end{array} \right),\]
\[(G_{n})_{red}=\]
\[\left(\begin{array}{cc}GL_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{1}}&0\\ 0&GL_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{2}}\end{array}\right)^{op}\times \left(\begin{array}{cc}GL_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{1}}&0\\ 0&GL_{n+1}(\mathbb{C})\cdot\mathrm{Id}_{P_{2}}\end{array}\right),\]
where \(U_{n}\) is the unipotent radical of \(G_{n}=(G_{n})_{red}\rtimes U_{n}\). The group \(G_{n}\) acts on \(R_{n}\) in the following way. For every \(g=\left(\begin{matrix}X&Y\\ Z&W\end{matrix}\right)\times\left(\begin{matrix}X^{\prime}&Y^{\prime}\\ Z^{\prime}&W^{\prime}\end{matrix}\right)\in G_{n}\) and \(y=\left(\begin{matrix}A&B\\ C&D\end{matrix}\right)\in R_{n}\) we have that
\[g\cdot y=\left(\begin{matrix}X^{\prime}AX&X^{\prime}AY+X^{\prime}BW+Y^{\prime }DW\\ Z^{\prime}AX+W^{\prime}CX+W^{\prime}DZ&W^{\prime}DW\end{matrix}\right).\]
Any \(G_{n}\)-semi-invariant \(f\) on \(R_{n}\) must be a \(U_{n}\)-invariant function. In particular, \(f\) must be invariant for the action of the subgroup \(V=\mathrm{Id}_{2n+2}\times\left(\begin{matrix}\mathrm{Id}_{n+1}&M_{n+1}( \mathbb{C})\cdot\beta\\ 0&\mathrm{Id}_{n+1}\end{matrix}\right)\). Since \(\left(\begin{matrix}\mathrm{Id}_{n+1}&Y^{\prime}\\ 0&\mathrm{Id}_{n+1}\end{matrix}\right)\cdot\left(\begin{matrix}A&B\\ C&D\end{matrix}\right)=\left(\begin{matrix}A&B+Y^{\prime}D\\ C&D\end{matrix}\right)\), then the ring of invariants \(\mathbb{C}[R_{n}]^{V}=\mathbb{C}[A,B,C,D]^{V}\) is isomorphic to \(\mathbb{C}[A,C]\otimes\mathbb{C}[B,D]^{V^{\prime}}\), where \(\mathbb{C}[B,D]^{V^{\prime}}\) is the ring of invariants of the action of \(V^{\prime}=\left(\begin{matrix}\mathrm{Id}_{n+1}&M_{n+1}(\mathbb{C})\\ 0&\mathrm{Id}_{n+1}\end{matrix}\right)\) over the set of \(2n+2\) by \(n+1\) matrices \(\left(\begin{matrix}B\\ D\end{matrix}\right)\) given by left multiplication. In [10, Section 4.2], the author explicity describes a basis of the invariant functions on the variety of \(N\times M\) matrices under the action of certain unipotent subgroups of \(GL_{N}(\mathbb{C})\) by left multiplication. Applying these results to our particular case, we obtain that \(\mathbb{C}[B,D]^{V^{\prime}}=\mathbb{C}[D]\), and thus \(\mathbb{C}[R_{n}]^{V}=\mathbb{C}[A,C,D]\). A similar argument shows that the ring of invariants on \(R_{n}\) by the action of \(H=\mathrm{Id}_{2n+2}\times\left(\begin{matrix}\mathrm{Id}_{n+1}&0\\ M_{n+1}(\mathbb{C})\cdot\alpha&\mathrm{Id}_{n+1}\end{matrix}\right)\) is \(\mathbb{C}[A,B,D]\), which in turn implies that \(\mathbb{C}(R_{n})^{V\cdot H}=\mathbb{C}[A,D]\), where \(V\cdot H=\mathrm{Id}_{2n+2}\times\left(\begin{matrix}\mathrm{Id}_{n+1}&M_{n+1} (\mathbb{C})\cdot\beta\\ M_{n+1}(\mathbb{C})\cdot\alpha&\mathrm{Id}_{n+1}\end{matrix}\right)\). Moreover, the functions in \(\mathbb{C}[A,D]\) are also invariant for the action of \(\left(\begin{matrix}\mathrm{Id}_{n+1}&M_{n+1}(\mathbb{C})\cdot\beta\\ M_{n+1}(\mathbb{C})\cdot\alpha&\mathrm{Id}_{n+1}\end{matrix}\right)^{op}\times \mathrm{Id}_{2n+2}\), which gives us that \(\mathbb{C}[R_{n}]^{U_{n}}=\mathbb{C}[A,D]\). The only \((G_{n})_{red}\) semi-invariants of weight \((1,1)\) on \(\mathbb{C}[A,D]\) are the functions \(f_{k}(A,D)=k\cdot\det(A)\cdot\det(D)\) for \(k\in\mathbb{C}^{*}\) ([4,
Theorem 4.4.4]), and for all of them, \(f_{k}(x^{\prime})=0\), since \(A_{x^{\prime}}=D_{x^{\prime}}\) are not of full rank. That is, \(X\) is not \((1,1)\)-geometrically semistable.
**Example 4.11** (**Numerical semistability is not closed under extensions**).: Consider again the quiver of the last example. As we have seen, both \(X_{1}=P_{1}\xrightarrow{\alpha}P_{2}\) and \(X_{2}=P_{2}\xrightarrow{\beta}P_{1}\) are numerically \((1,1)\)-semistable. In \(\mathcal{K}_{\Lambda}\), the following sequence is a conflation
However, \(P_{1}\oplus P_{1}[1]\) is not \((1,1)\)-numerically semistable. Indeed \(\langle-[P_{1}],(1,1)\rangle=-1\leq 0\).
|
2305.15393 | LayoutGPT: Compositional Visual Planning and Generation with Large
Language Models | Attaining a high degree of user controllability in visual generation often
requires intricate, fine-grained inputs like layouts. However, such inputs
impose a substantial burden on users when compared to simple text inputs. To
address the issue, we study how Large Language Models (LLMs) can serve as
visual planners by generating layouts from text conditions, and thus
collaborate with visual generative models. We propose LayoutGPT, a method to
compose in-context visual demonstrations in style sheet language to enhance the
visual planning skills of LLMs. LayoutGPT can generate plausible layouts in
multiple domains, ranging from 2D images to 3D indoor scenes. LayoutGPT also
shows superior performance in converting challenging language concepts like
numerical and spatial relations to layout arrangements for faithful
text-to-image generation. When combined with a downstream image generation
model, LayoutGPT outperforms text-to-image models/systems by 20-40% and
achieves comparable performance as human users in designing visual layouts for
numerical and spatial correctness. Lastly, LayoutGPT achieves comparable
performance to supervised methods in 3D indoor scene synthesis, demonstrating
its effectiveness and potential in multiple visual domains. | Weixi Feng, Wanrong Zhu, Tsu-jui Fu, Varun Jampani, Arjun Akula, Xuehai He, Sugato Basu, Xin Eric Wang, William Yang Wang | 2023-05-24T17:56:16Z | http://arxiv.org/abs/2305.15393v2 | # LayoutGPT: Compositional Visual Planning and Generation with Large Language Models
###### Abstract
Attaining a high degree of user controllability in visual generation often requires intricate, fine-grained inputs like layouts. However, such inputs impose a substantial burden on users when compared to simple text inputs. To address the issue, we study how Large Language Models (LLMs) can serve as visual planners by generating layouts from text conditions, and thus collaborate with visual generative models. We propose LayoutGPT, a method to compose in-context visual demonstrations in style sheet language to enhance the visual planning skills of LLMs. LayoutGPT can generate plausible layouts in multiple domains, ranging from 2D images to 3D indoor scenes. LayoutGPT also shows superior performance in converting challenging language concepts like numerical and spatial relations to layout arrangements for faithful text-to-image generation. When combined with a downstream image generation model, LayoutGPT outperforms text-to-image models/systems by 20-40% and achieves comparable performance as human users in designing visual layouts for numerical and spatial correctness. Lastly, LayoutGPT achieves comparable performance to supervised methods in 3D indoor scene synthesis, demonstrating its effectiveness and potential in multiple visual domains.
Figure 1: Generated layouts from LayoutGPT in 2D images and 3D indoor scenes. LayoutGPT can serve as a visual planner to reflect challenging numerical and spatial concepts in visual spaces.
Introduction
Can Large Language Models (LLMs) comprehend visual concepts and generate plausible arrangments in visual spaces? Recently, LLMs have shown significant advancement in various reasoning skills [50; 49] that remain challenging to existing visual generative models. For instance, text-to-image generation (T2I) models suffer from generating objects with specified counts, positions, and attributes [10]. 3D scene synthesis models face challenges in preserving furniture within pre-defined room sizes [30]. Addressing these issues necessitates the development of compositional skills that effectively arrange components in a coherent manner, accurately reflecting object specifications and interactions.
Visual layout is an essential symbolic representation that has been widely studied as it reflects the global compositions of a visual space [32; 53; 47; 45; 24; 33]. For instance, layout generation models [22; 25; 18; 53; 24] can be combined with region-controlled image generation methods [56; 27] to improve image compositionality [52]. But unlike LLMs, these models are restricted to discrete categories or have limited reasoning skills for complicated text conditions. Recently, LLMs like ChatGPT [36], are adopted as a centralized module of frameworks or systems where multiple foundational computer vision models are integrated. Through defined action items or API calls, LLMs can interact with visual generative models to extend the systems' capability into image generation tasks. [51].
Despite the significant advancement, existing approaches that involve the collaboration between LLMs and image generation models are either limited to executing the latter through program generation or using LLMs for language data augmentation for image editing [3]. Current LLM-centered systems fail to improve the compositional faithfulness of a generated image by simply incorporating T2I models into the pipeline. While one could additionally integrate models that synthesize images with the guidance of layouts [56; 27; 2], keypoints [27], or sketches [21; 57], users still have to create fine-grained inputs on their own, resulting in extra efforts and degraded efficiency compared to using pure language instructions.
To address these challenges, we introduce **LayoutGPT**, a training-free approach that injects visual commonsense into LLMs and enables them to generate desirable layouts based on text conditions. Despite being trained without any image data, we discover that LLMs can learn visual commonsense through in-context demonstrations and then apply the knowledge to infer visual planning for novel samples. Specifically, we observe that representing image layouts is highly compatible with how style sheet language format images on a webpage. Therefore, as LLMs are trained with program data, constructing layouts as structured programs may enhance LLMs' ability to "imagine" object locations from merely language tokens. Our programs not only enable stable and consistent output structures but strengthen LLMs' understanding of the visual concepts behind each individual attribute value. When combined with a region-controlled image generation model [27], LayoutGPT outperforms existing methods by 20-40% and achieves comparable performance as human users in generating plausible image layouts and obtaining images with the correct object counts or spatial relations.
In addition, we extend LayoutGPT from 2D layout planning to 3D indoor scene synthesis. With a slight expansion of the style attributes, LayoutGPT can understand challenging 3D concepts such as depth, furniture sizes, and practical and coherent furniture arrangements for different types of rooms. We show that LayoutGPT performs comparably to a state-of-the-art (SOTA) supervised method. Our experimental results suggest that LLMs have the potential to handle more complicated visual inputs. Our contribution can be summarized as the following points:
* We propose LayoutGPT, a program-guided method to adopt LLMs for layout-based visual planning in multiple domains. LayoutGPT addresses the _inherent_ multimodal reasoning skills of LLMs and can improve end-user efficiency.
* We propose **N**umerical and **S**patial **R**easoning (NSR-1K) benchmark that includes prompts characterizing counting and positional relations for text-to-image generation.
* Experimental results show that LayoutGPT effectively improves counting and spatial relations faithfulness in 2D image generation and achieves strong performance in 3D indoor scene synthesis. Our experiments suggest that the reasoning power of LLMs can be leveraged for visual generation and handling more complicated visual representations.
Related Work
**Image Layout Generation** Layout generation has been an important task for automatic graphical design for various scenarios, including indoor scenes [40; 46], document layouts [59; 60; 16], and graphical user interface [8]. Previous work has proposed various types of models that need to be trained from scratch before generating layouts. LayoutGAN [25] is a GAN-based framework to generate both class and geometric labels of wireframe boxes for a fixed number of scene elements. LayoutVAE [22] generates image layouts conditioned on an input object label set. Transformer-based methods are proposed to enhance flexibility in the layout generation process. For instance, LayoutTransformer [18] adopts self-attention to learn contextual relations between elements and achieve layout completion based on a partial layout input. BLT [24] proposes a hierarchical sampling policy so that any coordinate values can be modified at the sampling stage to enable flexible and controlled generation. However, existing methods are restricted to class labels and fail to reason over numerical and spatial concepts in text conditions. In contrast, LayoutGPT can convert challenging textual concepts to 2D layouts and generate free-form, detailed descriptions for each region.
**Indoor Scene Synthesis** Indoor scene synthesis aims at generating reasonable furniture layouts in a 3D space that satisfies room functionality. Early work adopting autoregressive models requires supervision of 2D bounding boxes and other visual maps [40]. Later, SceneFormer proposes to apply a set of transformers to add furnitures to scenes. While previous work adopts separate models to predict different object attributes, ATISS [38] demonstrates that a single transformer model can generate more realistic arrangments while being more efficient. In this work, we investigate leveraging LLMs to achieve scene synthesis without any fine-tuning.
**LLMs for Vision** Language inputs have been an essential part of many vision language tasks [11; 28; 43; 14; 15]. With the strong generalization ability of contemporary LLMs, recent work attempts to adapt the power of LLMs on multimodal tasks [31; 55]. For instance, multimodal chain-of-thought [58] trained a model to incorporate visual inputs as rationales for question answering. [23] proposes to learn translation parameters to map embeddings between visual and language domains such that an LLM can ground on both modalities. VisProg [19] and ViperGPT [44] uses LLMs to design modular pseudocode instructions or executable Python programs to achieve visual reasoning. Visual ChatGPT [51] proposes a prompt manager that supports the execution of various image generation models. In this work, we directly involve LLMs in the generation process by leveraging LLMs to design visual layouts through in-context learning and structured representations.
## 3 Method
### Overview
Given a condition \(\mathcal{C}\), the goal of layout generation is to predict a set of tuples \(\mathcal{O}=\{o_{j}|j=1,2,\ldots,n\}\) where each tuple \(o_{j}\) denotes the layout information of a 2D or 3D bounding box of object \(j\). In image planning, \(\mathcal{C}\) is the input text prompt, \(o_{j}\) consists of a category \(\mathbf{c}_{j}\), bounding box location \(\mathbf{t}_{j}=(x_{j},y_{j})\in\mathbb{R}^{2}\) and bounding box size \(\mathbf{s}_{j}=(w_{j},h_{j})\in\mathbb{R}^{2}\), i.e. \(\mathbf{o}_{\mathbf{j}}=(\mathbf{c}_{j},\mathbf{t}_{j},\mathbf{s}_{j})\). Similarly, in 3D scene synthesis, \(\mathcal{C}\) specifies the room type and room size, \(o_{j}\) consists of category \(\mathbf{c}_{j}\), location \(\mathbf{t}_{j}\in\mathbb{R}^{3}\), size \(\mathbf{s}_{j}\in\mathbb{R}^{3}\), and orientation \(\mathbf{r}_{j}\in\mathbb{R}\), i.e. \(\mathbf{o}_{\mathbf{j}}=(\mathbf{c}_{j},\mathbf{t}_{j},\mathbf{s}_{j},\mathbf{ r}_{j})\). While \(\mathbf{c}_{j}\) can be modeled as a discrete value, our method directly predicts the category text.
### LayoutGPT Prompt Construction
As is shown in Fig. 2, LayoutGPT prompts consist of three main components: **task instructions**, and in-context exemplars in **CSS structures** with **normalization**.
**CSS Structures** In autoregressive layout generation, \(o_{j}\) is usually modeled as a plain sequence of values, i.e. (\(c_{1},x_{1},y_{1},w_{1},h_{1},c_{2},x_{2},\ldots\)) [18; 24]. However, such a sequence can be challenging for LLMs to understand due to underspecified meaning of each value. Therefore, we seek a structured format that specifies the physical meaning of each value for LLMs to interpret spatial knowledge. We realize that image layouts are highly similar to how CSS (short for Cascading Style Sheets) formats the layout of a webpage and defines various properties of the img tag in HTML. For instance, \(x_{j},y_{j}\) corresponds to the standard properties left and top, while \(w_{j},h_{j}\) corresponds to width and height in CSS. As LLMs like GPT-3.5/4 are trained with code snippets, formatting image/scene
layouts in CSS structures potentially enhances the LLMs' interpretation of the spatial meaning behind each value. Therefore, as is shown in Fig. 2, we place category name \(\mathbf{c}_{j}\) as the selector and map other attribute values into the declaration section following standard CSS styles.
**Task Instructions & Normalization** Similar to previous work in improving the prompting ability of LLMs [48; 42; 36], we prepend task instructions to the prompt to specify the task goal, define the standard format, unit for values, etc. Besides, as the common length unit of CSS is pixels (px), we normalize each property value based on a fixed scalar and rescale the value to a maximum of 256px. As will be shown in later sections (Sec. 4.4 & 5.4), all three components play important roles in injecting visual commonsense into LLMs and improving generation accuracy.
### In-Context Exemplars Selection
Following previous work [1; 54], we select supporting demonstration exemplars for in-context learning based on retrieval results. Given a test condition \(\mathcal{C}_{j}\) and a support set of demonstrations \(\mathcal{D}=\{(\mathcal{C}_{m}^{s},o_{m}^{s})|m=1,2,\ldots\}\), we define a function \(f(\mathcal{C}_{k}^{s},\mathcal{C}_{j})\in\mathbb{R}\) that measures the distances between two conditions. For 2D text-conditioned image layout generation, we adopt the CLIP [39] model to extract text features for the conditions, and measure distances with cosine similarity between condition features. For the 3D scene synthesis task where each room has length \(rl\) and width \(rw\), we measure distance with \(f(\mathcal{C}_{k}^{s},\mathcal{C}_{j})=\|rl_{k}-rl_{j}\|^{2}+\|rw_{k}-rw_{j} \|^{2}\). We select supporting demonstrations with the top-\(k\) least distance measures and construct them as exemplars following the CSS structure in Fig. 2. These supporting examples are provided to GPT-3.5/4 in reverse order, with the most similar example presented last.
### Image and Scene Generation
For text-conditioned image synthesis, we utilize a layout-to-image generation model to generate images based on the generated layouts. As for each object layout in 3D scene synthesis, we retrieve a 3D object based on the predicted category, location, orientation, and size following [38]. We directly render the scene with the retrieved 3D objects. See Sec. 4 & Sec. 5 for more details.
## 4 LayoutGPT for Text-Conditioned Image Synthesis
In this section, we provide an extensive evaluation of LayoutGPT for 2D text-to-image (T2I) synthesis and compare it with SOTA T2I models/systems. An ablation study is conducted to demonstrate the effect of individual components from LayoutGPT. We also showcase qualitative results and application scenarios of our method.
Figure 2: The overview process of our LayoutGPT framework performing 2D layout planning for text-conditioned image generation or 3D layout planning for scene synthesis.
### Experiment Setup
**Datasets & Benchmarks** To evaluate the generations in terms of specified counts and spatial locations, we propose NSR-1K, a benchmark that includes template-based and human-written (natural) prompts from MSCOCO [29]. Table 1 summarizes our dataset statistics with examples. For template-based prompts, we apply a set of filters to obtain images with only 1-2 types of object and then create prompts based on object categories and bounding box information. As for natural prompts, we extract COCO captions with keywords to suit the task of numerical reasoning (e.g. "four") or spatial reasoning (e.g. "on top of") and ensure that all objects from the bounding box annotations are mentioned in the caption to avoid hallucination. Each prompt from NSR-1K is guaranteed to have a corresponding ground truth image and layout annotations. Detailed benchmark construction processes are described in Appendix B.1.
**Evaluation Metrics** To evaluate generated layouts, we report precision, recall, and accuracy based on generated bounding box counts and spatial positions [9, 17]. For spatial reasoning, each prompt falls into one of the four types of relations ((_left, right, top, below_)) and we use the detected object center for evaluation following PaintSkills [7]. To evaluate generated images, we obtain bounding boxes from GLIP [26] detection results. While CLIP [39] is unreliable in counting objects [37], we still report CLIP cosine similarity between text prompts and generated images for reference. Detailed metric descriptions are listed in Appendix B.2.
**Baselines** As we consider both layout evaluation and image evaluation, we compare LayoutGPT with **end-to-end T2I models** (Stable Diffusion [41], Attend-and-Excite [4])2 and **two-stage systems** that generate layouts first and then apply GLIGEN [27] as the layout-to-image model. We also evaluate ground truth layouts and human-drawn layouts as the theoretical upper bounds. The human-drawn layouts are collected through crowdsourcing, in which we specifically ask human annotators to draw layouts given text prompts. We slightly modify LayoutTransformer [18] as a baseline for supervised conditional layout generation. Detailed descriptions of baseline setups and human annotating are discussed in the Appendix A and E.
\begin{table}
\begin{tabular}{l l l c c c} \hline \hline
**Task** & **Type** & **Example Prompt** & **\# Train** & **\# Val** & **\# Test** \\ \hline \multirow{3}{*}{T2I Numerical Reasoning} & Single Category & “_There are two signifies in the photo._” & 14890 & - & 114 \\ \cline{2-6} & Two Categories & “_Three potted plants with one vane in the picture._” & 7402 & - & 197 \\ \cline{2-6} & Comparison & “_A picture of three cars with a few fire hydrants, the number of cars is more than that of free hydrants._” & 7402 & - & 100 \\ \cline{2-6} & Natural & “_A fenced in pasture with four horses standing around eating grass._” & 9004 & - & 351 \\ \hline T2I Spatial & Two Categories & “_A dog to the right of a branch._” & 360 & - & 199 \\ \cline{2-6} & Natural & “_A black cat laying on top of a bed next to pillows._” & 378 & & 84 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics and examples of the NSR-1K benchmark for image layout planning and text-to-image (T2I) generation with an emphasis on numerical and spatial reasoning.
Figure 3: Qualitative comparison between Stable Diffusion, LayoutGPT, and human annotations regarding numerical (top row) and spatial reasoning (bottom row) skills.
### Evaluation Results
**Quantitative Results** As shown in Table 2, among the variants of LayoutGPT (# 6 - # 9), GPT-3.5 achieves the best performance in numerical reasoning while GPT-4 performs the best in generating correct spatial positions. LayoutGPT outperforms LayoutTransformer (#5) by large margins, proving the strong cross-modal reasoning skills of LLMs. As for image-level evaluation, LayoutGPT surpasses end-to-end T2I models (#1-#3) by 20-40% in GLIP-based accuracy and relatively 1-6% in CLIP similarity. Therefore, using layouts as an intermediate representation indeed leads to more reliable and faithful generation outcomes. In addition, LayoutGPT achieves similar layout accuracy as human users (numerical #6 vs. #11 (86.33% v.s. 92.56%); spatial #9 vs. #11 (91.73% v.s. 91.17%)), which implies its potential to spare users from drawing layouts manually. The discrepancy between layout accuracy and GLIP-based accuracy suggests that the bottleneck mainly attributes to the layout-guided generation process and GLIP grounding results.
**Qualitative results** We show the qualitative results of LayoutGPT and baselines in Fig. 3. LayoutGPT can understand visual commonsense such as the clock sizes at a train station (top left) or complex spatial relations between multiple objects (bottom right), while SD fails to generate correct numbers or positions. Besides, LayoutGPT demonstrates a similar layout design to human users (bottom left).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Numerical Reasoning**} & \multicolumn{3}{c}{**Spatial Reasoning**} \\ \cline{2-10} & & \multicolumn{2}{c}{Layout Eval.} & \multicolumn{2}{c}{Image Eval.} & \multicolumn{2}{c}{Layout Eval.} & \multicolumn{2}{c}{Image Eval.} \\ \cline{3-10} & **Methods** & Precision & Recall & Accuracy & Acc. (GLIP) & CLIP Sim. & Accuracy & Acc. (GLIP) & CLIP Sim. \\ \hline \hline \multicolumn{10}{l}{_Text \(\longrightarrow\) Image_} & \multirow{2}{*}{1. [leftmargin=*] 1} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & 32.22 & 0.256 & - & 16.89 & 0.252 \\
2 & & & & & & 42.44 & 0.256 & - & 17.81 & 0.256 \\
3 & & & & & & 38.96 & 0.258 & - & 24.38 & 0.263 \\
4 & & & & & & 45.74 & 0.254 & - & 26.86 & 0.264 \\ \hline \hline \multicolumn{10}{l}{_Text \(\longrightarrow\) Layout \(\longrightarrow\) Image_} & \multirow{2}{*}{5} & \multirow{2}{*}{2. [leftmargin=*] 5} & \multirow{2}{*}{25.70} & \multirow{2}{*}{61.69} & \multirow{2}{*}{22.26} & \multirow{2}{*}{40.55} & \multirow{2}{*}{0.247} & \multirow{2}{*}{6.36} & \multirow{2}{*}{28.13} & \multirow{2}{*}{0.241} \\
6 & & LayoutGPT (GPT-3.5) & **94.81** & **96.49** & **86.33** & & 51.20 & 0.258 & 82.54 & 52.86 & 0.264 \\
7 & & LayoutGPT (G Codes) & 90.19 & 88.29 & 72.02 & 46.64 & 0.254 & 74.63 & 45.58 & 0.262 \\
8 & & LayoutGPT (FP-3.5, chat) & 81.84 & 85.47 & 75.51 & 54.40 & **0.261** & 85.87 & 56.75 & **0.268** \\
9 & & LayoutGPT (FP-4) & 78.36 & 86.29 & 78.43 & **55.64** & **0.261** & **91.73** & **60.64** & **0.268** \\ \hline
10 & GT layouts & 100.00 & 100.00 & 100.00 & 53.23 & 0.256 & 100.00 & 62.54 & 0.261 \\
11 & Human & 99.26 & 96.52 & 92.56 & 56.07 & 0.258 & 91.17 & 51.94 & 0.258 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of our LayoutGPT with baseline methods in terms of counting and spatial correctness. Line 5-11 generates layout and adopts GLIGEN [27] for layout-guided image generation. “Human” (line 11) denotes layouts collected from human users given text prompts. Text in bold shows the best results of LayoutGPT.
Figure 4: Application scenarios of LayoutGPT in assisting faithful and creative text-conditioned image generation. The colored sentences and bounding boxes are generated by LayoutGPT at the same time based on the prompts.
### Applications Scenarios
In Fig. 4, we present two benefits of applying LayoutGPT for 2D visual planning beyond predicting discrete object categories. As LLMs have strong language understanding abilities, LayoutGPT can perform **accurate attribute binding** (top row). More importantly, as shown in Fig. 4 bottom row, the inherent language generation ability of LayoutGPT facilitates creative and imaginative regional descriptions. Leveraging the commonsense knowledge of GPT-3.5/4, LayoutGPT enhances the composition and details of each object. We refer to such ability as **text-based inpainting**, which is to generate fine-grained regional descriptions from coarse prompts.
### Ablation Study
**Component Analysis** Table 3 presents the component analysis of our CSS-style prompt on spatial reasoning prompts. Comparisons between line 1-3 entails that the task instructions (\(\#2\)) and CSS format (\(\#3\)) effectively improve layout accuracy. Format in-context exemplars in CSS structures show a more significant effect on accuracy. Pairwise comparisons of line \(5\)-\(7\) support the argument that the CSS style is the most essential component. While solely applying normalization degrades accuracy in line 4, line 5&8 shows that it slightly improves the performance when combined with other components.
**Model-Agnostic Property** We show that LayoutGPT is agnostic to layout-guided image generation models in line 9-10 in Table 3. We feed the same generated layouts from LayoutGPT to LayoutGudance [6] and compute image-level metrics. Compared to using ground truth layouts (\(\#10\)), LayoutGPT (\(\#9\)) shows a minor gap in GLIP-based accuracy and a comparable CLIP similarity score. The discrepancy in GLIP-based accuracy is similar to that in Table 2, implying that the layouts generated by our method are agnostic to the downstream model.
## 5 LayoutGPT for Indoor Scene Synthesis
### Task Setup
**Datasets & Benchmarks** For indoor scene synthesis, we use an updated version of the 3D-FRONT dataset [12; 13] following ATISS [38]. After applying the same pre-processing operations, we end up with 4273 bedroom scenes and 841 scenes for the living room. We only use rectangular floor plans of the test set for evaluation since LayoutGPT is not compatible with irregular ones. Hence, we end up with 3397/453/423 for train/val/test split of bedroom scenes and 690/98/53 for train/val/test split of living room scenes.
**Evaluation Metrics** We follow prior work [38] to report KL divergence between the furniture category distributions of predicted and ground truth scenes. We also render scene images from four camera angles for each scene and report FID scores [20]. In addition, we report out-of-bound rates, i.e. the percentage of scenes with furniture exceeding the floor plan boundary.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & **w/** & **w/** & **w/** & **Layout-to-Image** & **Layout Eval** & \multicolumn{2}{c}{**Image Eval**} \\ \cline{4-7} & **Instr.** & **CSS** & **Norm.** & **Model** & Acc. & Acc. (GLIP) & CLIP Sim \\ \hline
1 & & & & & & 55.12 & 34.35 & 0.259 \\
2 & ✓ & & & & & 78.23 & 47.92 & 0.263 \\
3 & & ✓ & & & & 80.82 & 51.38 & 0.264 \\
4 & & & ✓ & & & 44.10 & 26.43 & 0.257 \\
5 & ✓ & ✓ & & & GLIGEN [27] & 81.84 & 52.08 & 0.264 \\
6 & ✓ & & & & & 73.36 & 44.88 & 0.262 \\
7 & & ✓ & ✓ & & & 76.61 & 47.56 & 0.263 \\
8 & ✓ & ✓ & ✓ & & & **82.54** & **52.86** & **0.264** \\ \hline
9 & ✓ & ✓ & ✓ & & & 82.54 & 31.02 & 0.258 \\
10 & & & GT layouts & & Layout-Guidance [6] & 82.54 & 31.02 & 0.258 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study of LayoutGPT (GPT-3.5) on spatial reasoning prompts. “w/ Instr.”: with prepended task instructions. “w/ CSS”: format in-context demonstrations in CSS style. “w/ Norm.”: normalizing attribute values to integers by a fixed size.
### Evaluation Results
**Quantitative Results** The evaluation results are recorded in Table 4. We provide a random baseline for comparison denoted as "Random Scenes", in which the scene is randomly sampled from the in-context exemplars for each inference run.3
Footnote 3: Notice that while the scenes in “Random Scenes” are sampled from the training set, the out-of-boundary rate is larger than 0 since the 3D-FRONT dataset contains a small portion of scenes with out-of-bound furniture.
For both bedrooms and living rooms planning, LayoutGPT attains lower out-of-bound rates than ATISS (bedrooms: 43.26% vs. 49.88%; living rooms: 64.16% vs. 83.02%), which verifies LayoutGPT's spatial reasoning ability in 3D environments. In addition, LayoutGPT has lower FID compared to ATISS (bedrooms: 28.37 vs. 30.02; living rooms: 76.34 vs. 85.40), which indicates that the planned scene has higher quality. Noted here that the living room split contains much more objects on average (11 for living rooms vs. 5 in bedrooms) and is a low-resource split with only 690 training scenes. Therefore, while living rooms are challenging for both methods, LayoutGPT shows more significant improvement over ATISS as supervised methods tend to overfit in early epochs.
Meanwhile, ATISS performs better in terms of KL divergence, which means that the overall furniture distribution predicted by ATISS is closer to the test split. We observe that LayoutGPT tends to avoid furnitures that are extremely rarely seen in each scene (e.g. coffee tables for bedrooms) as these objects appear less frequently in the in-context demonstrations. The limited in-context demonstration size also restricts LayoutGPT to have a universal observation of the furniture distributions.
**Qualitative Results** As shown in Fig. 5, LayoutGPT manages to understand common 3D concepts, such as "the pendant lamp should be suspended from the ceiling" and "nightstands should be placed by the headboard of the bed" (bottom row). When given a floor plan size for both living and dining rooms, LayoutGPT can also generate complicated 3D planning with dining tables and chairs on one side, and a sofa, a coffee table, and a TV stand on the other side (bottom right).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{2}{c}{**Bedrooms**} & \multicolumn{3}{c}{**Living Rooms**} \\ \cline{2-7} & Out of bounds (\(\downarrow\)) & KL Div. (\(\downarrow\)) & FID (\(\downarrow\)) & Out of bounds (\(\downarrow\)) & KL Div. (\(\downarrow\)) & FID (\(\downarrow\)) \\ \hline Random Scenes & 11.16 & 0.0142 & 23.76 & 9.43 & 0.1239 & 79.61 \\ \hline ATISS*[18] & 49.88 & **0.0113** & 30.02 & 83.02 & **0.1054** & 85.40 \\ LayoutGPT (GPT-3.5) & **43.26** & 0.0995 & **28.37** & 73.58 & 0.1405 & **76.34** \\ LayoutGPT (GPT-3.5, chat) & 57.21 & 0.0846 & 29.66 & 81.13 & 0.2077 & 89.40 \\ LayoutGPT (GPT-4) & 51.06 & 0.1417 & 29.88 & **64.15** & 0.1613 & 78.60 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of LayoutGPT with ATISS on indoor scene synthesis. “Random Scenes” means randomly sampling one training scene from the in-context demonstrations for each inference room sample. (* denotes results reproduced by us)
Figure 5: Visualization of LayoutGPT across different types of rooms with different floor plan sizes.
### Application Scenarios
LayoutGPT can be applied to partial scene completion due to its autoregressive decoding mechanism (see Fig. 6). Through in-context demonstrations, LayoutGPT learns critical (visual) commonsense such as visual symmetric (e.g. nightstands in Fig. 6 (a)), positional relations (e.g. stool at the end of the bed in Fig. 6 (d)), and room functions (e.g. desks and chairs in the dining area in Fig. 6 (f)).
### Ablation Study
Similar to Sec. 4.4, we study the effect of task instructions, CSS structure, and normalization on indoor scene synthesis (see Table 5). In contrast to our conclusion for 2D planning in Sec. 4.4, comparisons between line 1-4 show that normalization (4) is the most critical component for suppressing the out-of-bound rate while the CSS structure is also effective. We observe that LLMs occasionally copy attribute values directly from in-context exemplars even though the room sizes are different. Therefore, normalizing all exemplars to the same scale can reduce the out-of-bound rate. CSS style facilitates LLMs to understand the physical meaning behind each attribute value and hence leads to almost the best result when combined with normalization (#7).
## 6 Conclusion
In this work, we address a new direction of generative model collaborations. Specifically, we are interested in how Largeu Language Models (LLMs) can collaborate with visual generative models. To this end, we propose LayoutGPT, an approach that turns an LLM into a visual planner through in-context learning and CSS style prompts. LayoutGPT can generate plausible visual arrangements in both image space and 3D indoor scenes. LayoutGPT can effectively improve image compositions by generating accurate layouts and achieves comparable performance in indoor scene synthesis compared to supervised methods. Besides, LayoutGPT can improve user efficiency in image generation and serve as an essential part of a unified system for all types of multimodal tasks.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & **w/** & **w/** & **w/** & **Out of** & **KL Div.** & **FID** \\ & **Instr.** & **CSS** & **Norm.** & **Bound \(\downarrow\)** & **KL Div.** & **FID** \\ \hline
1 & & & & & 55.32 & 0.1070 & 56.83 \\
2 & ✓ & & & & 54.85 & 0.1153 & 58.85 \\
3 & & ✓ & & & 51.77 & 0.0776 & 55.62 \\
4 & & & ✓ & 46.57 & 0.1276 & 58.24 \\
5 & ✓ & ✓ & & 51.30 & **0.0741** & 57.64 \\
6 & ✓ & & ✓ & 46.81 & 0.0913 & 58.61 \\
7 & & ✓ & ✓ & 43.74 & 0.0848 & 57.70 \\
8 & ✓ & ✓ & ✓ & **43.26** & 0.0995 & **56.66** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies on LayoutGPT on the bedroom split for 3D indoor scene synthesis.
Figure 6: LayoutGPT can successfully complete a partial scene for different rooms. We provide three starting objects for bedrooms and seven objects for living rooms. |
2307.03388 | General-Purpose Multimodal Transformer meets Remote Sensing Semantic
Segmentation | The advent of high-resolution multispectral/hyperspectral sensors, LiDAR DSM
(Digital Surface Model) information and many others has provided us with an
unprecedented wealth of data for Earth Observation. Multimodal AI seeks to
exploit those complementary data sources, particularly for complex tasks like
semantic segmentation. While specialized architectures have been developed,
they are highly complicated via significant effort in model design, and require
considerable re-engineering whenever a new modality emerges. Recent trends in
general-purpose multimodal networks have shown great potential to achieve
state-of-the-art performance across multiple multimodal tasks with one unified
architecture. In this work, we investigate the performance of PerceiverIO, one
in the general-purpose multimodal family, in the remote sensing semantic
segmentation domain. Our experiments reveal that this ostensibly universal
network struggles with object scale variation in remote sensing images and
fails to detect the presence of cars from a top-down view. To address these
issues, even with extreme class imbalance issues, we propose a spatial and
volumetric learning component. Specifically, we design a UNet-inspired module
that employs 3D convolution to encode vital local information and learn
cross-modal features simultaneously, while reducing network computational
burden via the cross-attention mechanism of PerceiverIO. The effectiveness of
the proposed component is validated through extensive experiments comparing it
with other methods such as 2D convolution, and dual local module (\ie the
combination of Conv2D 1x1 and Conv2D 3x3 inspired by UNetFormer). The proposed
method achieves competitive results with specialized architectures like
UNetFormer and SwinUNet, showing its potential to minimize network architecture
engineering with a minimal compromise on the performance. | Nhi Kieu, Kien Nguyen, Sridha Sridharan, Clinton Fookes | 2023-07-07T04:58:34Z | http://arxiv.org/abs/2307.03388v1 | # General-Purpose Multimodal Transformer meets Remote Sensing
###### Abstract
The advent of high-resolution multispectral/hyperspectral sensors, LiDAR DSM (Digital Surface Model) information and many others has provided us with an unprecedented wealth of data for Earth Observation. Multimodal AI seeks to exploit those complementary data sources, particularly for complex tasks like semantic segmentation. While specialized architectures have been developed, they are highly complicated via significant effort in model design, and require considerable re-engineering whenever a new modality emerges. Recent trends in general-purpose multimodal networks have shown great potential to achieve state-of-the-art performance across multiple multimodal tasks with one unified architecture. In this work, we investigate the performance of PerceiverIO, one in the general-purpose multimodal family, in the remote sensing semantic segmentation domain. Our experiments reveal that this ostensibly universal network does not effectively capture the interactions between different modalities of interest in remote sensing arena. Furthermore, the network struggles with object scale variation in remote sensing images and fails to detect the presence of smaller objects such as cars from a top-down view. To address these issues, we propose a spatial and volumetric learning component, which employs 3D convolutions with an UNet configuration to encode vital local information and learn cross-modal features simultaneously, while reducing network computational burden via the cross-attention mechanism of PerceiverIO. The effectiveness of the proposed approach is validated through extensive experiments comparing it with other methods such as 2D convolution, and dual local module (i.e. the combination of Conv2D \(1\times 1\) and Conv2D \(3\times 3\) inspired by UNetFormer). The proposed method significantly improves the performance of PerceiverIO, and provides competitive performance against specialized architectures like UNetFormer and SwinUNet, showing its potential to minimize network architecture engineering with a minimal compromise on the performance. Code and data will be available at [https://github.com/nhikiieu/SpatialVolumetricMultimodal](https://github.com/nhikiieu/SpatialVolumetricMultimodal).
## 1 Introduction
Semantic segmentation of remote sensing imagery refers to the task of categorizing each pixel of an image into a specific class or object to produce a dense pixel-wise segmentation map. Semantic segmentation models with good performance are crucial for the practical application of high-resolution remote-sensing images such as land cover mapping, traffic monitoring and urban management. However, designing remote-sensing semantic segmentation models
Figure 1: First row, left to right: RGIR (Red-Green-NearInfrared) image, ground truth, prediction of PerceiverIO and that of PerceiverIO with our proposed volumetric-aware module. Second row, closeup segmentation maps focusing on cars, from left to right: RGIR image, ground truth, prediction of PerceiverIO, PerceiverIO with Conv2D-based preprocessing module and that of PerceiverIO with our volumetric-aware module.
such as UNetFormer [19] usually requires a significant amount of time, effort and domain knowledge. Moreover, adding new modalities with different structures makes the network subject to heavy re-engineering.
General-purpose transformers provide a new direction to model design by a unified architecture capable of handling different types of data in the same way. General-purpose transformers such as PerceiverIO [9] can achieve competitive performance in multiple tasks compared to state-of-the-art domain-specific approaches.
While showing considerable promise in multimodal tasks such as learning joint representations of video, audio, and labels, the performance of these general-purpose transformers in multimodal geospatial settings has not been verified. This paper investigates the effectiveness of these techniques in multimodal settings for geospatial tasks. We apply PerceiverIO [9] to the multimodal semantic segmentation task of very-high-resolution remote sensing and compare its performance with the state-of-the-art domain-specific approach UNetFormer [19]. Our first observation is that the PerceiverIO performs poorly on segmenting small objects such as cars. In particular, in the Vaihingen [1] and Potsdam [2] datasets, PerceiverIO failed to detect cars. Our second observation is that PerceiverIO does not effectively fuse data from different modalities that are typically processed in remote sensing settings. The poor performance is firstly due to the weakness in spatial encoding, especially local information. Secondly, interactions between different modalities aren't captured to discriminating classes. We experiment multiple configurations and propose a volumetric-aware module to address these issues. Fig. 1 demonstrates the effectiveness of proposed methods in detecting small objects like cars.
**Contributions:** our main contributions in this paper are:
* Contribution 1: Propose a convolution-based preprocessing component to help with small objects detection
* Contribution 2: Propose a volumetric-aware preprocessing component to better exploit the synergies across different modalities
The remainder of the paper is organized as follows. Section II discusses related work. Section III describes our proposed methodology. Section IV presents our datasets, experimental setup, and experimental results. The paper is concluded in Section V.
## 2 Related Work
This section discusses related work in general semantic segmentation architectures, specialized semantic segmentation in remote sensing, and general-purpose multimodal architectures.
### Semantic Segmentation Architecture
UNet [15] is a convolutional architecture [13] that has been proven to be effective in general image semantic segmentation even though originally developed for the biomedical field. The encoder and decoder branches are independent allowing practitioners to experiment with different combinations of backbones. Hence, the idea is still widely used by the computer vision community today with more advanced backbones such as TransUNet [7] and SwinUNet [5]. TransUNet, for medical image segmentation, showed that Transformer can be a strong encoder while CNN remains a solid feature extractor and decoder. CNNs remain dominant in the computer vision community partially thanks to their ability of multiscale learning by progressively increasing receptive field. SwinUNet applying Swin Transformer [10] with sliding window mechanism aims to achieve the same goal. Skip connection is an important element in UNet-like architecture, which seeks to semantically join features learnt from multiscale between encoder and decoder. However, UCTransNet [17] pointed out that there is a huge semantic gap between the encoder and decoder. Especially with a hybrid structure where the encoder and decoder are totally different in nature, the gap is even more significant. Therefore, in this work, we lean towards exploring pure transformer architecture. SegFormer [20] and DC-Swin [18] demonstrated that a pure attention model can extract multiscale semantic features just as well as convolutional models. In this work, we adapted SwinUNet to multimodal data to understand the performance of state-of-the-art general semantic segmentation architectures on remote sensing data.
### Specialized Architecture in Remote Sensing
UNetFormer [19] is the current state-of-the-art architecture, specialized for remote sensing data. However, the original paper only reported results on unimodal input. Its main contribution lies in the proposal of the Feature Refinement Head and Global Local Transformer Block components in the decoder branch. In both of which, a channel path is used in conjunction with a spatial path. Even though it wasn't explicitly explained in the paper why such design was used, we speculate that it is an attempt of capturing cross-channel features in addition to spatial features. In this work, we adapted UNetFormer to multimodal data and experimented with integrating the idea of a dual local branch into a general-purpose architecture like PerceiverIO.
We also observe that top winners from the IEEE Data Fusion Contest 2018 (DFC2018) [21] have reported an early effort in multimodal learning. Independent branches are created for different modalities. For example, the runner-up in the contest used independent predictors for different classes. Also, heavy post-processing is required to boost performance. Since then, the potential of multimodal
is gradually appreciated by the remote sensing community. Specialized architectures for tasks in geospatial settings have grown increasingly complex, pushing the boundaries of performance. The multi-stream topology is dominant within this landscape where modalities are encoded in separate branches and fused by advanced modules. While these specialized networks [11] and [8] achieve high performance, they are largely not generalizable and will require heavy re-engineering when a new modality emerges.
### Multimodal General-Purpose Architecture
Recently, parallel to the development of specialized multimodal architectures, more attention is given to general frameworks such as MultiMAE (Multi-modal Multi-task Masked Autoencoders) [4] and GPNA (General-Purpose Neural Architecture) with geospatial inductive bias [14]. These studies prove that we can use just one unified Transformer-based encoder to learn features from different modalities offering a greater degree of flexibility. Perceive-riIO [9] is an important member in this realm. It has demonstrated advantages over convolutional networks and self-attention mechanisms. Cross-attention mechanism transforms the quadratic problem into a linear problem where high resolution and high dimensional inputs can be mapped to a much smaller latent space. It also claims that the network makes little assumption about the nature of the data achieving the general-purpose goal. However, in this work, we revealed its shortcomings when it is applied to remote sensing data. Specifically, it fails to detect small objects like cars from top orthogonal inputs. In addition, it struggles to fuse information across modalities.
## 3 Methodology
This section describes our two key contributions to address the issues when applying the general-purpose multi-modal PerceiverIO to remote sensing data.
### Contribution 1
Through our empirical experiments, it appears that the default PerceiverIO architecture with either the fixed Fourier or the learnable positional embeddings [6, 9] fails to perform segmentation on small objects such as cars. Even if we leverage the pretrained positional embeddings on ImageNet, the situation doesn't improve. We suspect that the model is missing crucial local information. Therefore, we first introduce an extra 2D Conv layer in the preprocessing step before feeding the inputs to the cross-attention head of the PerceiverIO. We immediately saw a huge improvement of PerceiverIO. It can detect cars, which was impossible for the default PerceiverIO.
To put more focus on spatial information and locality, we constructed a UNet-like module (Fig. 2) using several 2D Conv layers to capture more local details. As expected, there is a pronounced performance boost. However, the model can only detect very bright color cars (yellow, red, white) and ignore darker color cars (purple, gray, black). We suspected that the prediction was greatly dependant on color channels and hasn't taken into account the complementary features from another modality which is nDSM (normalised DSM). That leads us to the second contribution.
### Contribution 2
To improve the interaction among input modalities, RGB, DSM, SAR in the remote sensing setting, we first propose a dual local branch preprocessing module. The module has two local branches: one branch uses Conv \(1\times 1\) and the other uses Conv \(3\times 3\). This is inspired by the GLTB (Global Local Transformer block) and the FRH (Feature Refinement Head) in the UNetFormer architecture [19]. In their GLTB local branch, in order to decode features, one branch uses Conv \(1\times 1\) and the other uses Conv \(3\times 3\). In their FRH, one branch is called channel path using Global Average Pool and Reduce/Expand operations, the other is named spatial path using depth-wise Conv \(3\times 3\). Even though these design decisions were explicitly explained by the author, it could be interpreted as an attempt to fuse spatial and channel-wise features. Inspired by this, we propose a dual local branch within our UNet-like module as shown in Fig. 3).
To further improve the interaction among input modalities, we propose a Conv3D-based volumetric-aware module. The key intuition here is using 3D Convolutions will enable us to learn the interaction rather than hard coding in the network architecture. 3D Convolutional kernels can be learned to effectively fuse different input modalities for semantic segmentation. We kept the UNet-inspired design that has been work well and used 3D Conv layers to learn spatial and channel-wise features simultaneously. We observed that 3D Conv works particularly well in this situation, consolidated the volumetric nature of multimodal data even though it hasn't been widely applied. Fig. 4 illustrates the design of our preprocessing module using 3D Conv. To
Figure 2: The Conv2D-based preprocessing module to improve local information encoding for the PerceiverIO architecture for remote sensing data.
ensure that the global information isn't thrown away in the preprocessing step while trying to retain local information, we use multiscale architecture in both extractor and decoder branches, which can help minimize a well-known limitation of convolution operations. In the extractor line, there are three blocks of stacked \(3\times 3\) 3D Conv followed by a 3D Maxpool operation (except for the final block). The number of filters increases as the component goes deeper. In the decoder line, the final representation is then upsampled twice by 3D Conv Transpose operation. After every upsampling, features from higher levels in the extractor line are concatenated and parsed through another \(3\times 3\) 3D Conv. Finally, channels and depth are combined and re-projected using \(1\times 1\) 2D Conv resulting a preprocessed input that is ready to parse through the PerceiverIO network.
## 4 Experimental Results
This section presents our datasets, experimental setup, and experimental results.
### Datasets
**Vaihingen:** The Vaihingen dataset [1] from the International Society for Photogrammetry and Remote Sensing (ISPRS) contains remote sensing data of the Vaihingen region in Germany. It has two modalities: true orthophoto (TOP) and Digital Surface Model (DSM). The TOP modality has three bands RGIR: red, green, and near infrared. The DSM modality is converted from the 3D LiDAR. It contains 33 large image tiles of different sizes with a GSD of 9 cm. Dense ground truth masks are provided for training and testing.
**Potsdam:** The Potsdam dataset [2], also from the ISPRS, contains remote sensing data of the Potsdam region in Germany. The data set contains 38 patches of the same size, each consisting of a true orthophoto (TOP) and a DSM. The ground sampling distance of both, the TOP and the DSM, is 5 cm. Different to Vaihingen, Potsdam's TOP modality has four bands RGBIR: red, green, blue, and near infrared.
It's worth noting that both datasets are heavily imbalanced as shown in Fig. 5, which makes it very challenging for the network to pick up already hard-to-learn small objects like cars.
**MMFlood:** MMflood is a multimodal dataset used for flood monitoring and analysis. It includes data from Synthetic Aperture Radar (SAR - VV and VH channels), Hydrography and DEM (Digital elevation model). However, this dataset is very challenging because of two major issues: (1) More than half of the hydrography information is missing for train, (2) Severe class imbalance between flood area and background.
Figure 4: The Conv3D-based Volumetric-aware module to learn modality-interaction encoding for the PerceiverIO architecture for remote sensing data.
Figure 5: Class proportion of Vaihingen and Potsdam dataset. Severe class imbalance is present.
Figure 3: The Dual Local Branch preprocessing module to improve interaction among input modalities for the PerceiverIO architecture for remote sensing data.
### Experimental Setup
Selected tiles for train, validation and test are as specified on ISPRS data portal [1, 2]. For training purposes, from 15 large tiles of varying dimensions provided by the Vaihingen dataset, we generated 1,620 samples of size \(512\times 512\). Similarly, we created 3,466 samples of size \(512\times 512\) for the Potsdam dataset from 22 large tiles with diverse dimensions. Specifically, tiles with the following IDs are used: (1) **Vaihingen**: _Train_[1,3,5,7,11,13,15,17,21,23,26,28,32,34,37], _Validate_[30], _Test_[2,4,6,8,10,12,14,16,20,22,24,27,29,31,33,35,38]; (2) **Potsdam**: _Train_['2.11', '2.12', '3.10', '3.11', '3.12', '4.10', '4.11', '4.12', '5.10', '5.11', '5.12', '6.7', '6.8', '6.9', '6.10', '6.11', '6.12', '7.7', '7.8', '7.9', '7.11', '7.12'], _Validation_['2.10'], _Test_['2.13', '2.14', '3.13', '3.14', '4.13', '4.14', '4.15', '5.13', '5.14', '5.15', '6.13', '6.14', '6.15', '7.13']
Multimodal data is introduced to selected networks by stacking modalities on top of each other. For Vaihingen dataset, the multimodal input will have a shape of (512, 512, 5), where the final dimension includes Red-Green-NearInfrared, nDSM (normalised DSM), and NDVI (Normalized difference vegetation index - derived from R-G-IR channels). Similarly, Potsdam will have the same multimodal input shape of (512, 512, 5); however, the final dimension will be the combination of R-G-B-IR and nDSM. On the other hand, to tackle class imbalance issues, we experimented with different loss functions (Sec. 4.4). We found a joint loss of Dice [16] and Soft Cross-entropy without class weight perform the best. This joint loss function was applied in all reported experiments.
\[L=L_{dice}+L_{ce} \tag{1}\]
\[L_{dice}=1-\frac{2}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\frac{\hat{y}_{k}^{n}y_{k}^{ n}}{\hat{y}_{k}^{n}+y_{k}^{n}} \tag{2}\]
\[L_{ce}=-\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}y_{k}^{n}log\hat{y}_{k}^{n} \tag{3}\]
where \(N\) is the number of samples and \(K\) is the number of classes. \(y_{k}^{n}\) is the one-hot encoding map of true segmentation label of sample \(n\) class \(k\). \(\hat{y}_{k}^{n}\) is the confidence of sample \(n\) belong to class \(k\) (_i.e_. corresponding softmax output from the network).
In terms of evaluation metrics, class-wise F1 score (Dice Coefficient), mIoU (mean Intersection over Union), and Average Accuracy are used. They are calculated using the following equations:
\[F1=\frac{2\times Precision\times Recall}{Precision+Recall} \tag{4}\]
\[IoU=\frac{\text{Area of Overlap}}{\text{Area of Union}} \tag{5}\]
\[AA=\frac{1}{C}\sum_{i=1}^{C}\frac{N_{c}^{i}}{N_{a}^{i}} \tag{6}\]
where for each class \(i\), \(N_{c}^{i}\) is the number of samples classified correctly and \(N_{a}^{i}\) is the total samples.
### Experimental Results
Tab. 1 shows that our proposed approaches result in a pronounced performance boost for PerceiverIO, especially, it resolves the problem with the car class. The results also shows that UNet is an effective architecture for feature encoding, which encodes information through multiple scales and aggregate those features. As indicated in Tab. 1, the last three methods, which employ a UNet-like architecture, yield superior performance. It is also worth noting here that, the UNet-like 2D convolution module can only be effective to a certain point. When we increase to 3-stage module instead of the previous two-stage module, the result is worse.
Fig. 7 demonstrates the effectiveness of proposed methods compared to the original PerceiverIO. From the first row, it is clear that not only is the prediction of cars significantly improved, but also the overall prediction is also more realistic, devoid of obvious edge issues (_i.e_. less misclassified pixels at the instances' boundaries). From the second row, the integration of Conv3D improves the network's ability to handle dark-colored cars and reduces prediction noise in shaded areas.
Our proposed component - local spatial and volumetric encoding - allows a multimodal, general-purpose architecture like PerceiverIO to yield highly competitive results when compared to remote sensing specialized networks like UNetFormer and segmentation specialized networks like SwinUNet on both the Potsdam and Vaihingen datasets (Tab. 2 and Tab. 3). When applied to a different
Figure 6: A sample from mmflood dataset and Cumulative distribution function of the flooded areas ratios, which demonstrate severe class imbalance issue.
dataset - MMFIood [12], the three models perform very similarly; however, PerceiverIO with our proposed volumetric component and SwinUNet slightly outperform UNetFormer (Tab. 4). MMFIood dataset is a multimodal collection of remote sensing data focused on flood monitoring and analysis. It includes data from synthetic aperture radar (SAR), and hydrography and DEM (Digital Elevation Model). However, because more than half of the hydrography modality is missing at training, it is excluded in this study.
Fig. 8 presents several examples of semantic segmentation on the Potsdam dataset. The result is consistent with the observation in the Vaihingen dataset. The incorporation of our proposed volumetric preprocessing (UNet-inspired Conv3D) ameliorated the issue with the car class to some extent. However, we have to acknowledge that, while improved, PerceiverIO's performance is still not as precise as the specialized architectures like SwinUNet and UNetFormer, which opens opportunity for future research. Besides, a noteworthy point is SwinUNet assumes a grid-like structure for the input. Its performance hinges on a judicious choice of window size. As demonstrated in Fig. 8, the predictions made by SwinUNet are pixelated at the boundaries, resulting in a less smooth segmentation map compared to that generated by the PerceiverIO.
### Ablation Study
To arrive at the optimal loss function, some advanced/specialized options are experimented with. They are Focal Tversky Loss [3], Asymmetric Unified Focal Loss [22]. However, in this case, they aren't effective because of the challenge of the object scale variation in the scene on top
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline Method & Imp.surf. & Building & Lowveg. & Tree & Car & MeanF1 & mIoU & AA \\ \hline PerceiverIO (original) & 0.78 & 0.83 & 0.60 & 0.78 & NaN & NaN & 0.48 & 0.90 \\ PerceiverIO + 1 layer of 2DConv & 0.77 & 0.82 & 0.61 & 0.78 & 0.35 & 0.67 & 0.53 & 0.90 \\ PerceiverIO + UNet-like 2DConv & 0.80 & 0.87 & 0.68 & 0.79 & 0.42 & 0.71 & 0.58 & 0.92 \\ PerceiverIO + UNet 2DConv 3Stages & 0.79 & 0.83 & 0.61 & 0.79 & 0.36 & 0.68 & 0.54 & 0.91 \\ PerceiverIO + UNet-like 3DConv & 0.81 & **0.86** & **0.69** & **0.82** & **0.51** & **0.74** & **0.60** & 0.92 \\ PerceiverIO + Dual Local Branch & **0.82** & **0.86** & 0.65 & 0.81 & 0.45 & 0.72 & 0.58 & 0.92 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results on the Vaihingen test set
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Method & Imp.surf. & Building & Lowveg. & Tree & Car & MeanF1 & mIoU & AA \\ \hline PerceiverIO\_Conv3D (Ours) & 0.87 & 0.91 & 0.73 & 0.57 & 0.73 & 0.76 & 0.63 & 0.92 \\ SwinUNet & 0.89 & 0.95 & 0.82 & 0.80 & 0.86 & 0.87 & 0.77 & 0.95 \\ UNetFormer & 0.89 & 0.95 & 0.81 & 0.78 & 0.82 & 0.85 & 0.75 & 0.95 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative results on the MmfIood test set
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Method & Imp.surf. & Building & Lowveg. & Tree & Car & MeanF1 & mIoU & AA \\ \hline PerceiverIO\_Conv3D (Ours) & 0.80 & 0.86 & 0.69 & 0.82 & 0.51 & 0.74 & 0.60 & 0.92 \\ SwinUNet & 0.84 & 0.89 & 0.74 & 0.84 & 0.60 & 0.78 & 0.65 & 0.93 \\ UNetFormer & 0.83 & 0.88 & 0.73 & 0.83 & 0.61 & 0.78 & 0.65 & 0.93 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative results on the Vaihingen test set
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Method & Imp.surf. & Building & Lowveg. & Tree & Car & MeanF1 & mIoU & AA \\ \hline PerceiverIO\_Conv3D (Ours) & 0.87 & 0.91 & 0.73 & 0.57 & 0.73 & 0.76 & 0.63 & 0.92 \\ SwinUNet & 0.89 & 0.95 & 0.82 & 0.80 & 0.86 & 0.87 & 0.77 & 0.95 \\ UNetFormer & 0.89 & 0.95 & 0.81 & 0.78 & 0.82 & 0.85 & 0.75 & 0.95 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative results on the Potsdam test set
of the severe class imbalance issue. Assigning class weight is another option; nevertheless, it isn't easy to tune and is counter-intuitive if we want to develop a general-purpose architecture that can be applicable to different datasets. Hierarchical Perceiver (HiP) [6] - a successor of PerceiverIO, which claims to have multiscale learning power, was explored; however, with limited data, it performed worse than PerceiverIO. We tried different positional encoding scheme suggested by HiP in an attempt to capture local features. They are Fixed Fourier 2D positional embedding, learnable positional embedding, and fine-tuned positional embeddings that were pretrained on ImageNet; however, none could resolve the issue with car detection.
## 5 Conclusion
In this study, we proposed integrating a spatial and volumetric component into a multimodal general-purpose architecture (PerceiverIO). It helps overcome the challenge of object scale variation in severe class imbalance condition. Moreover, our experiments demonstrated the effectiveness of UNet-inspired architecture in extracting multi-scale features. The baselines we used for performance comparison are specialized architectures in multimodal context (UNetFormer and SwinUNet). Our proposed method, which deploys multilayers of 3D convolutions while maintaining computing efficiency via cross-attention mechanism, provides competitive semantic segmentation results on the Vaihingen, Potsdam and MMFlood datasets. However, the development of multimodal general-purpose AI for semantic segmentation is still hindered by the expense of acquiring high-quality pixel-level annotations. In the future work, we'll introduce self-supervised and weakly-supervised learning approaches to leverage existing sparse data labels.
|
2307.05921 | Reading Radiology Imaging Like The Radiologist | Automated radiology report generation aims to generate radiology reports that
contain rich, fine-grained descriptions of radiology imaging. Compared with
image captioning in the natural image domain, medical images are very similar
to each other, with only minor differences in the occurrence of diseases. Given
the importance of these minor differences in the radiology report, it is
crucial to encourage the model to focus more on the subtle regions of disease
occurrence. Secondly, the problem of visual and textual data biases is serious.
Not only do normal cases make up the majority of the dataset, but sentences
describing areas with pathological changes also constitute only a small part of
the paragraph. Lastly, generating medical image reports involves the challenge
of long text generation, which requires more expertise and empirical training
in medical knowledge. As a result, the difficulty of generating such reports is
increased. To address these challenges, we propose a disease-oriented retrieval
framework that utilizes similar reports as prior knowledge references. We
design a factual consistency captioning generator to generate more accurate and
factually consistent disease descriptions. Our framework can find most similar
reports for a given disease from the CXR database by retrieving a
disease-oriented mask consisting of the position and morphological
characteristics. By referencing the disease-oriented similar report and the
visual features, the factual consistency model can generate a more accurate
radiology report. | Yuhao Wang | 2023-07-12T05:36:47Z | http://arxiv.org/abs/2307.05921v3 | # Reading Radiology Imaging Like The Radiologist
###### Abstract
Automated radiology report generation aims to generate radiology reports that contain rich, fine-grained descriptions of radiology imaging. Compared with image captioning in the natural image domain, medical images are very similar to each other, with only minor differences in the occurrence of diseases. Given the importance of these minor differences in the radiology report, it is crucial to encourage the model to focus more on the subtle regions of disease occurrence. Secondly, the problem of visual and textual data biases is serious. Not only do normal cases make up the majority of the dataset, but sentences describing areas with pathological changes also constitute only a small part of the paragraph. Lastly, generating medical image reports involves the challenge of long text generation, which requires more expertise and empirical training in medical knowledge. As a result, the difficulty of generating such reports is increased.
To address these challenges, we propose a disease-oriented retrieval framework that utilizes similar reports as prior knowledge references. We design a factual consistency captioning generator to generate more accurate and factually consistent disease descriptions. Our framework can find most similar reports for a given disease from the CXR database by retrieving a disease-oriented mask consisting of the position and morphological characteristics. By referencing the disease-oriented similar report and the visual features, the factual consistency model can generate a more accurate radiology report. Our model mimics the thinking process of a radiologist by utilizing both visual features and past experience with radiology imaging. Experimental results illustrate that our model achieved state-of-the-art performance on two benchmark datasets, including the IU X-Ray and MIMIC-CXR. Furthermore, the ablation study demonstrated the effectiveness of each component we proposed.
Radiology Report Generation, Image Captioning,Transformer, Image Retrieval.
## I Introduction
Radiology Automated radiology report generation aims to generate comprehensive and accurate reports that contain abundant abnormal observations about medical images. Writing such reports manually is time-consuming and difficult, requiring the expertise of an experienced radiologist. Fully automated report generation can assist radiologists in writing imaging reports, improving the accuracy of disease detection, and reducing their workload. Additionally, automated radiology report generation can provide automatic medical image reports to regions with limited access to medical resources, helping to alleviate the shortage of local experts. Unlike traditional healthcare AI tasks like disease classification, radiology reporting requires AI models to possess a higher level of cognitive ability to produce medical image reports with descriptive expressions that resemble human-level cognition.
Medical reports are generated based on the task of image captioning. Considerable progress has been made in research on image captioning, with most frameworks adopting an encoder-decoder architecture, such as a CNN image encoder followed by an RNN decoder for report generation. The Transformer architecture, initially proposed for text modeling and later extended to visual-language tasks, has been widely applied in cross-modal domains due to its effectiveness in modeling sequences and reducing the semantic gap between images and text using stacked encoders and decoders with multi-head self-attention. Recent research in image captioning predominantly adopts the Transformer architecture as the main architecture.
However, radiology report generation differs significantly from image captioning. The main differences can be summarized as follows: 1. Most radiology images are highly similar to each other, with only subtle differences in the areas with pathological changes. Existing models often struggle to attend to the specific regions of these lesions, making it challenging to generate descriptions that focus on subtle pathological areas. 2. Medical image reporting involves the challenge of generating long-form text, which is more difficult, and the supervision signal for describing disease-specific information is often sparse. Radiology report generation also requires more expertise knowledge and training compared to image captioning in the natural image domain. 3.The data bias in radiology report generation is serious, resulting in the problem of shortcut learning. Deep learning models often generate radiology reports that lack significant disease descriptions crucial for clinical diagnosis.
To address the inherent problems in radiology report generation, we propose the RRGnet framework in this paper. It is known that for a medical imaging examination with a disease, only a small part of the corresponding imaging report describes the relevant disease. This situation causes important diagnostic terms to be buried within a large amount of normal statements. Predicting the disease tag of CXR is generally easier and more accurate compared to generating a long description of the symptoms, as it provides a stronger
supervision signal. We leverage interpretable artificial intelligence techniques and class activation maps, widely used in tasks such as weakly supervised object detection or image segmentation, to reflect the location and morphological characteristics of targets. Inspired by works that apply class activation maps to analyze the decision-making process in deep learning models, we propose a disease-oriented mask retrieval module. This module effectively retrieves more accurate reports from the database at the disease perspective. Experimental results and qualitative analysis demonstrate the effectiveness of the disease-oriented mask retrieval module in finding samples with the same disease as the input sample, exhibiting a high degree of consistency in disease location and morphological characteristics.
The search module based on a disease-oriented mask finds the most similar image reports from the database as reference reports, simulating how radiologists refer back to reports they have seen in the past with the same disease. Additionally, we propose a fact-consistent image report generation module based on copying mechanism. This module utilizes vocabulary-level prior information during the generation of medical image reports in the decoder, enhancing the clinical accuracy and fact consistency of the generated reports. The attention concept within the copying mechanism simulates the varying degrees of reliance that radiologists typically have on different reference reports. Furthermore, the model focuses on the parts of the original image that differ from similar textual descriptions, integrating prior knowledge and input images to generate accurate medical image reports.
The main contribution of this paper can be summarized as follows:
* We propose a CAM-based [1] similarity retrieval method that generates disease-oriented masks, which effectively represent the disease morphology and location information. This significantly improves the accuracy of similar report retrieval, enabling the corresponding decoder to attend to a greater extent to disease-specific descriptions during the decoding stage.
* We propose a fact-consistent image report generation module based on a copying mechanism. This module simulates the writing process of radiologists when composing image reports and effectively utilizes the input's relevant diseases. It greatly enhances the clinical accuracy and fact consistency of the generated medical image reports.
* We conducted experiments on two widely used medical image report generation datasets, IU-Xray [2] and MIMIC-CXR [3]. Both qualitative and quantitative experiments were performed to demonstrate the effectiveness of our proposed model.
## 2 Related Work
### Image Captioning
Image captioning, which aims to generate descriptive and meaningful captions for images, has been extensively explored in deep learning. Early approaches to image captioning relied on handcrafted features and language models [4; 5; 6; 7]. However, with the advent of deep learning, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have become the predominant architectures for image captioning [8]. These architectures allow for end-to-end training, enabling the models to learn both visual and textual representations.
Attention mechanisms have been proposed to improve image captioning by allowing the model to focus on relevant image regions during the caption generation process [9]. By attending to specific regions, the model can align the generated words with the corresponding visual content, resulting in more accurate and contextually relevant captions [10; 11; 12]. Furthermore, reinforcement learning techniques have been applied to optimize captioning models by incorporating reward-based feedback [13]. This approach involves training the model to maximize a reward signal, typically based on the quality of the generated captions, which can lead to improved captioning performance through iterative optimization.
### Image Retrieval
Image retrieval [14] is the task of retrieving similar images from a database based on the content of the images. One approach is to fine-tune convolutional neural networks (CNNs) with a ranking loss function [15; 16]. By optimizing the network parameters based on a ranking loss, the model learns to better differentiate between relevant and irrelevant images, resulting in improved retrieval performance. Attention mechanisms have also been incorporated into the image retrieval process [17; 18]. Inspired by human visual perception, attention mechanisms allow the model to focus on relevant parts of the image, improving the ability to capture and match distinctive features for retrieval. Despite these successes, challenges persist in deep learning-based image retrieval. One major challenge is the semantic gap that exists between low-level visual features and high-level semantics. Low-level features extracted from images may not capture the complex semantic meaning, making it difficult to accurately match images based on their content.
### Radiology Report Generation
Radiology report generation is similar to image captioning in many ways, but there are some notable differences. While image captioning typically generates single-sentence captions, radiology reports are paragraphs containing multiple sentences. To address this difference, some methods have adapted hierarchical LSTM models [19; 20; 21; 22] to handle the generation of longer radiology reports. In particular, the work by [19] employed chest X-ray disease classification as an auxiliary task to improve automatic radiology report generation.
Some radiology report generation methods [23; 24; 25; 26] utilize image retrieval to obtain a template report that closely resembles the input image. However, these methods suffer from two major shortcomings. Firstly, due to the similarity of chest X-ray images, it is difficult for these methods to retrieve similar reports for images with similar diseases. Secondly, these methods represent the retrieved similar report as a prior knowledge vector, which limits the model's ability to leverage the rich linguistic properties of the reports.
With the development of attention mechanisms [9], transformer models have emerged as powerful tools for bridging the gap between image and text modalities. Recent works [26; 27; 28; 11; 29] have adopted transformer encoder-decoder architectures for radiology report generation and demonstrated excellent performance. However, most existing methods rely solely on a visual encoder trained jointly with a decoder to extract information from the image, without explicitly leveraging the linguistic properties of similar radiology reports.
### Class Activation Map
The Class Activation Map (CAM) technique was initially proposed to highlight the regions in an image that are most important for a model's prediction [1]. It has primarily been used in image classification tasks [1], [30], where the objective is to identify the main objects in an image. CAM-based methods have been employed in weakly supervised object detection [31; 32; 33] to improve localization capabilities when only image-level annotations are available. Additionally, weakly supervised semantic segmentation works [34; 35; 36] have utilized CAM to generate pseudo segmentation labels and train segmentation models.
The success of CAM-based approaches in weakly supervised object-level tasks highlights their ability to capture position information and morphological characteristics of objects. Inspired by these works, we apply CAM to generate disease-oriented masks and incorporate them into the image retrieval process. By leveraging CAM, we aim to retrieve more accurate and relevant reference reports for similar diseases. To the best of our knowledge, this is the first paper to employ the CAM method to advance the task of radiology report generation.
## 3 Methodology
The proposed method is illustrated in 1. It consists of two stages aimed at improving radiology report generation. In the first stage, we generate a disease-oriented mask using the Class Activation Map (CAM) technique [1]. This is achieved by aggregating the class activation maps corresponding to different disease labels. The aggregated disease representation matrix is then subjected to dimensionality reduction through Singular Value Decomposition (SVD) to obtain the disease-oriented mask. The disease-oriented mask effectively captures various disease information along with their corresponding morphological and location details. This enables precise retrieval of similar diseases during the retrieval process, resulting in higher-quality corresponding image reports as prior knowledge. In the second stage, we incorporate the copy mechanism and propose a fact-consistent image report generation model. The decoder of the model takes advantage of both prior knowledge and input images, allowing for comprehensive consideration of both sources of information during the generation of text tokens. By leveraging prior knowledge and images together, the model generates more clinically meaningful and efficient image reports.
### Phase 1:Disease Oriented Mask Generation
The process of generating a disease-oriented mask simulates the procedure used by radiology experts to analyze diseases. Initially, we extracted disease labels from the corresponding medical imaging reports using Chexbert [37]. These disease labels encompass a range of common diseases such as 'Enlarged Cardiomediastinum,' 'Cardiomegaly,' 'Lung Opacity,' 'Lung Lesion,' 'Edema,' 'Consolidation,' 'Pneumonia,' 'Atelectasis,' 'Pneumothorax,' 'Pleural Effusion,' 'Pleural Other,' 'Fracture,' and 'Support Devices.' These extracted labels encompass a comprehensive set of diseases.
Next, we employed a CNN network with a global average pooling layer to perform multi-label classification for the aforementioned diseases. Additionally, we applied the Class Activation Mapping (CAM) method [1]to obtain a class activation map for each disease category. Subsequently, we aggregated multiple class activation maps at the channel level. To enhance retrieval efficiency, we utilized the Singular Value Decomposition (SVD) method to reduce the dimensionality of the aggregated class activation map. This process resulted in a matrix that characterizes the location and morphology information of various diseases, which we defined as the disease-oriented mask.
For a given image \(I\in\mathcal{R}^{H*W*C}\), the activation of the k unit feature maps of the last convolutional layer at the spatial location (x, y) is represented by \(f_{k}(x,y)\) for a given image. With performing GAP layer, the class activation map of c class can be formulated as:
\[S_{c}=\sum_{k}w_{k}^{c}\sum_{x,y}f_{k}(x,y),S_{c}\in\mathcal{R}^{H*W*1}\]
where \(w_{k}^{c}\) is the weight corresponding to class c for unit k. Subsequently, the disease oriented mask can be obtained by :
\[DOM_{i}=\left[S_{1},S_{2},.....S_{K}\right],DOM_{i}\in\mathcal{R}^{H*W*K}\]
k represents the number of diseases defined in the classification model. In our actual experiments, K represents 14, which is the number of disease labels that can be extracted by CheXbert [37]. To reduce memory usage and improve retrieval efficiency, we compressed and reduced the dimensionality of disease-oriented mask using SVD for storage. Finally, after the phase 1, we added disease-oriented mask for each image-report pair. Therefore, the basic format of the dataset can be expressed as follows: \((I,T,DOM))\) where \(I,T\) denoted the original Image and radiology report, \(DOM\) is corresponding disease-oriented mask.
### Phase 2:Fact consistency based radiology report generation
The module for generating radiology reports based on fact consistency consists of three components: 1. Similarity report retrieval module rely on disease-oriented mask 2. Prior information extraction and representation module 3. Fact-consistent image report generation module based on the copying mechanism.
Figure 1: The model consists of two stages: disease-oriented mask generation based on CAM and fact-consistent medical image report generation based on the copy mechanism. In the first stage, a disease-oriented mask, which can represent rich disease information, is generated for all images using the CAM mechanism. In the second stage, the disease-oriented mask is used instead of the original image for similarity retrieval, in order to obtain medical image reports that are highly similar to the input image in terms of diseases as prior knowledge. Then, a fact-consistent decoder based on the copy mechanism synthesizes the corresponding medical image report based on different attention weights for the image and vocab-level prior knowledge.
Figure 2: The illustration of fact-consistent decoder
#### Iii-A1 Similar Report Retrieval Module
For each input data unit \((I_{i},T_{i},DOM_{i})\), we have its corresponding disease-oriented mask. We calculate the cosine similarity to obtain the similarity scores between the disease-oriented mask and the disease-oriented mask pool. specifically,the knowledge pool is the disease-oriented mask pool: \(DOM_{Pool}=[DOM_{1},DOM_{2},....DOM_{N}]\). We calculate the cosine similarity between the input \(DOM_{i}\) and \(DOM_{Pool}\), and selecting Top k samples' radiology report as reference reports. Generally, radiology report is expressed as \(T=\{s_{1},s_{2},...s_{l}\}\),where \(s_{i}\) denotes the \(i\) th sentence. Meanwhile,the sentences \(s_{i}\) is a long sequence \(\{w_{1},w_{2},...,w_{T}\}\) and \(w_{i}\) is the \(i\) token of the reference report.
Using disease-oriented masks, the retrieved prior knowledge demonstrates a remarkable consistency with the input report in terms of disease localization and morphological information. Qualitative analysis has effectively showcased the efficacy of our method in retrieving reports from patients with the same disease, displaying a strong resemblance to the target report and achieving a notable alignment at the disease level. Additionally, we have observed a high degree of consistency between the position and morphological descriptions of the disease in certain retrieval results, with sentences describing abnormalities being remarkably similar. Based on these findings, we can confidently conclude that the retrieved prior knowledge is highly relevant to the target report. It serves as fact-consistent knowledge, contributing to the enhancement of medical image report quality generated by our model. Typically, after Retrieval, the basic data unit is \((I_{i},T_{i},R_{1},R_{2},...R_{K})\), where \(R_{i}\) denotes the reference report.
#### Iii-A2 Prior information extraction and representation module
Due to the fact that sentences describing abnormalities in medical image reports typically occupy a small portion of the overall report, utilizing all image reports as knowledge input often leads to a significant amount of redundant information. This redundancy can hinder the model's ability to focus on crucial disease-specific details. To address this issue, we utilized Chexbert [37] to analyze every sentence in all of the imaging reports. This approach can effectively identify the sentences that truly describe the diseases in an imaging report, thus obtaining more refined prior knowledge as prior knowledge.
For instance, the report "Lungs are clear. No pleural effusions or pneumothoraces.heart size is upper limits of normal, There are low lung volumes with bronchovascular crowding and scattered opacities in the bilateral lung", after identifying, the prior knowledge are "heart size is upper limits of normal" and "There are low lung volumes with bronchovascular crowding and scattered opacities in the bilateral lung".
By extracting all possible relevant information regarding lesions, we augment the quantity of valuable information derived from the prior knowledge. Subsequently, we encode both the original image and prior knowledge to obtain multimodal representations. The textual representation preserves vocabulary-level information, empowering the decoder to generate higher-quality medical image reports using the copying mechanism. The clinical effectiveness of the generated reports has been validated through the analysis of relevant metrics. Typically, after Retrieval, the basic data unit is \((I_{i},T_{i},R_{1},s_{2},...s_{m})\), where \(s_{i}\) denotes the abnoumal sentences, and \(m\) is the number of sentences.
#### Iii-A3 Fact-consistent image report generation module
Pointer Networks [38]are specifically designed for sequential decision-making tasks, addressing scenarios where the network needs to select elements from an input sequence based on contextual information. Unlike generating discrete tokens, Pointer Networks employ attention mechanisms to directly output indices or positions within the input sequence. The architecture comprises an encoder, attention mechanism, and decoder, enabling it to handle output sequences of variable lengths. Pointer Networks have demonstrated effectiveness in tasks such as routing, combinatorial optimization, and structure parsing. Furthermore, they have been widely adopted in text summarization, as the copying mechanism effectively captures key words in the input text, enhancing the accuracy of summary extraction. Specifically, the copying mechanism posits that the output generated by a model is derived from the input.
In each time-step of the decoder, general seq2seq [9] model produces the vector that influences content-based attention weights corresponding to the input sequence. In the case of the pointer network, these attention weights act as indicators pointing to specific positions in the input sequence. The input time-step with the highest weight is considered the output for that particular decoder time-step. The formulation can be be expressed in III-B3
\[u_{j}^{i}=v^{T}\tanh\left(W_{1}e_{j}+W_{2}d_{i}\right)\quad j\in(1,\ldots,n)\]
\[p\left(C_{i}\mid C_{1},\ldots,C_{i-1},\mathcal{P}\right)=\mathrm{softmax} \left(u^{i}\right)\]
the encoder and decoder hidden states as \((e_{1},...,e_{n})\) and \((d_{1},...,d_{p})\), and \(v^{T},W_{1}\) are all parameters of the model. The output of softmax operation points to the input token having the maximum value.
In the process of generating medical imaging reports, doctors also exhibit a similar implicit thinking process. When reviewing medical images, doctors draw upon their encounters with similar images in the past and reference previous writing styles while composing reports. Moreover, they need to consider the distinctions between existing imaging and reference images. Our model effectively simulates this thinking process. As the decoder produces an imaging report, it can simultaneously consider highly reliable prior knowledge and the original input image. The image report generation module, based on fact consistency, fully leverages vocabulary-level prior information while focusing on the input image, resulting in more precise medical imaging reports. Our specific implementation is as follows:
\[p\left(\mathcal{C}^{\mathcal{P}}\mid\mathcal{P};\theta\right)=\prod_{i=1}^{m \left(\mathcal{P}\right)}p_{\theta}\left(C_{i}\mid C_{1},\ldots,C_{i-1}; \mathcal{K};\mathcal{I};\theta\right)\]
here \(\mathcal{C}^{\mathcal{P}}=\{C_{1},\ldots,C_{m}(\mathcal{P})\}\) is target report, consisting a sequence of text tokens. \(\mathcal{I}=\{I_{1},\ldots,I_{n}\}\) is the the radiographics, are mede up of a sequence of image pathes
tokens. \(\mathcal{K}=\left\{K_{1},\ldots,K_{m}\right\}\) is prior knowledge, which is composed of a sequence of text tokens.
Given a training triplet(\(\left(I_{i},T_{i},K_{i}\right)\),We denote the final output of the decoder as \(z_{1},...,z_{t}\), our model aims at computing the conditional probability \(z_{i}\):
3
The output probability \(\mathbf{Y}\left(z_{i}\right)\)is composed of both the image attention \(\mathbf{Y}_{Gen}\left(z_{i}\right)\)and the prior knowledge attention \(\mathbf{Y}_{Copy}\left(z_{i}\right)\) in the model. Specifically, the model generates predictions by attending to both the input image and prior knowledge.
For the image attention part, we calculate the attention coefficients between the decoder hidden state vector and all the encoder input hidden state vectors. The softmax function normalize the attention coefficients \(u_{j}^{i}\) over all the image patches in the input.
\[\begin{array}{ll}u_{j}^{i}&=v^{T}\tanh\left(W_{1}I_{j}+W_{2}d_{i}\right)\quad j \in\left(1,\ldots,n\right)\\ a_{j}^{i}&=\operatorname{softmax}\left(u_{j}^{i}\right)\\ d_{i}^{\prime}&=\sum_{j=1}^{n}a_{j}^{i}e_{j}\end{array}\]
where \(v^{T},W_{1},W_{2}\) are all learnable parameters, \(I_{j}\) is the element of the input image patch sequences \(\mathcal{I}\). The \(d_{i}^{\prime}\) is the generation vector,\(d_{i}\) is the decoder hidden states. Then, the \(\mathbf{Y}_{Gen}\left(z_{i}\right)\) is obtained by:
\[\mathbf{Y}_{Gen}\left(z_{i}\right)=softmax(Linear\left(d_{i}^{\prime};d_{i} \right))\]
Lastly, \(d_{i}^{\prime}\) and \(d_{i}\) are concatenated and used as the hidden states from which we make predictions and which we feed to the next time step in the recurrent model.
For the part that incorporates prior knowledge, we calculate the similarity coefficients between the decoder hidden state vector and the token embeddings of all the prior knowledge inputs. The attention coefficients indicate the relevance of the prior knowledge to the current decoding step, and are used to compute the probability of outputting each token at the current time step. Specifically, in the last transformer block, attention weights are generated that represent the probabilities of copying the text from each token of prior knowledge. We define a token \(z_{i}\) to be produced if a node \(k_{j}\) is selected, and the text of that token starts with \(z_{i}\).
The computation process for incorporating prior knowledge using attention is as follows:
\[\begin{array}{ll}u_{j}^{i}&=v^{T}\tanh\left(W_{1}K_{j}+W_{2}d_{i}\right) \quad j\in\left(1,\ldots,n\right)\\ a_{j}^{i}&=\operatorname{softmax}\left(u_{j}^{i}\right)\end{array}\]
\[\mathbf{Y}_{Copy}\left(z_{i}\right)=\sum_{\begin{subarray}{c}j\in V\\ J=z_{i}\end{subarray}}a_{j}^{i}\]
In summary, our model for generating image reports considers both the token generation probability from the input image and the token copying probability from the prior knowledge, which helps improve the quality of the generated reports and their clinical relevance. The detail of the Fact-consistent image report generation module is illustated in 2
## 4 Experiments
### Datasets and Tool
#### 4.1.1 IU-Xray
IU X-Ray is a widely recognized benchmark dataset for evaluating medical image report generation models. The dataset comprises over 7470 chest X-rays and 3955 corresponding radiology reports, which have been manually annotated by expert radiologists. The reports typically consist of multiple sentences, outlining the observations, impressions, and recommendations based on the images. We adopt the "Findings" section which provides a detailed paragraph describing the observed evidence as the target sequence.
#### 4.1.2 Mimic-CXR
MIMIC-CXR is a dataset comprised of 64,588 patients collected at the Beth Israel Deaconess Medical Center between 2011 and 2016. The MIMIC-CXR dataset was collected over multiple years and encompasses a large volume of patient data. It provides a rich resource of chest X-ray images and associated radiology reports, enabling extensive research and algorithm development in the field of chest imaging analysis. It includes 77,110 chest X-ray images and 227,835 corresponding free-text radiology reports. To ensure experimental fairness, we followed the experimental setup of previous studies, resulting in a training set of 222,758 samples, and validation and test sets consisting of 1,808 and 3,269 samples.
#### 4.1.3 Chexbert
CheXbert is a method that combines automatic labeling and expert annotations to accurately label radiology reports using BERT. It can annotate 14 common medical observations, including fractures, consolidation, enlarged mediastum, not detected, pleural other, cardiomegaly, pneumothorax, atelectasis, support devices, edema, pleural effusion, lung lesions, and lung opacities. We utilized CheXbert to perform label extraction on IU-Xray and MIMIC-CXR datasets, extracting corresponding disease labels. Since CheXbert was pre-trained on MIMIC-CXR, it provides more accurate disease label extraction for MIMIC-CXR. Therefore, subsequent analysis experiments, such as ablation studies, were mainly conducted based on MIMIC-CXR.
### Implementation Details
In the first stage, we employ ResNet [39] with GAP layer as the network for disease classification, and adopt Class Activation Maps (CAM) [1] as the method for generating disease-oriented masks. The size of each class activation map for each category is 2242241. Furthermore, Chexbert provides a total of 14 disease labels. By aggregating multiple class activation maps along the channel dimension, we obtain a disease-oriented mask with dimensions of 224*224*14. During the generation of disease-oriented masks, we utilize Singular Value Decomposition (SVD) to reduce the dimensionality of the aggregated class activation maps, thereby enhancing the retrieval efficiency and reducing the storage space required for the masks. After compression, the size of each disease-oriented mask is 224*224*3.
In the second stage, we get the disease-oriented mask vectorization and computed the similarity between the disease-oriented mask of the source image and the disease-oriented masks in the mask pool, then selected the top k medical image
reports that strictly cannot correspond to the source image as prior knowledge. k is a hyperparameter, and we define it as 3 in our main experiment. We used all samples in the training set to construct the disease-oriented mask pool.
Subsequently, we extract sentences from the selected reports that indicate the presence of diseases, serving as prior knowledge. Specifically, we use Spacy [40] to tokenize a given reference imaging report into sentences based on punctuation. After dividing the text into individual sentences, we apply Chexbert for annotation analysis. We retain the sentences that contain positive disease labels as reference prior knowledge. We feed both the original images and the prior knowledge into a multimodal input encoder. The number of layers in the multimodal input encoder and text decoder is set to 3, and the number of attention heads is set to 8. All input images are resized to 224*224 pixels and split into _7*7_ patches. We concentrate several abnormal sentences.The max token length of prior knowledge is set 100. The maximum output token length is set to 60. We utilize the Adam optimizer with a learning rate of 1e-4. The training process spans 100 epochs, while maintaining the same parameter settings for both datasets. We use two NVIDIA A40 GPUs to trained our model and set the batch size 32. The experimental settings remain consistent across the IU-Xray and MIMIC-CXR.
### Evaluation Metric
#### 4.3.1 Natural Language Generation Metrics
To evaluate the generated report quality, we adopted BLUE-1,BLUE-2,BLUE-3,BLUE-4 [41],ROUGE-L [42],METEOR [43],CIDR [44] as the NLG metric. The assessment of predictive reports' descriptive accuracy relies on the utilization of NLG metrics. BLEU (Bilingual Evaluation Understudy) was originally designed for machine translation tasks and calculates the overlap of word n-grams between predictions and references, capturing the semantic and fluency aspects of sentences. However, it has limitations in accurately evaluating long sentences. Rouge effectively considers recall and enables effective evaluation of long texts. Meteor combines multiple evaluation metrics, taking into account precision, recall, and edit distance, among other factors. It considers the similarity of synonyms and word order, which allows for better evaluation of semantic variations. CIDER considers lexical consistency and coherence evaluation, providing a comprehensive assessment of the similarity between generated descriptions and reference descriptions. It is widely used in image captioning tasks. The CIDEr value is used to evaluate whether a model can generate more accurate thematic descriptions by assessing the frequency of non-repetitive sentences in the training set.
#### 4.3.2 Clinical Efficacy Metrics
To evaluate the clinical efficacy of the generated report, we utilized Chexbert [37] to extract disease labels from the model-generated report. Precision, recall, and F1-score were employed as metrics to assess the clinical efficacy of our model. It should be noted that Chexbert was pretrained on MIMIC-CXR, and its extraction results may not be sufficiently accurate for IU-Xray. Therefore, we only present the clinical efficacy metrics based on the MIMIC-CXR datasets.
## 5 Results and Discussion
### Comparison with SOTA
#### 5.0.1 Description Accuracy
We compare our methods with a range of previous SOTA radiology report generation methods and image captioning methods. For Image captioning methods, State-of-the-art (SOTA) models in the field of image captioning, such as ADAATT [10],ATT2IN [5],CoAT [19], that utilize encoder-decoder architectures, are included in the comparison.
For previous radiology report generation methods, we compared our methods with R2Gen [27],CMN [28] which employ the memory mechanism to restore the patter information during train process and other knowledge enhanced radiology generation methods like KERP [23],HRGR [45],ARRG [46] Methods like CMCL [47],CA [48] utilize contrastive learning to model the pairing relationship between images and text, enhancing the generation of image captions. As show in I,our methods achieve State-of-the-Art among all metrics comparing with all other methods. This indicates that our model has achieved comprehensive improvement in terms of language fluency and accuracy in generating medical image reports. Specifically, Our model surpasses others in terms of CIDEr [44] and ROUGE-L [42] metrics, while achieving comparable results in BLEU-4 [41] and METEOR metrics. The superior CIDEr values indicate that our model avoids redundant sentences from the training set and produces reports with more precise and relevant topics. The improvement in the CIDER metric indicates that our model has effectively addressed the issue of generating redundant text and to some extent mitigated data bias. Additionally, we prioritize clinical correctness in our approach.
#### 5.0.2 Clinical Efficacy
We evaluate the model by adopting chexbert [37] to extract common abnormalities from the generated radiology reports. Due to ChexBERT [37] was only trained on MIMIC-CXR, we present the clinical effectiveness metrics of MIMIC-CXR to demonstrate the clinical efficacy of our model. Since could not access the code of some methods, we only compare our results with some methods can be reproduced or report the clinical effectiveness. In II,it is evident that our model has made substantial advancements in terms of clinical effectiveness, exhibiting notable improvements in clinical accuracy, precision, and F1 score. These results highlight the remarkable efficacy of our model. Moreover, it demonstrates that incorporating prior abnormal knowledge as input effectively aids the model in focusing on abnormal information when generating corresponding medical imaging reports. This approach leads to the production of more reliable medical imaging reports.
### Ablation Study
#### 5.0.1 Effectiveness of every component
To assess the effectiveness of each module, we conducted ablation experiments specifically targeting those modules. In order to contrast with the general approach of using image encoding vectors for retrieval, we trained the network's backbone using a commonly used image-based autoencoder for image reconstruction. Subsequently, we extracted image encoding vectors through the encoder for retrieval purposes.
In Table 3, the term "with general" denotes the traditional retrieval approach. Furthermore, we incorporated the anomalous sentences retrieved by this method as prior knowledge embeddings into the model. The metrics reveal no significant degradation in language generation quality, but there is a noticeable decline in clinical effectiveness. After applying the general retrieval method in the model, there was a significant decrease of 5% in the F1 score. This suggests that our disease-oriented approach effectively enhances the quality of retrieved similar reports, thereby improving the model's perception of diseases.
"W/o Retrieval" refers to the model structure after removing the entire retrieval branch, causing the model to become a general seq2seq model that generates image reports solely through a transformer encoder-decoder. The model experienced a significant decrease in both language quality and clinical effectiveness. Specifically, language quality metrics such as BL-3, RG-L, and MTOR decreased by 1.3%, 2.9%, and 3.0%, respectively. The F1 score showed a substantial decline of 7.2%. These results indicate that embedding prior knowledge can effectively assist the model in generating higher quality medical image reports. Further exploration is warranted in the realm of more efficient knowledge integration.
"W/o FC mechanism" indicates the absence of a fact-consistent decoder based on the copying mechanism in the decoder. Similarly, we observed that the generated medical image reports did not exhibit significant degradation in language quality, but there was a severe decline in clinical effectiveness. Surprisingly, multiple language generation evaluation metrics of the model exhibited improvements, but there was a severe decline in clinical effectiveness indicators. Specifically, the language generation quality metrics, BL-3, RG-L, and MTOR, improved from 1.151 to 0.165, from 0.281 to 0.293, and from 0.145 to 0.152, respectively. However, the F1 score experienced a decline from 0.315 to 0.275. The increase in language generation metrics may be attributed to the fact that when the model removes the copying mechanism, it tends to generate more normal descriptions. As normal descriptions constitute a large portion of medical image reports, it becomes a "shortcut" for improving language generation metrics. However, the decline in clinical effectiveness indicators demonstrates that although language generation metrics have improved, the actual quality of generated medical image reports has decreased. The introduced copying mechanism effectively utilizes prior input knowledge at the vocabulary level during the generation of medical image reports, leading the model to generate descriptions similar to the prior knowledge. This ultimately improves the clinical effectiveness of the model.
Our ablation experiments on multiple branches validate the
effectiveness of the disease-oriented masked similar report retrieval and the fact-consistent decoder based on the copying mechanism proposed in our model.
#### 5.2.2 The amount of reference reports
We also conducted an ablation experiment to investigate the effectiveness of using different numbers of reference image reports in generating medical image reports. We maintained the same hyperparameter settings, and the specific experimental results are shown in IV. It was observed that when three reference image reports were used as input, the model achieved the best performance. This phenomenon suggests that in the generation of image reports, an excessive number of reference reports can introduce redundant information, causing the model to overlook important information in the prior knowledge. On the other hand, using too few reference reports can result in insufficient experiential knowledge for the model, leading to performance degradation.
### Qualitative Results:
The 3 presents a quantitative analysis of our model on the MIMIC dataset, where we provide a comparison between the reports generated by our model and the similar image reports retrieved through different methods. We have highlighted the abnormal sentences in different colors, while the reports describe normal conditions. We compare the reports obtained using the disease-oriented mask retrieval approach with those obtained using the conventional image retrieval method, selecting three reference reports as benchmarks. Through this comparison, we have found that the disease-oriented mask retrieval approach yields more accurate reference reports compared to the traditional image reconstruction-based image retrieval method. We observe that many disease-related abnormal description sentences in the Ground Truth exhibit a high degree of similarity with the abnormal description sentences in the retrieved reports, particularly in terms of the location information and morphological features of the diseases, indicating a significant overlap. By comparing the retrieved reports, the reports generated by our model, and the Ground Truth, we find that our proposed fact-consistent decoder based on the copy mechanism effectively incorporates relevant information from the reference similar reports. This enables the model to produce descriptive statements for important disease descriptions by leveraging the copied content, thereby enhancing the model's perception of diseases and clinical effectiveness in generating medical image reports.
## VI Conclusion
We propose a method based on disease-oriented mask retrieval, knowledge embedding, and fact-consistent image report generation. Our work encompasses two main innovations. Firstly, we are the first to generate disease-oriented masks for each sample by utilizing a classification model to generate multi-class class activation maps and then aggregating and reducing them. These disease-oriented masks possess powerful disease representation capabilities, encompassing rich disease categories, morphology, and positional information. Extensive experimental results demonstrate that disease-oriented masks can replace original images for retrieval, as the retrieved medical image reports often exhibit high similarity to the target reports. This provides the model with strong prior knowledge, significantly enhancing its clinical effectiveness. Secondly, we propose a fact-consistent decoder based on the copy mechanism. This decoder can effectively leverage vocab-level prior information while also considering the input image information, enabling comprehensive output generation. Extensive experiments demonstrate that our model achieves remarkable performance improvements in terms of language generation quality and clinical effectiveness on two large benchmark datasets, IU-Xray and MIMIC-CXR. Moreover, the disease-oriented masks we propose can be further paired with medical image reports due to their stronger semantic representation capabilities.
The fact-consistent decoder based on the copy mechanism, proposed in our work, can be effectively applied to the domain of medical image report generation, which requires highly specialized training. Its powerful copying ability simulates the process of radiologists writing image reports, and qualitative analysis indicates that the decoder can replicate highly credible prior knowledge, thus enhancing the clinical effectiveness of our proposed model.
For future work, as our approach heavily relies on similar text reports as knowledge, which may have relatively narrow expertise, and lacks broad domain knowledge as credibility constraints, we look forward to incorporating medical textbooks like PUMed to improve the model's understanding of knowledge. Additionally, using widely accepted textbook language can provide factual constraints for image report generation models, expanding their applicability.
|
2302.11152 | Multi-Message Shuffled Privacy in Federated Learning | We study differentially private distributed optimization under communication
constraints. A server using SGD for optimization aggregates the client-side
local gradients for model updates using distributed mean estimation (DME). We
develop a communication-efficient private DME, using the recently developed
multi-message shuffled (MMS) privacy framework. We analyze our proposed DME
scheme to show that it achieves the order-optimal
privacy-communication-performance tradeoff resolving an open question in [1],
whether the shuffled models can improve the tradeoff obtained in Secure
Aggregation. This also resolves an open question on the optimal trade-off for
private vector sum in the MMS model. We achieve it through a novel privacy
mechanism that non-uniformly allocates privacy at different resolutions of the
local gradient vectors. These results are directly applied to give guarantees
on private distributed learning algorithms using this for private gradient
aggregation iteratively. We also numerically evaluate the private DME
algorithms. | Antonious M. Girgis, Suhas Diggavi | 2023-02-22T05:23:52Z | http://arxiv.org/abs/2302.11152v1 | # Multi-Message Shuffled Privacy in Federated Learning
###### Abstract
We study differentially private distributed optimization under communication constraints. A server using SGD for optimization, aggregates the client-side local gradients for model updates using distributed mean estimation (DME). We develop a communication efficient private DME, using the recently developed multi-message shuffled (MMS) privacy framework. We analyze our proposed DME scheme to show that it achieves the order-optimal privacy-communication-performance tradeoff resolving an open question in [1], whether the shuffled models can improve the tradeoff obtained in Secure Aggregation. This also resolves an open question on optimal trade-off for private vector sum in the MMS model. We achieve it through a novel privacy mechanism that non-uniformly allocates privacy at different resolutions of the local gradient vectors. These results are directly applied to give guarantees on private distributed learning algorithms using this for private gradient aggregation iteratively. We also numerically evaluate the private DME algorithms.
## I Introduction
In federated learning (FL) distributed nodes collaborate to build learning models, mediated by a server1. In particular, they collaboratively build a learning model by solving an empirical risk minimization (ERM) problem (see (5) in Section II). Even though local data is not directly shared, such a collaborative interaction _does not_ provide any privacy guarantee. Therefore, the objective is to solve (5) while enabling strong privacy guarantees on local data from the server, but with good learning performance, _i.e.,_ a suitable privacy-learning performance operating point. Differential Privacy (DP) [2], is the accepted theoretical framework for formal privacy guarantees. Though DP was proposed for central data storage, the appropriate framework for privacy with distributed (local) data is local differential privacy (LDP) [3, 4], where even the mediating server is not trusted for privacy. Another important aspect is that communication in FL occurs in bandwidth limited (wireless) links, this communication bottleneck can be significant in modern large-scale machine learning. The overall goal of this paper is to develop (both theory and algorithms) for the _fundamental_ privacy-communication-performance trade-off to solve the ERM in (5) for FL.
Footnote 1: This is because no client has access to enough data to build rich learning models locally and we do not want to directly share local data.
**Private distributed mean estimation (DME) and optimization:** At the core of solving the ERM in (5) through (stochastic) gradient descent (SGD) is to aggregate the local gradients, which is equivalent to finding the (distributed) mean of the users' gradients. Therefore, the central problem is to study the privacy-communication-performance trade-off for DME. Since there are repeated interactions via iterations of SGD, each exchange leaks information about the local data, but we need as many steps as possible to obtain a good model; setting up the tension between privacy and performance. The objective is to obtain as many such interactions as possible for a given privacy budget. This is quantified through analyzing the privacy of the composition of privacy mechanisms as a function of the number of iterations, and such tight analyses have been developed for composition in [5, 6]. We use compositional bounds from [7, 8] in conjunction with our new private DME mechanisms to obtain the privacy-communication-performance trade-off for solving (5) (see Theorem 13).
**Privacy frameworks:** A strong privacy guarantee includes an untrustworthy server, and to guarantee this, in LDP each client randomizes its interactions with the server from whom the data is to be kept private (_e.g.,_ see implementations [9, 10]). The fundamental privacy-communication-performance trade-offs of LDP mechanisms for private DME have been recently studied [11, 12]. We study a new approach to the privacy-communication-performance trade-off (see Theorems 2, 4 which are also order optimal, and we adapt it for other privacy frameworks below.
LDP mechanisms suffer from poor performance in comparison with the central DP mechanisms [3, 13]. In order to overcome this, two privacy frameworks have been advocated, which enable significantly better privacy-performance trade-offs by amplifying privacy: (i) _Secure Aggregation (SecAgg)_: This is a secure sum protocol [14] which only allows the server to see the sum of vectors, and not individual ones. (ii) _Shuffled model_: Each user sends her private message to a secure shuffler that randomly permutes all the received messages before forwarding them to the server [15, 16]. The extension to this is the _multi-message shuffled (MMS)_ model, where there are multiple parallel shuffled models as above. In [17, 18] it has been shown that one can get significantly better trade-offs with such multi-message shuffled (MMS) models. In this paper we focus on such multi-message shuffled (MMS) privacy models.
**Contributions:** Motivated by these discussions, we make the following contributions.
* In [1], a (order-wise) fundamental trade-off for privacy-communication-performance was established for DME for the SecAgg privacy framework, and an open question was posed on this trade-off for the shuffled models. In this paper we resolve this question through a fundamental privacy-communication-performance trade-off for DME in the (multi-message) shuffled (MMS) models, for _all_ regimes; we believe ours is the first scheme to achieve the complete optimal trade-off (see Theorems 3, 5) which matches lower bound (see Theorem 6). Furthermore, we show that our MMS requires less amount of communication per client than used in the SecAgg to achieve the same order of MSE (See Remark 3).
* In [17, 18], it was shown that for computing _scalar sum_ in _multi-message shuffled_ (MMS) models can fundamentally achieve trade-off points that single-message shuffled models cannot. The optimal trade-off for computing _vector sum_ is an open question, and the only known result [19] has communication _per-user_ growing as \(\mathcal{O}(d\sqrt{n})\), where \(n\) is number of users and \(d\) is the vector dimension. In this paper we establish the fundamental privacy-communication-performance trade-off for computing _vector sum_ in the multi-message shuffled model (see Theorems 3, 5) for all trade-off regimes, which order-wise is better than the results in [19]. In doing so, we also resolve this trade-off for all regimes in the scalar case (see Remark 1).
* Our scheme when applied to LDP, also achieves the optimal trade-off for this privacy framework (see Theorems 2, 4), similar to [11, 12] and (order-wise) better performance than [20] when applied to LDP. (see Remark 4). Since the idea of [20] was used as a primitive in [1], we can plug in our method to potentially improve the trade-off in their scheme.
* We use the results for optimal private DME to analyze privacy-convergence trade-offs of the DP-SGD algorithm (similar to algorithms in [12, 19] in Theorem 13.
* In Section VI, we evaluate the performance of our proposed algorithms for scalar and vector private DME.
The core technical idea that enables these results is the following. Suppose each client \(i\) holds a real vector \(\mathbf{x}_{i}\), and we want to privately compute the sum \(\sum_{i}\mathbf{x}_{i}\). First we devise a co-ordinate sampling mechanism related to the target communication desired, independently for each client; then we compute the private scalar sum \(\sum_{i\in\mathcal{A}_{k}}\mathbf{x}_{i}[k]\), where \(\mathbf{x}_{i}[k]\) is the \(k\)-th co-ordinate, and \(\mathcal{A}_{k}\) is the set of clients that sampled the \(k\)-th co-ordinate. We can express \(\mathbf{x}_{i}[k]=0.\mathbf{b}_{i}^{(1)}\mathbf{b}_{i}^{(2)}\ldots,\mathbf{b} _{i}^{(m)}\ldots\) in binary form2, where \(\mathbf{b}_{i}^{(j)}\in\{0,1\}\). For privacy, we randomize each bit through a binary randomized response [21], but we randomize each bit with a different privacy budget, so that we meet an overall privacy budget. This careful choice of such non-uniform randomization is key to our method. Moreover, for communication
constraints we represent it with finite \(m\) bits (see more details in Section IV). We can either use this overall randomization as is, for LDP, or send each bit through a separate shuffler for multi-message shuffling (MMS). Then by carefully accounting for the composition using RDP, we obtain our privacy guarantees and performance (see Lemmas 3, 4). This simple mechanism yields explicit bounds for the complete trade-off and forms the core of our solution.
### _Related Work_
We give the most relevant work related to the paper and review some of their connections to our work.
Private DME:In [11, 12] the privacy-communication-performance tradeoff were studied both through schemes as well as lower bounds for the local DP model. [11] established the order optimal private DME under local DP model for bounded \(\ell_{2}\)-norm vectors. [12] established order optimal private DME for local DP for bounded \(\ell_{\infty}\)-norm and separately for bounded \(\ell_{2}\)-norm vectors. It also extended its use in the single-shuffled model and private optimization framework (see below). In [22, 23], a family of communication-efficient mechanisms is proposed under LDP constraints in federated learning.
In the multi-message shuffled (MMS) model, the private _scalar_ DME was studied in [17, 18], where order optimal strategies were established. The private vector DME has received less attention, with the exception of [19]. Our private vector DME result in Theorem 5 improves the privacy-communication-performance order-wise over it. In [1, 20], the privacy-communication-performance trade-off in the SecAgg privacy model was studied. In particular, using ideas from compressive sensing, [1] established an order-optimal private DME for SecAgg.
Private optimization in the shuffled model:There has been a lot of work on private optimization in the local model, see [12, 24] and references therein. We will focus on private optimization in the shuffled model, where there is relatively less work. Recently [25] and [12, 26] have proposed DP-SGD algorithms for federated learning, in the shuffled model, where at each iteration, each client applies an LDP mechanism on the gradients. [27] studied a private optimization framework using RDP and additionally evaluated subsampling (of clients) in the shuffled model. The approach in [25] was to send full-prevision gradients without compression, but [12, 27] did use compression for the gradients. These methods achived certain optimal privacy-communication-performance operating points, but not in all regimes. The use of RDP for establishing compositional bounds for interactive optimization was studied in [7, 8], which is used in establishing the privacy bounds for iterative stochastic optimization. All these were for the single-shuffle model. For the multi-message shuffled (MMS) model, private optimization was studied in [19], which at its core used a private vector DME with MMS. As explained earlier, our private vector DME is orderwise better than this scheme, and if we plug our scheme into the standard convergence analyses for optimization, we obtain better results as also given in Appendix F.
**Paper organization:** We formulate the problem, establish notation and some preliminary results in Section II. We present an overview of the algorithms and the main theoretical results in Section IV. The technical proof ideas are outlined in Section V. Some numerical results are presented in Section VI. The proof details are given in the appendices.
## II Preliminaries
We give privacy definitions in Section II-A and the binary randomized response in Section II-B.
### _Privacy Definitions_
In this section, we define different privacy notions that we will use in this paper: local differential privacy (LDP), central different privacy (DP), and Renyi differential privacy (RDP). We also give standard results on privacy composition as well as conversion between privacy notions.
**Definition 1** (Local Differential Privacy - LDP [3]).: For \(\epsilon_{0}\geq 0\), a randomized mechanism \(\mathcal{R}:\mathcal{X}\rightarrow\mathcal{Y}\) is said to be \(\epsilon_{0}\)-local differentially private (in short, \(\epsilon_{0}\)-LDP), if for every pair of inputs \(d,d^{\prime}\in\mathcal{X}\), we have
\[\Pr[\mathcal{R}(d)\in\mathcal{S}]\leq e^{\epsilon_{0}}\Pr[\mathcal{R}(d^{ \prime})\in\mathcal{S}],\qquad\forall\mathcal{S}\subset\mathcal{Y}. \tag{1}\]
Let \(\mathcal{D}=\{d_{1},\ldots,d_{n}\}\) denote a dataset comprising \(n\) points from \(\mathcal{X}\). We say that two datasets \(\mathcal{D}=\{d_{1},\ldots,d_{n}\}\) and \(\mathcal{D}^{\prime}=\{d^{\prime}_{1},\ldots,d^{\prime}_{n}\}\) are neighboring (and denoted by \(\mathcal{D}\sim\mathcal{D}^{\prime}\)) if they differ in one data point, i.e., there exists an \(i\in[n]\) such that \(d_{i}\neq d^{\prime}_{i}\) and for every \(j\in[n],j\neq i\), we have \(d_{j}=d^{\prime}_{j}\).
**Definition 2** (Central Differential Privacy - DP [2, 28]).: For \(\epsilon,\delta\geq 0\), a randomized mechanism \(\mathcal{M}:\mathcal{X}^{n}\rightarrow\mathcal{Y}\) is said to be \((\epsilon,\delta)\)-differentially private (in short, \((\epsilon,\delta)\)-DP), if for all neighboring datasets \(\mathcal{D}\sim\mathcal{D}^{\prime}\in\mathcal{X}^{n}\) and every subset \(\mathcal{S}\subseteq\mathcal{Y}\), we have
\[\Pr\left[\mathcal{M}(\mathcal{D})\in\mathcal{S}\right]\leq e^{\epsilon_{0}} \Pr\left[\mathcal{M}(\mathcal{D}^{\prime})\in\mathcal{S}\right]+\delta. \tag{2}\]
**Definition 3** (\((\alpha,\epsilon(\alpha))\)-RDP (Renyi Differential Privacy) [6]).: A randomized mechanism \(\mathcal{M}:\mathcal{X}^{n}\rightarrow\mathcal{Y}\) is said to have \(\epsilon(\alpha)\)-Renyi differential privacy of order \(\alpha\in(1,\infty)\) (in short, \((\alpha,\epsilon(\alpha))\)-RDP), if for any neighboring datasets \(\mathcal{D}\sim\mathcal{D}^{\prime}\in\mathcal{X}^{n}\), we have that \(D_{\alpha}(\mathcal{M}(\mathcal{D})||\mathcal{M}(\mathcal{D}^{\prime}))\leq \epsilon(\alpha)\), where \(D_{\alpha}(P||Q)\) denotes the Renyi divergence between two distributions \(P\) and \(Q\) defined by:
\[D_{\alpha}(P||Q)=\frac{1}{\alpha-1}\log\left(\mathbb{E}_{\theta\sim Q}\left[ \left(\frac{P(\theta)}{Q(\theta)}\right)^{\alpha}\right]\right), \tag{3}\]
The RDP provides a tight privacy accounting of adaptively composed mechanisms. The following result states that if we adaptively compose two RDP mechanisms with the same order, their privacy parameters add up in the resulting mechanism.
**Lemma 1** (Adaptive composition of RDP [6]).: _For any \(\alpha>1\), let \(\mathcal{M}_{1}:\mathcal{X}\rightarrow\mathcal{Y}_{1}\) be a \((\alpha,\epsilon_{1}(\alpha))\)-RDP mechanism and \(\mathcal{M}_{2}:\mathcal{Y}_{1}\times\mathcal{X}\rightarrow\mathcal{Y}\) be a \((\alpha,\epsilon_{2}(\alpha))\)-RDP mechanism. Then, the mechanism defined by \((\mathcal{M}_{1},\mathcal{M}_{2})\) satisfies \((\alpha,\epsilon_{1}(\alpha)+\epsilon_{2}(\alpha))\)-RDP._
We use the following result for converting the RDP guarantees of a mechanism to its DP guarantees.
**Lemma 2** (From RDP to DP [29, 30]).: _Suppose for any \(\alpha>1\), a mechanism \(\mathcal{M}\) is \((\alpha,\epsilon\left(\alpha\right))\)-RDP. For any \(\delta>0\), the mechanism \(\mathcal{M}\) is \((\epsilon_{\delta},\delta)\)-DP, where \(\epsilon_{\delta}\) is given by:_
\[\epsilon_{\delta}=\min_{\alpha}\epsilon\left(\alpha\right)+\frac{\log\left(1 /\delta\right)}{\alpha-1}+\log\left(1-1/\alpha\right)\]
### _Binary Randomized Response (2RR)_
The binary randomized response (2RR) is one of the most popular private mechanism that first proposed in [21]. We present an unbiased version of the 2RR mechanism in Algorithm 1 whose input is a bit \(b\in\{0,1\}\) and the output can take one of two values \(\{\frac{-p}{1-2p},\frac{1-p}{1-2p}\}\), where \(p\) controls privacy-accuracy trade-offs. Furthermore, we present the mean square error (MSE) of the 2RR in the following Theorem.
**Theorem 1**.: _For any \(p\in[0,1/2)\), the 2RR is \(\epsilon_{0}\)-LDP, where \(\epsilon_{0}=\log\left(\frac{1-p}{p}\right)\). The output \(y\) of the 2RR mechanism is an unbiased estimate of \(b\) with bounded MSE:_
\[\mathsf{MSE}^{\text{2RR}}=\sup_{b\in\{0,1\}}\mathbb{E}\left[\|b-y\|_{2}^{2} \right]=\frac{p(1-p)}{(1-2p)^{2}}. \tag{4}\]
For completeness, we present the proof of Theorem 1 in Appendix A.
```
1:Public parameter:\(p\)
2:Input:\(b\in\{0,1\}\).
3:Sample \(\gamma\leftarrow\text{Ber}\left(p\right)\)
4:if\(\gamma==0\)then
5:\(y=\frac{b-p}{1-2p}\)
6:else
7:\(y=\frac{1-b-p}{1-2p}\)
8:endif
9:Return: The client sends \(y\).
```
**Algorithm 1** Local Randomized \(\mathcal{R}_{p}^{2\text{RR}}\)
## III Problem formulation
We consider a distributed private learning setup comprising a set of \(N\) clients, where the \(i\)th client has a data set \(\mathcal{V}_{i}\) for \(i\in[N]\). Let \(\mathcal{D}=(\mathcal{V}_{1},\ldots,\mathcal{V}_{N})\) denote the entire training dataset, with \(\mathcal{V}_{i}\) held locally by user \(i\). The clients are connected to an untrusted server in order to solve the following empirical risk minimization (ERM) problem
\[\min_{\theta\in\mathcal{C}}\Big{(}F(\theta,\mathcal{D}):=\frac{1}{N}\sum_{i=1}^ {N}\sum_{\mathbf{v}\in\mathcal{V}_{i}}f(\theta,\mathbf{v})\Big{)}, \tag{5}\]
where \(\mathcal{C}\subset\mathbb{R}^{d}\) is a closed convex set, \(\mathbf{v}\in\mathcal{V}\), and \(f:\mathcal{C}\times\mathcal{V}\rightarrow\mathbb{R}\), is the loss function. Our goal is to construct a global learning model \(\theta\) via stochastic gradient descent (SGD) while preserving privacy of individual data points in the training dataset \(\mathcal{D}\) by providing strong DP guarantees. SGD can be written as
\[\theta_{t+1}\leftarrow\theta_{t}-\eta_{t}\frac{1}{n}\sum_{i\in\mathcal{I}} \mathcal{R}(\nabla f_{i}(\theta_{t})),\]
where \(\mathcal{R}\) is the local randomization mechanism and \(\mathcal{I}\) are the indices of the clients participating in that round of SGD, with \(n=|\mathcal{I}|\). Therefore, at each iteration the server does distributed mean estimation (DME) of the gradients \(\frac{1}{n}\sum_{i\in\mathcal{I}}\mathcal{R}(\nabla f_{i}(\theta_{t}))\), and we want it to be done privately and communication-efficiently. To isolate this problem we define DME under privacy and communication constraints. Suppose we have a set of \(n\) clients. Each client has has a \(d\) dimensional vector \(\mathbf{x}_{i}\in\mathcal{X}\) for \(i\in[n]\), where \(\mathcal{X}\subset\mathbb{R}^{d}\) denotes a bounded subset of all possible inputs. For example, \(\mathcal{X}\triangleq\mathbb{R}_{2}^{d}(r_{2})\) denotes the \(d\) dimensional ball with radius \(r_{2}\), i.e., each vector \(\mathbf{x}_{i}\) satisfies \(\|\mathbf{x}_{i}\|_{2}\leq r_{2}\) for \(i\in[n]\). Furthermore, each client has a communication budget of \(b\)-bits. The clients are connected to an (untrusted) server that wants to estimate \(\overline{\mathbf{x}}=\sum_{i=1}^{n}\mathbf{x}_{i}\).
**Privacy frameworks:** We assume an untrusted server, under two different privacy models: (i) Local DP (LDP) model (ii) Multi-message shuffled (MMS) model.
**LDP-model**: We design two mechanisms: (i) client-side mechanism \(\mathcal{R}:\mathcal{X}\rightarrow\mathcal{Y}\) and (ii) Server aggregator \(\mathcal{A}:\mathcal{Y}^{n}\rightarrow\mathbb{R}^{d}\). The local mechanism \(\mathcal{R}\) takes an input \(\mathbf{x}_{i}\in\mathcal{X}\) and generates a randomized output \(\mathbf{y}_{i}\in\mathcal{Y}\). The local mechanism \(\mathcal{R}\) satisfies privacy and communication constraints as follows. The output \(\mathbf{y}_{i}=\mathcal{R}\left(\mathbf{x}_{i}\right)\) can be represented using only \(b\)-bits. The mechanism \(\mathcal{R}\) satisfies \(\epsilon_{0}\)-LDP (see Definition 1). Each client sends the output \(\mathbf{y}_{i}\) directly to the server, which applies the aggregator \(\mathcal{A}\) to estimate the mean \(\hat{\mathbf{x}}=\mathcal{A}\left(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\right)\) such that the estimated mean \(\hat{\mathbf{x}}\) is an unbiased estimate of the true mean \(\overline{\mathbf{x}}\).
**MMS-model**: The _single_ shuffle model is similar to the local DP model but with a secure shuffler (permutation) which anonymizes the clients to the server; shuffling can amplify the privacy of the algorithm. Precisely, the shuffle model consists of three parameters \((\mathcal{R},\mathcal{S},\mathcal{A})\): (i) _Encode:_ a set of local mechanisms \(\mathcal{R}^{(k)}:\mathcal{X}\rightarrow\mathcal{Y},k=1,\ldots,m\) each similar to the local DP model. Each client sends the
\(m\) outputs \(\mathbf{y}_{i}^{(k)},k=1,\ldots,m\), where \(\mathbf{y}_{i}^{(k)}\in\mathcal{Y}\), to the secure shufflers. (ii) _Multi-message Shuffle_: a single secure shuffler \(\mathcal{S}_{k}:\mathcal{Y}^{n}\rightarrow\mathcal{Y}^{n}\) receives \(n\) outputs \(\mathbf{y}_{i}^{(k)},i=1,\ldots,n\) after applying the local mechanism \(\mathcal{R}^{(k)}\) on each input \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) and generates a random permutation \(\pi^{(k)}\) of the received messages. The multi-message shuffle is a parallel set of \(m\) single-message shufflers \(\{\mathcal{S}_{k}\}\). (iii) _Analyze_: the server receives the \(m\) shufflers' outputs and applies the aggregator \(\mathcal{A}:\mathcal{Y}^{nm}\rightarrow\mathbb{R}^{d}\) to estimate the mean \(\hat{\mathbf{x}}=\mathcal{A}\left(\mathbf{y}_{\pi^{(k)}(1)},\ldots,\mathbf{y} _{\pi^{(k)}(n)},k=1,\ldots,m\right)\). We say that the shuffled model is \((\epsilon,\delta)\)-DP if the view of the output of the multi-message shuffler \(\left(\mathbf{y}_{\pi^{(k)}(1)},\ldots,\mathbf{y}_{\pi^{(k)}(n)},k=1,\ldots,m\right)\) satisfies \((\epsilon,\delta)\)-DP.
In the two privacy models, the performance of the estimator \(\hat{\mathbf{x}}\) is measured by the expected loss:
\[\mathsf{MSE}=\sup_{\{\mathbf{x}_{i}\in\mathcal{X}\}}\mathbb{E}\left[\|\hat{ \mathbf{x}}-\overline{\mathbf{x}}\|_{2}^{2}\right], \tag{6}\]
where the expectation is taken over the randomness of the private mechanisms. Hence, our goal is to design communication-efficient and private schemes to generate an unbiased estimate of the true mean \(\overline{x}\) while minimizing the expected loss (6). We study the DME for bounded \(\ell_{\infty}\)-norm _i.e.,_\(\|\mathbf{x}_{i}\|_{\infty}\leq r_{\infty}\) for all \(i\in[n]\) and for bounded \(\ell_{2}\)-norm vectors where \(\|\mathbf{x}_{i}\|_{2}\leq r_{2}\).
## IV Overview and main theoretical results
In this section we give an overview of our algorithmic solution for private DME and the theoretical guarantees for two important cases of boundedness constraints on the individual vectors. We consider the private DME of bounded \(\ell_{\infty}\)-norm vectors in Section IV-A and that for bounded \(\ell_{2}\)-norm vectors in Section IV-B. We will use these results to provide the guarantees for solving the trade-off for the ERM problem of (5) in the Appendix F (Theorem 13).
### _Bounded \(\ell_{\infty}\)-norm vectors_
We consider privately computing \(\sum_{i=1}^{n}\mathbf{x}_{i}\) where \(i\)th client has a vector \(\mathbf{x}_{i}\) such that \(\|\mathbf{x}_{i}\|_{\infty}\leq r_{\infty}\) for \(i\in[n]\). For ease of operation, we will scale each vector such that each coordinate becomes bounded in range \([0,1]\), and then reverse it at the end. That is, each client scales her vector \(\mathbf{x}_{i}\) as follows: \(\mathbf{z}_{i}=\frac{\mathbf{x}_{i}+r_{\infty}}{2r_{\infty}}\), where the operations are done coordinate-wise. Thus, we have that \(\mathbf{z}_{i}[j]\in[0,1]\) for all \(j\in[d]\) and \(i\in[n]\), where \(\mathbf{z}_{i}[j]\) denotes the \(j\)th coordinate of the vector \(\mathbf{z}_{i}\). Observe that the vector \(\mathbf{z}_{i}\) can be decomposed into a weighted summation of binary vectors as follows:
\[\mathbf{z}_{i}=\sum_{k=1}^{\infty}\mathbf{b}_{i}^{(k)}2^{-k}, \tag{7}\]
where \(\mathbf{b}_{i}^{(k)}\in\{0,1\}^{d}\) for all \(k\geq 1\). Each client can recursively construct \(\mathbf{b}_{i}^{(k)}\) as follows. Let \(\mathbf{z}_{i}^{(0)}=\mathbf{0}\) and \(\mathbf{z}_{i}^{(k)}=\sum_{l=1}^{k}\mathbf{b}_{i}^{(l)}2^{-l}\). Hence, \(\mathbf{b}_{i}^{(k)}=\lfloor 2^{k}\left(\mathbf{z}_{i}-\mathbf{z}_{i}^{(k-1)} \right)\rfloor\) for \(k\geq 1\).
To make our mechanism communication efficient, each client approximates the vector \(\mathbf{z}_{i}\) by using the first \(m\) binary vectors \(\{\mathbf{b}_{i}^{(k)}:1\leq k\leq m\}\). Note that the first \(m\) binary vectors together give an approximation to the real vector \(\mathbf{z}_{i}\) with error \(\|\mathbf{z}_{i}-\mathbf{z}_{i}^{(m)}\|_{2}^{2}\leq d/4^{m}\), where \(\mathbf{z}_{i}^{(m)}=\sum_{k=1}^{m}\mathbf{b}_{i}^{(k)}2^{-k}\). However, this mechanism creates a biased estimate of \(\mathbf{z}_{i}\). Hence, to design an unbiased mechanism, the client approximates the vector \(\mathbf{z}_{i}\) using the first \(m-1\) binary vectors \(\{\mathbf{b}_{i}^{(k)}:1\leq k\leq m-1\}\) of the binary representation above and the last binary vector (\(\mathbf{u}_{i}\)) is reserved for unbiasness as follows:
\[\mathbf{u}_{i}[j]=\mathsf{Bern}\left(2^{m-1}(\mathbf{z}_{i}[j]-\mathbf{z}_{i }^{(m-1)}[j])\right), \tag{8}\]
where \(\mathbf{z}_{i}^{(m-1)}=\sum_{k=1}^{m-1}\mathbf{b}_{i}^{(k)}2^{-k}\) and \(\mathsf{Bern}(p)\) denotes Bernoulli random variable with bias \(p\). Note that when each client sends the \(m\) binary vectors \(\{\mathbf{b}_{i}^{(k)}:1\leq k\leq m-1\}\bigcup\{\mathbf{u}_{i}\}\), the server can generates an unbiased estimate to the mean \(\overline{z}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{z}_{i}\) with error \(\mathcal{O}\left(\frac{d}{n4^{m}}\right)\). For completeness, we prove some properties of this quantization scheme in Appendix C.
The private DME mechanism is given in Algorithm 2, where \(v\) controls the total privacy of the mechanism. There are two communication parameters: \(m\) controls the number of bits for quantization and \(s\) controls the number of dimensions used to represent each binary vector. In Theorems 2 and 3, we present how the privacy and communication parameters \(v,m,s\) affects the accuracy of the mechanism. The server-side is presented in Algorithm 3. The server estimate the mean of each binary vectors \(\{b_{i}^{(k)}\}\) and decodes the messages to generate an estimate to true mean \(\overline{\mathbf{z}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{z}_{i}\). Then, the server scales the vector \(\overline{\mathbf{z}}\) to generate an unbiased estimate of the mean \(\overline{\mathbf{x}}\).
```
1:Inputs:\(\mathcal{Y}_{1},\ldots,\mathcal{Y}_{n}\), where \(\mathcal{Y}_{i}=\left\{\mathcal{Y}_{i}^{(1)},\ldots,\mathcal{Y}_{i}^{(m)}\right\}\) is a set of \(m\) sets.
2:for\(k=1,\ldots,m-1\)do
3:\(\hat{\mathbf{b}}^{(k)}\leftarrow\mathcal{A}^{\text{Bin}}\left(\mathcal{Y}_{1}^ {(k)},\ldots,\mathcal{Y}_{n}^{(k)}\right)\)
4:endfor
5:\(\hat{\mathbf{u}}\leftarrow\mathcal{A}^{\text{Bin}}\left(\mathcal{Y}_{1}^{(m)}, \ldots,\mathcal{Y}_{n}^{(m)}\right)\)
6:\(\hat{\mathbf{z}}\leftarrow\sum_{k=1}^{m-1}\hat{\mathbf{b}}^{(k)}2^{-k}+\hat{ \mathbf{u}}2^{-m+1}\)
7:Return: The server returns \(\hat{\mathbf{x}}\gets 2r_{\infty}\hat{\mathbf{z}}-r_{\infty}\).
```
**Algorithm 3** : Analyzer \(\mathcal{A}^{\ell_{\infty}}\)
We prove the bound on the MSE of the proposed mechanisms in the local DP and MMS models in the following theorems, where we defer the proofs to Appendix D. For ease of presentation, we provide the order of the achievable MSE and give the \(\epsilon_{0}\)-LDP and/or central \((\epsilon,\delta)\)-DP guarantees of our mechanism for both local DP and shuffle models. We track the constants in the MSE in the detailed proofs in Appendix D,
see (45), (47). Furthermore, we present RDP guarantees of our mechanisms for both local DP and MMS models in the detailed proofs. We give the outline of the proofs in Section V.
**Theorem 2** (Local DP model).: _The output of the local mechanism \(\mathcal{R}_{v,m,s}^{\ell_{\infty}}\) can be represented using \(ms\left(\log\left(\lceil d/s\rceil\right)+1\right)\) bits. By choosing \(v=\epsilon_{0}\), the mechanism \(\mathcal{R}_{v,m,s}^{\ell_{\infty}}\) satisfies \(\epsilon_{0}\)-LDP. Let \(\hat{\mathbf{x}}\) be the output of the analyzer \(\mathcal{A}^{\ell_{\infty}}\). The estimator \(\hat{\mathbf{x}}\) is an unbiased estimate of \(\overline{\mathbf{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\) with bounded MSE:_
\[\begin{split}\mathsf{MSE}_{LDP}^{\ell_{\infty}}&= \sup_{\{\mathbf{x}_{i}\in\mathbb{B}_{\infty}^{d}\left(r_{\infty}\right)\}} \mathbb{E}\left[\|\hat{\mathbf{x}}-\overline{\mathbf{x}}\|_{2}^{2}\right]\\ &=\mathcal{O}\left(\frac{r_{\infty}^{2}d^{2}}{n}\max\left\{\frac{ 1}{d4^{m}},\frac{1}{s},\frac{s}{\epsilon_{0}^{2}}\right\}\right).\end{split} \tag{9}\]
Theorem 2 shows that each client needs to set \(m=1\) and \(s=\lceil\epsilon_{0}\rceil\) communication bits to achieve MSE \(\mathcal{O}\left(\frac{d^{2}}{n\min\{\epsilon_{0},\epsilon_{0}^{2}\}}\right)\) when \(\epsilon_{0}\leq\sqrt{d}\). Now, we move to the MMS privacy model.
**Theorem 3** (MMS model).: _The output of the local mechanism \(\mathcal{R}_{v,m,s}^{\ell_{\infty}}\) can be represented using \(ms\left(\log\left(\lceil d/s\rceil\right)+1\right)\) bits. For every \(n\in\mathbb{N}\), \(\epsilon\leq 1\), and \(\delta\in(0,1)\), the shuffling the outputs of \(n\) mechanisms \(\mathcal{R}_{v,m,s}^{\ell_{\infty}}\) satisfies \((\epsilon,\delta)\)-DP by choosing \(v^{2}=\frac{s\epsilon^{2}}{4\log(1/\delta)}\). Let \(\hat{\mathbf{x}}\) be the output of the analyzer \(\mathcal{A}^{\ell_{\infty}}\). The estimator \(\hat{\mathbf{x}}\) is an unbiased estimate of \(\overline{\mathbf{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\) with bounded MSE:_
\[\begin{split}&\mathsf{MSE}_{\text{MMS}}^{\ell_{\infty}}=\sup_{ \{\mathbf{x}_{i}\in\mathbb{B}_{\infty}^{d}\left(r_{\infty}\right)\}} \mathbb{E}\left[\|\hat{\mathbf{x}}-\overline{\mathbf{x}}\|_{2}^{2}\right]\\ &=\mathcal{O}\left(\frac{r_{\infty}^{2}d^{2}}{n^{2}}\max\left\{ \frac{n}{d4^{m}},n\left(\frac{1}{s}-\frac{1}{d}\right),\frac{\log\left(1/ \delta\right)}{\epsilon^{2}}\right\}\right).\end{split} \tag{10}\]
Theorem 3 shows that each client requires to set \(m=\lceil\log\left(n\epsilon^{2}/d\right)\rceil\) and \(s=\mathcal{O}\left(\min\{n\epsilon^{2},d\}\right)\) so that the error is bounded by \(\mathcal{O}\left(\frac{d^{2}}{n^{2}\epsilon^{2}}\right)\) that matches the MSE of central differential privacy mechanisms.
**Remark 1** (Scalar case).: When \(d=1\), i.e., scalar case, our MMS algorithm achieves the central DP error \(\mathcal{O}\left(\frac{1}{n^{2}\epsilon^{2}}\right)\) using \(m=\lceil\log\left(n\epsilon^{2}\right)\rceil\) bits per user. This result covers the private-communication trade-offs for all privacy regimes \(\epsilon\in(0,1)\). For example, for \(\epsilon=\frac{1}{\sqrt{n}}\), each client needs only a single bit to achieve the central DP error. On the other hand, IKOS mechanism proposed in [31, 32] requires \(\mathcal{O}\left(\log\left(n\right)\right)\)-bits of communication. Even when particular regimes of order-optimality are achieved for MMS, the communication bound is in expectation [33],, whereas ours is deterministic.
### _Bounded \(\ell_{2}\)-norm Vectors_
For private DME \(\sum_{i=1}^{n}\mathbf{x}_{i}\) where \(\|\mathbf{x}_{i}\|_{2}\leq r_{2}\) for \(i\in[n]\), _i.e.,_\(\ell_{2}\)-bounded, we use the random rotation proposed in [34] to bound the \(\ell_{\infty}\)-norm of the vector with radius \(r_{\infty}=\mathcal{O}\left(\frac{r_{2}}{\sqrt{d}}\right)\) and then we apply the bounded \(\ell_{\infty}\)-norm algorithm in Section IV-A.
**Theorem 4** (Local DP model).: _The output of the local mechanism \(\mathcal{R}_{v,m,s}^{\ell_{2}}\) can be represented using \(sm\left(\log\left(\lceil d/s\rceil\right)+1\right)\) bits. By choosing \(v=\epsilon_{0}\), the mechanism \(\mathcal{R}_{v,m,s}^{\ell_{2}}\) satisfies \(\epsilon_{0}\)-LDP. Let \(\hat{\mathbf{x}}\) be the output of the analyzer \(\mathcal{A}^{\ell_{2}}\). With probability at least \(1-\beta\), the estimator \(\hat{\mathbf{x}}\) is an unbiased estimate of \(\overline{\mathbf{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\) with MSE:_
\[\begin{split}&\mathsf{MSE}_{\text{LDP}}^{\ell_{2}}=\sup_{\{\mathbf{x}_{i} \in\mathbb{B}_{\sigma}^{d}\left(r_{2}\right)\}}\mathbb{E}\left[\|\hat{\mathbf{ x}}-\overline{\mathbf{x}}\|_{2}^{2}\right]\\ &\quad=\mathcal{O}\left(\frac{r_{2}^{2}\log\left(dn/\beta\right)}{ n}\max\left\{\frac{1}{4^{m}},\frac{d}{s},\frac{ds}{\epsilon_{0}^{2}}\right\} \right).\end{split} \tag{11}\]
**Theorem 5** (MMS model).: _The output of the local mechanism \(\mathcal{R}_{v,m,s}^{\ell_{2}}\) can be represented using \(sm\left(\log\left(\lceil d/s\rceil\right)+1\right)\) bits. For every \(n\in\mathbb{N}\), \(\epsilon\leq 1\), and \(\delta\in(0,1)\), the shuffling the outputs of
mechanisms \(\mathcal{R}_{v,m,s}^{\ell_{2}}\) satisfies \((\epsilon,\delta)\)-DP by choosing \(v^{2}=\frac{n\epsilon^{2}}{s\log(1/\delta)}\). Let \(\hat{\mathbf{x}}\) be the output of the analyzer \(\mathcal{A}^{\ell_{2}}\). With probability at least \(1-\beta\) The estimator \(\hat{\mathbf{x}}\) is an unbiased estimate of \(\overline{\mathbf{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\) with MSE:_
\[\begin{split}&\mathsf{MSE}_{\text{MMS}}^{\ell_{2}}=\sup_{\{\mathbf{x}_{i} \in\mathbb{R}_{2}^{2}(r_{2})\}}\mathbb{E}\left[\|\hat{\mathbf{x}}-\overline{ \mathbf{x}}\|_{2}^{2}\right]\\ &\quad=\mathcal{O}\left(\frac{r_{2}^{2}\log\left(dn/\beta\right)} {n^{2}}\max\left\{\frac{n}{4^{m}},n\left(\frac{d}{s}-1\right),\frac{d\log\left( 1/\delta\right)}{\epsilon^{2}}\right\}\right).\end{split} \tag{12}\]
**Remark 2** (Kashin's represention).: Observe that the MSE in (12) has a factor of \((\log(nd))\) that comes from using the random rotation matrix. We can remove this factor \(\log(nd)\) by using the Kashin's representation [35] to transform the bounded \(\ell_{2}\)-norm vector into a bounded \(\ell_{\infty}\)-norm vector (see e.g., [11, 36, 37])
```
1:Public parameter: Privacy budget \(v\), communication levels \(m\), communication coordinates per level \(s\), and confidence term \(\beta\).
2:Input:\(\mathbf{x}_{i}\in\mathbb{B}_{2}^{d}(r_{2})\).
3:Let \(U=\frac{1}{\sqrt{d}}\mathbf{H}D\), where \(\mathbf{H}\) denotes a Hadamard matrix and \(D\) is a diagonal matrix with i.i.d. uniformly random \(\{\pm 1\}\) entries.
4:\(\mathbf{w}_{i}\gets W\mathbf{x}_{i}\)
5:\(r_{\infty}\gets 10r_{2}\sqrt{\frac{\log(dn/\beta)}{d}}\)
6:for\(j=1,\ldots,d\)do
7:\(\mathbf{w}_{i}[j]=\min\left\{r_{\infty},\max\left\{\mathbf{w}_{i}(j),-r_{ \infty}\right\}\right\}\)
8:endfor
9:\(\mathcal{Y}_{i}\leftarrow\mathcal{R}_{v,m,s}^{\ell_{\infty}}(\mathbf{w}_{i})\)
10:Return: The client sends \(\mathcal{Y}_{i}\).
```
**Algorithm 4** : Local Randomized \(\mathcal{R}_{v,m,s}^{\ell_{2}}\)
Remark 3 (Comparison with SecAgg).When choosing \(s=d\), the output of our algorithm \(\mathcal{R}_{v,m,s}^{\ell_{2}}\) can be represented as \(m\) binary-vectors. Hence, it is compatible with secure aggregation to compute the sum of these vectors. Thus, using our \(\mathcal{R}_{v,m,d}^{\ell_{2}}\) with SecAgg gives the same privacy-communication trade-offs as the MMS model in Theorem 5, since SecAgg can be seen as a post-processing of shuffling. However, our algorithm needs \(d\lceil\log\left(\frac{n\epsilon^{2}}{d}\right)\rceil\)-bits per client to achieve the central error of \(\mathcal{O}\left(\frac{d}{n^{2}\epsilon^{2}}\right)\). On the other hand, the distributed-discrete-Gaussian in [20] needs \(\mathcal{O}\left(d\log\left(n\right)\right)\)-bits per client to achieve the same MSE.
Next we present a lower bound for DME under privacy and communication constraints, which can be derived using results from [1] and [38].
```
1:Public parameter: Privacy parameter \(p\), and communication budget \(s\).
2:Input:\(\mathbf{b}_{i}\in\{0,1\}^{d}\).
3:\(a\leftarrow\lceil\frac{d}{s}\rceil\)
4:If \(a\) is not integer, add \(\left(sa-d\right)\) dummy zeros to the binary vector \(\mathbf{b}\).
5:for\(j\in[s]\)do
6: Choose uniformly at random one coordinate \(a_{ij}\leftarrow\text{Unif}\left(\left\{\left(j-1\right)a,\ldots,ja\right\}\right)\).
7:\(y_{ij}\gets a\mathcal{R}_{p}^{\text{2RR}}\left(\mathbf{b}_{i}[a_{ij}]\right)\)
8:endfor
9:Return: The client sends \(s\) messages \(\mathcal{Y}_{i}\leftarrow\left\{\left(z_{i1},y_{i1}\right),\ldots,\left(z_{is},y_{is}\right)\right\}\).
```
**Algorithm 6** : Local Randomizer \(\mathcal{R}_{p,s}^{\text{Bin}}\)
```
1:Inputs:\(\mathcal{Y}_{1},\ldots,\mathcal{Y}_{n}\), where \(\mathcal{Y}_{i}\) is \(s\) messages each is a pair \(\left(a_{ij},y_{ij}\right)\) for \(j\in[s]\) and \(i\in[n]\).
2:\(\hat{\mathbf{b}}\leftarrow\mathbf{0}_{d}\)
3:for\(i\in[n]\)do
4:for\(j\in[s]\)do
5:\(\hat{\mathbf{b}}[a_{ij}]\leftarrow\hat{\mathbf{b}}[a_{ij}]+y_{ij}\).
6:endfor
7:endfor
8:\(\hat{\mathbf{b}}\leftarrow\frac{1}{n}\hat{\mathbf{b}}\)
9:Return: The server returns \(\hat{\mathbf{b}}\).
```
**Algorithm 7** : Analyzer \(\mathcal{A}^{\text{Bin}}\)
**Theorem 6** (Lower Bound For central DP model).: _Let \(n,d\in\mathbb{N}\), \(\epsilon>0\), \(r_{2}\geq 1\), and \(\delta=o(\frac{1}{n})\). For any \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\in\mathbb{B}_{2}^{d}(r_{2})\), the MSE is bounded below by:_
\[\text{MSE}_{\text{central}}^{\ell_{2}}=\Omega\left(r_{2}^{2}\max\left\{\frac{d} {n^{2}\epsilon^{2}},\frac{1}{n4^{b/d}}\right\}\right) \tag{13}\]
_for any unbiased algorithm \(\mathcal{M}\) that is \((\epsilon,\delta)\)-DP with \(b>d\)-bits of communication per client. Furthermore, when \(b<d\) bits per client, the MSE is bounded below by:_
\[\text{MSE}_{\text{central}}^{\ell_{2}}=\Omega\left(r_{2}^{2}d\max\left\{\frac{ 1}{n^{2}\epsilon^{2}},\frac{1}{nb}\right\}\right) \tag{14}\]
**Remark 5**.: (Optimality of our mechanism) When the communication budget \(b>d\), we can see that our MSE in Theorem 5 matches the lower bound in 6 (up to logarithmic factor) by choosing \(s=d\) and \(m=b/d\). Furthermore, when the communication budget \(b<d\), our algorithm achieve the lower bound by choosing \(s=b\) and \(m=1\). Thus, our algorithm for MMS is order optimal for all privacy-communication regimes.
## V Proof outlines
As can be seen from (7), and Algorithm 2, the main ingredient is to solve the following subproblem. Suppose, each client has a binary vector \(\mathbf{b}_{i}\in\{0,1\}^{d}\). The goal is to privately compute the sum \(\overline{\mathbf{b}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{b}_{i}\) under privacy and communication constraints. If we can demonstrate a solution to this problem, then we can apply it to bounded-norm vectors as in Sections IV-A and IV-B using (7), along with another critical ingredient, to judiciously allocate the overall privacy budget among these bit-vectors describing the vectors at different resolution. These are the two main ideas that enable us to get the main theoretical results.
### _Binary vectors_
A straightforward solution to compute \(\overline{\mathbf{b}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{b}_{i}\), is to apply the scalar solution proposed in [16] for each coordinate. However, this requires \(d\) bits per client. We will design private mechanisms with much less communication budget per client.
The client-side mechanism is presented in Algorithm 6, where the parameter \(s\) determines the communication budget for each client and the parameter \(p\) determines the total privacy budget (see Theorem 7). For given \(s\in\{1,\ldots,d\}\), each client splits the binary vector \(\mathbf{b}_{i}\) into \(s\) sub-vectors each with dimension \(a=\left\lceil\frac{d}{s}\right\rceil\). Then, the client chooses uniformly at random one coordinate from each sub-vector and privatizes its bit using 2RR Algorithm 1. Observe that the output of Algorithm 6 can be represented as a sparse \(d\)-dimensional vector with only \(s\) non-zero bits.
When \(s=1\), then each client applies the 2RR mechanism on each coordinate separately. On the other hand, when \(s=d\), the client chooses uniformly at random one coordinate and applies the 2RR mechanism. Thus, we get trade-offs between privacy-communication and accuracy. The server aggregator \(\mathcal{A}^{\text{Bin}}\) is presented in Algorithm 7, where the server simply aggregates the received randomized bits.
In the following theorems, we prove the bound on the MSE of the proposed mechanisms in the local DP and shuffle models. The proofs are deferred to Appendix B. For ease of presentation, we provide the order of the achievable MSE and give the \(\epsilon_{0}\)-LDP and/or central \((\epsilon,\delta)\)-DP guarantees of our mechanism for both local DP and shuffle models. However, we track the constants in the MSE in the detailed proofs in Appendix B. Furthermore, we present RDP guarantees of our mechanisms for both local DP and shuffle models in the detailed proofs.
**Theorem 7** (Local DP model).: _The output of the local mechanism \(\mathcal{R}^{\text{Bin}}_{p,s}\) can be represented using \(s\left(\left\lceil d/s\right\rceil+1\right)\) bits. By choosing \(p=\frac{1}{2}\left(1-\sqrt{\frac{\epsilon_{0}^{2}/s^{2}}{\epsilon_{0}^{2}/s^{ 2}+4}}\right)\), the mechanism \(\mathcal{R}^{\text{Bin}}_{p,s}\) satisfies \(\epsilon_{0}\)-LDP. Let \(\hat{\mathbf{b}}\) be the output of the analyzer \(\mathcal{A}^{\text{Bin}}\). The estimator \(\hat{\mathbf{b}}\) is an unbiased estimate of \(\overline{\mathbf{b}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{b}_{i}\) with bounded MSE:_
\[\mathsf{MSE}^{\text{Bin}}_{\text{ldp}} =\sup_{\{\mathbf{b}_{i}\in\{0,1\}^{d}\}}\mathbb{E}\left[\|\hat{ \mathbf{b}}-\overline{\mathbf{b}}\|_{2}^{2}\right] \tag{15}\] \[=\mathcal{O}\left(\frac{d^{2}}{n}\max\left\{\frac{1}{s},\frac{s} {\epsilon_{0}^{2}}\right\}\right).\]
Theorem 7 shows that each client needs to send \(s=\left\lceil\epsilon_{0}\right\rceil\) communication bits to achieve MSE \(\mathcal{O}\left(\frac{d^{2}}{n\min\{\epsilon_{0},\epsilon_{0}^{2}\}}\right)\). Now, we move to the shuffle model, where we assume that there exists \(s\) shuffler. The \(j\)-th shuffler randomly permutes the set of messages \(\{(a_{ij},y_{ij}):i\in[n]\}\) from the \(n\) clients.
**Theorem 8** (MMS model).: _The output of the local mechanism \(\mathcal{R}^{\text{Bin}}_{p,s}\) can be represented using \(s\left(\log\left(\left\lceil d/s\right\rceil\right)+1\right)\) bits. For every \(n\in\mathbb{N}\), \(\epsilon\leq 1\), and \(\delta\in(0,1)\), shuffling the outputs of \(n\) mechanisms \(\mathcal{R}^{\text{Bin}}_{p,s}\) satisfies \((\epsilon,\delta)\)-DP by choosing \(p=\frac{1}{2}\left(1-\sqrt{\frac{v^{2}}{v^{2}+4}}\right)\), where \(v^{2}=\frac{ne^{2}}{4s\log(1/\delta)}\). Let \(\hat{\mathbf{b}}\) be the output of the analyzer \(\mathcal{A}^{\text{Bin}}\). The estimator \(\hat{\mathbf{b}}\) is an unbiased estimate of \(\overline{\mathbf{b}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{b}_{i}\) with bounded MSE:_
\[\mathsf{MSE}^{\text{Bin}}_{\text{shuffle}} =\sup_{\{\mathbf{b}_{i}\in\{0,1\}^{d}\}}\mathbb{E}\left[\|\hat{ \mathbf{b}}-\overline{\mathbf{b}}\|_{2}^{2}\right] \tag{16}\] \[=\mathcal{O}\left(\frac{d^{2}}{n^{2}}\max\left\{n\left(\frac{1}{s }-\frac{1}{d}\right),\frac{\log\left(1/\delta\right)}{\epsilon^{2}}\right\} \right).\]
Theorem 8 shows that each client requires to send \(s=\mathcal{O}\left(\min\{n\epsilon^{2},d\}\right)\) communication bits such that the error in the shuffle model is bounded by \(\mathcal{O}\left(\frac{d^{2}}{n^{2}\epsilon^{2}}\right)\) that matches the MSE of central differential privacy mechanisms. For the scalar case when \(d=1\), our results in Theorem 8 matches the optimal MSE as in [16].
### _Putting things together_
We start with proof outlines for Theorems 2 and 3. For both, the local randomization is the same, and the basic idea is that of non-uniform randomization of the different bits used to quantize a real vector \(\mathbf{z}_{i}\), arising from (7). In particular, we use distinct randomizations for each bit vector \(\mathbf{b}_{i}^{(k)}\in\{0,1\}^{d}\), with different parameters \(p_{i}\) causing different privacy for each resolution level \(k\). For a given local privacy guarantee of \(\epsilon_{0}\), we divide this into guarantees \(\epsilon_{0}^{(k)}\) for the \(k\)-th resolution level, such that \(\epsilon_{0}=\sum_{k=1}^{m}\epsilon_{0}^{(k)}\). The intuition is that one allocates higher privacy (lower \(\epsilon_{0}^{(k)}\)) to the MSBs (lower \(k\)), for a given overall privacy budget \(\epsilon_{0}\). This is because to get better accuracy (performance in terms of MSE) we want the higher-order bits to be less noisy than the lower-order bits. We connect this non-uniform choice to the MSE for the LDP and MMS privacy models below.
**Lemma 3** (Non-uniform privacy allocation).: _Consider \(m\) privacy mechanisms for \(\{\mathbf{b}_{i}^{(k)}\in\{0,1\}^{d}\},\mathbf{u}_{i}\) denoted by \(\mathcal{R}_{\mathcal{P}_{1},s}^{\text{Bin}}(\mathbf{b}_{i}^{(1)}),\ldots, \mathcal{R}_{\mathcal{P}_{m-1,s}}^{\text{Bin}}(\mathbf{b}_{i}^{(m-1)}), \mathcal{R}_{\mathcal{P}_{m,s}}^{\text{Bin}}(\mathbf{u}_{i})\), parametrized by \(\{p_{i}\}\). For a given total privacy allocation of the choice of \(v\stackrel{{\triangle}}{{=}}\epsilon_{0}\), the choice of \(v_{k}\stackrel{{\triangle}}{{=}}\epsilon_{0}^{(k)}=\frac{4^{- \frac{k}{3}}}{\left(\sum_{l=1}^{m-1}4^{\frac{-l}{3}}+4^{-\frac{m+1}{3}} \right)}v\) for \(k\in[m-1]\) and \(v_{m}=\frac{4^{\frac{-m+1}{3}}}{\left(\sum_{l=1}^{m-1}4^{\frac{-l}{3}}+4^{- \frac{m+1}{3}}\right)}v\), we can get the following LDP and MMS models' RDP-privacy guarantees:_
\[\epsilon_{\text{LDP}}\left(\alpha\right) =\sum_{k=1}^{m}\epsilon_{\text{LDP}}^{(k)}\left(\alpha\right) \tag{17}\] \[\epsilon_{\text{MMS}}\left(\alpha\right) =\sum_{k=1}^{m}\epsilon_{\text{MMS}}^{(k)}\left(\alpha\right)\leq c \frac{\alpha v^{2}}{sn} \tag{18}\]
_for some constant \(c\) and \(\epsilon_{\text{LDP}}^{(k)}\left(\alpha\right)\leq\frac{s}{\alpha-1}\log \left(p_{k}^{\alpha}\left(1-p_{k}\right)^{1-\alpha}+p_{k}^{1-\alpha}(1-p_{k}) ^{\alpha}\right)\) (see Appendix D for details)._
This lemma immediately yields the central DP guarantees of \(\epsilon_{0}\) for the LDP model, and a \(\left(\epsilon_{\text{MMS}},\delta\right)\)-DP, for the MMS model, where \(\epsilon_{\text{MMS}}\) is bounded by
\[\epsilon_{\text{MMS}}\leq 2c\sqrt{\frac{\epsilon_{0}^{2}\log(1/\delta)}{sn}}, \tag{19}\]
which suggests setting \(\epsilon_{0}^{2}=\frac{sn^{2}}{4\log(1/\delta)}\), for the local randomization. Critically, this choice of non-uniform privatization enables the following result, proved in Appendix D.
**Lemma 4** (MSE performance).: _With the non-uniform privacy allocation specified in Lemma 3, we get the the following LDP and MMS models' MSE performance for DME:_
\[\mathsf{MSE}_{\text{LDP}}^{\ell_{\infty}} \leq\mathcal{O}\left(\frac{r_{\infty}^{2}d^{2}}{n}\max\left\{ \frac{1}{d4^{m}},\frac{1}{s},\frac{s}{\epsilon_{0}^{2}}\right\}\right) \tag{20}\] \[\mathsf{MSE}_{\text{MMS}}^{\ell_{\infty}} \leq\mathcal{O}\left(\frac{r_{\infty}^{2}d^{2}}{n^{2}}\max\left\{ \frac{n}{d4^{m}},n\left(\frac{1}{s}-\frac{1}{d}\right),\frac{\log\left(1/ \delta\right)}{\epsilon^{2}}\right\}\right) \tag{21}\]
Theorem 2 follows from (20) and Theorem 3 follows from (19) and (21). Theorems 4, 5 directly follow by using Theorem 10 in Appendix E in 2 and 3.
## VI Numerical Results
In this section, we evaluate the performance of our algorithms in the local DP model and the shuffle model.
### _Local DP model_
We start by comparing the performance of our algorithm \(\mathcal{R}_{v,m,s}^{\ell\infty}\) with the performance of the Laplace mechanism [1] in the local model for scalar case, i.e., \(d=1\). Hence, the elements \(\mathbf{x}_{i}\in[-1,1]\). Observe that the Laplace mechanism is the optimal scheme is this case, however, it has infinite communication bits. In Figure 0(a), we plot the MSE of our \(\mathcal{R}_{v,m,s}^{\ell\infty}\) with different communication budget \(s=1\) and \(m\in\{1,2,3,4\}\) for a single client \(n=1\). We can observe that our mechanism achieves MSE closer to the MSE of the Laplace mechanism. Furthermore, we only need at most \(m=3\) bits to achieve similar performance as Laplace mechanism.
### _Shuffler model_
We consider two cases in the shuffler model: 1) The scalar case when \(d=1\) to evaluate the performance of our \(\mathcal{R}_{v,m,s}^{\ell\infty}\) mechanism in the shuffle model. 2) The vector case when \(d=1000\) to evaluate the performance of our \(\mathcal{R}_{v,m,s}^{\ell 2}\) mechanism in the shuffle model.
ScalarIn Figure 0(b), we plot the MSE of two different mechanisms versus the central privacy \(\epsilon\) for fixed \(\delta=10^{-5}\). The first mechanism is single message shuffle (SMS) obtained using Laplace mechanism with privacy amplification results in []. Observe that Laplace mechanism is the optimal LDP mechanism for LDP and the privacy amplification results in [39] is approximately optimal for \((\epsilon,\delta)\)-DP. Hence, we expect that this is the best that an SMS mechanism can achieve. The second mechanism is our multi-message shuffling (MMS) mechanism \(\mathcal{R}_{v,m,s}^{\ell\infty}\) mechanism for \(d=1\) and \(m\in\{4,6\}\). Since we have MMS, we use the RDP results of privacy amplification by shuffling in [7] which is better for composition to compute the RDP of our mechanism. Then, we transform from RDP bound to approximate \((\epsilon,\delta)\)-DP. We choose number of clients \(n=1000\). We can see that our multi-message shuffle model achieve lower MSE than the single message shuffle especially for large value of central DP parameter \(\epsilon\).
Bounded \(\ell_{2}\)-norm vectorsSimilar to the scalar case, we consider two mechanisms. The first mechanism SMS is obtained by using privunit mechanism with the privacy amplification results in [39], where privunit[40] is asymptotically optimal LDP mechanism [41]. We choose \(n=1000\) and \(d=300\). For our MMS \(\mathcal{R}_{v,m,s}^{\ell_{2}}\), we choose \(s\in\{200,250\}\). It is clear from Figure 0(c) that our MMS mechanism has better performance than SMS mechanism. |
2308.04262 | SDLFormer: A Sparse and Dense Locality-enhanced Transformer for
Accelerated MR Image Reconstruction | Transformers have emerged as viable alternatives to convolutional neural
networks owing to their ability to learn non-local region relationships in the
spatial domain. The self-attention mechanism of the transformer enables
transformers to capture long-range dependencies in the images, which might be
desirable for accelerated MRI image reconstruction as the effect of
undersampling is non-local in the image domain. Despite its computational
efficiency, the window-based transformers suffer from restricted receptive
fields as the dependencies are limited to within the scope of the image
windows. We propose a window-based transformer network that integrates dilated
attention mechanism and convolution for accelerated MRI image reconstruction.
The proposed network consists of dilated and dense neighborhood attention
transformers to enhance the distant neighborhood pixel relationship and
introduce depth-wise convolutions within the transformer module to learn
low-level translation invariant features for accelerated MRI image
reconstruction. The proposed model is trained in a self-supervised manner. We
perform extensive experiments for multi-coil MRI acceleration for coronal PD,
coronal PDFS and axial T2 contrasts with 4x and 5x under-sampling in
self-supervised learning based on k-space splitting. We compare our method
against other reconstruction architectures and the parallel domain
self-supervised learning baseline. Results show that the proposed model
exhibits improvement margins of (i) around 1.40 dB in PSNR and around 0.028 in
SSIM on average over other architectures (ii) around 1.44 dB in PSNR and around
0.029 in SSIM over parallel domain self-supervised learning. The code is
available at https://github.com/rahul-gs-16/sdlformer.git | Rahul G. S., Sriprabha Ramnarayanan, Mohammad Al Fahim, Keerthi Ram, Preejith S. P, Mohanasankar Sivaprakasam | 2023-08-08T13:59:16Z | http://arxiv.org/abs/2308.04262v1 | # SDLFormer: A Sparse and Dense
###### Abstract
Transformers have emerged as viable alternatives to convolutional neural networks owing to their ability to learn non-local region relationships in the spatial domain. The self-attention mechanism of the transformer enables transformers to capture long-range dependencies in the images, which might be desirable for accelerated MRI image reconstruction as the effect of undersampling is non-local in the image domain. Despite its computational efficiency, the window-based transformers suffer from restricted receptive fields as the dependencies are limited to within the scope of the image windows. We propose a window-based transformer network that integrates dilated attention mechanism and convolution for accelerated MRI image reconstruction. The proposed network consists of dilated and dense neighborhood attention transformers to enhance the distant neighborhood pixel relationship and introduce depth-wise convolutions within the transformer module to learn low-level translation invariant features for accelerated MRI image reconstruction. The proposed model is trained in a self-supervised manner. We perform extensive experiments for multi-coil MRI acceleration for coronal PD, coronal PDFS and axial T2 contrasts with 4x and 5x under-sampling in self-supervised learning based on k-space splitting. We compare our method against other reconstruction architectures and the parallel domain self-supervised learning baseline. Results show that the proposed model exhibits improvement margins of (i) \(\sim\) 1.40 dB in PSNR and \(\sim\) 0.028 in SSIM on average over other architectures (ii) \(\sim\) 1.44 dB in PSNR and \(\sim\) 0.029 in SSIM over parallel domain self-supervised learning. The code is available at [https://github.com/rahul-gs-16/sdlformer.git](https://github.com/rahul-gs-16/sdlformer.git)
Keywords:MRI reconstruction Self supervised learning Transformers.
## 1 Introduction
Vision transformers have emerged as a competitive alternative to convolutional blocks in various image reconstruction tasks [9],[20],[15],[24]. They offer a flexible
mechanism to capture relationships between regions from distant neighborhoods [14], helping in relating patterns useful for image restoration. MRI acceleration can specifically benefit from this, as the imaging process involves sampling k-space trajectories, which impacts the image domain representation in a non-local manner.
In this work, we consider the problem of MR image reconstruction using the window-based self-attention mechanism of vision transformers. Window-based transformers such as SwinMR [6] have been used for MRI reconstruction, but windowing trades-off restricted receptive field for computational complexity, which we propose to alleviate by designing a variant of the transformer module. Further, we complement the global information operations of transformers with convolutions, imparting fine-grained local feature modeling, valuable in MRI reconstruction. (Fig 1 (a)).
**Related works:** To increase the range over which attention is computed without increasing the computation cost, the Attention Retractable Transformer (ART) [22] uses a sparse attention block (SAB) where the input to the transformer is the windowed image regions processed in a dilated manner. On the other hand, the Cross Aggregation Transformer (CAT) [1] introduces the Locality Complementary Module (LCM) in the self-attention stage of the transformer to provide local context, but processes the input using dense neighboring windows. An alternative method of inducing locality is Locality enhanced Feed-Forward (LeFF) [15], which uses depth-wise convolutions in the transformer's feed-forward layer. While recursive dilated CNNs [13] have been applied in MRI reconstruction, an intersection of dilated or sparse self-attention mechanism with local context remains unexplored for MRI reconstruction. Figure 1b tabulates the key factors and motivations for our proposed method.
Our proposed model is a window-based self-attention transformer that incorporates sparse and dense attention blocks with convolutions. Designed to capture long-range pixel interactions and local contextual features, the proposed model is trained in a data-driven self-supervised manner [17], and demonstrated for 4x and 5x accelerations in multi-coil MRI.
We summarise our contributions as follows. **1)** We propose SDLFormer, a computationally efficient and performant transformer-based network hybridized
Figure 1: Description of the proposed model. In the sparse attention block pixels are given in dilated manner to the input of the transformer. In dense attention block the input is given in a continuous manner. Cyan and green denote different channels.
with CNNs for accelerated multi-coil MRI reconstruction. **2)** Our proposed transformer block is designed to capture long-range dependencies via sparse attention on dilated windows in the input image domain and dense attention over neighborhood windows. The sparse and dense attention blocks are augmented with depth-wise convolutions to learn local contextual features with self-attention. **3)** We extensively evaluate the proposed network on self-supervised multi-coil MRI reconstruction using multi-coil knee datasets for three contrasts: Proton Density (PD), Proton Density with Fat Suppression (PDFS), and Axial T2. We have achieved an improvement of \(\sim 0.6\) dB in PSNR and \(\sim 0.018\) in SSIM over the best-performing model, the SwinMR transformer. We perform an ablative study to understand the contribution of each component of our proposed transformer network.
## 2 Method
In this section, the mathematical formulation of the MRI under-sampling process, the overall architecture pipeline, and the Locality Enhanced Transformer (LET) block are described.
**Problem formulation:** Let \(x\in\mathbb{C}^{N_{x}\times N_{y}}\) represent 2-D MRI image with height \(N_{y}\) and width \(N_{x}\). The forward model of the k-space undersampling process with \(N_{c}\) coils is given by,
\[y_{i}=M\odot\mathcal{F}\left(S_{i}\odot x\right);\quad i=1,\ldots,N_{c} \tag{1}\]
where \(M\) is a 2-D under-sampling mask, \(\odot\) represents Hadamard product, and \(\mathcal{F}\) is 2-D Fourier transform respectively. \(S_{i}\) represents the sensitivity map that encodes the coil's spatial sensitivity and is normalized such that \(\sum_{i=1}^{N}S_{i}^{*}S_{i}=I_{n}\)
Our goal is to reconstruct image \(x\) from \(y\) which is formulated as an optimization problem for supervised learning given by,
\[\underset{\theta}{\operatorname{argmin}}\quad||x-h_{\theta}(y)||_{2}^{2}+ \lambda||M\odot\mathcal{F}(x)-y||_{2}^{2} \tag{2}\]
where \(x_{u}=\mathcal{F}^{-1}(y)\) is the undersampled image obtained by zero filling the missing k-space values, and \(h_{\theta}\) is the image reconstruction network.
**Self-supervised learning:** Following [17], we randomly partition \(y\) into two disjoint sets \(y_{1}\) and \(y_{2}\) as follows \(y_{1}=M_{1}\odot y,y_{2}=M_{2}\odot y\), where \(M_{1}\) and \(M_{2}\) are the two disjoint masks used to partition the k-space \(y\). The loss \(L\) function is defined as,
\[L(y_{1},y_{2})=||M_{2}\odot\mathcal{F}(h_{\theta}(y_{1}))-y_{2}||_{1} \tag{3}\]
This self-supervised approach eliminates the need for fully sampled data, which requires extensive amounts of measurements for multi-coil acquisition.
**Architecture Details:** The overall pipeline of the proposed method is shown in Figure 2 (a). The input to the pipeline is the under-sampled k-space data which is processed using a k-space CNN. The output of k-space is converted to the image domain. Initially, two Sparse Attention Transformer modules are
present, followed by two Dense Attention Transformer modules. The LET block operates as the transformer module in the sparse and dense attention blocks. The sparse attention differs from the dense attention transformer by operating in a dilated manner. K-Space CNN is a 5-layer CNN with instance normalization, ReLU activation, and a residual connection from the input to the output to enable gradient flow. The pipeline of the architecture is shown in Figure 2 (a).
**Locality Enhanced Transformer (LET):** The internal architecture of the LET block is shown in Figure 2 (c). This architecture tries to address two main challenges faced by a vanilla transformer. 1) The quadratic computation cost with respect to the number of tokens. 2) The transformers show a limitation in capturing local dependencies [16], [8] which are essential for image restoration.
Computational complexity is reduced using a non-overlapping Locality enhanced Window-based Multi-head Self-Attention (LeW-MSA). Here the input feature map \(X\in\mathbb{R}^{(H\times W\times C)}\) is split into non-overlapping windows with window size \(M\times M\) and flattened to obtain features \(X_{i}\in\mathbb{R}^{M^{2}\times C}\) from each window i. The flattened features are projected into subspace using linear projection to obtain query (Q), key (K), and Value (V). Multi-headed self-attention is applied, to the flattened features in each window using the equation 4 a. Inspired by CAT [1], a Locality Complementary Module (LCM) is introduced in the transformer, which is a 3x3 depth-wise convolution module, used to extract local contextual features, as shown in Figure 2 (b). Following Uformer [15],[19], the ability of the transformer's feed-forward layer to capture local contextual features is improved by introducing a 3 x 3 depth-wise convolution in the feed-forward layer,
Figure 2: Proposed Overall Pipeline. The pipeline consists of k-space CNN, Sparse attention block, dense attention block, and convolution layer in series. LeWMSA, LeFF, and LCM represent Locally enhanced Window Multi-head Self Attention, Locally enhanced Feed Forward, and Locality Complementary Module respectively. The under-sampling mask is split into two and multiplied with the k-space measurement to obtain two subsets of measured k-space. One subset is used by the model for prediction and the other subset is used as reference. L1 loss is computed between the reference k-space subset and the FFT of the image predicted by the model.
with the necessary reshaping as shown in Figure 2 (d). The architecture of the transformer block is shown in Figure 2 (c).
\[Attention(Q,K,V)=Softmax(\frac{QK^{T}}{\sqrt{d_{r}}}+B)V+LCM(V), \tag{4a}\] \[X^{\prime}=LeWMSA(X_{in})\ +\ X_{in},\] (4b) \[X_{out}=LeFF(LN(X^{\prime}))\ +\ X^{\prime} \tag{4c}\]
Where \(X^{{}^{\prime}}\) and \(X_{out}\) are the outputs of Window-based Multi-head self-attention and \(LeFF\) blocks with skip connections respectively. \(LN\) represents Layer Norm.
**Dataset details:** Three protocols: coronal proton-density (PD), coronal fat-saturated PD (PDFS), and axial fat-saturated T2 from the dataset Multi-Coil Knee dataset [4], were chosen. The data was acquired through a 15-channel multi-coil setting for 20 subjects. Each 3D volume has 40 slices of 640x368 resolution complex-valued data and their corresponding sensitivity maps. The center 19 slices were considered for our experiments. The dataset was partitioned into 10 volumes containing 190 slices each for training purposes, and 10 volumes with 190 slices each for validation.
**Implementation details:** The models are trained using PyTorch v1.12 on a 24GB RTX 3090 GPU. The Adam optimizer [7] without weight decay is employed with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and an initial learning rate of 1e-3, which undergo step-wise reduction using a learning rate scheduler with a step-size of 40 epochs and \(\gamma\) of 0.1. The training is performed for 150 epochs using the L1 loss, and the instances with the best validation loss are saved for comparison. Performance evaluation is based on Peak Signal-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). Since aliasing artifacts are structural, non-local processing in the image domain alone might be insufficient [10]. To address this, k-space CNN is added at the beginning of the proposed method and all the other comparison methods as suggested by [3], [25], [11]. For faster convergence, the weights of K-space CNN in the proposed model are initialized with the weights obtained by training KIKI-net in a self-supervised manner. The weights of the transformer are initialized with the weights obtained from the Uformer [15] model trained with natural images (SIDD dataset). The data consistency (DC) [12] in the network's output is ensured using the partition \(y_{1}\). The model is also trained in parallel domain training methodology [5] and evaluated.
## 3 Results and Discussion
Our results are organized as follows. 1. Comparison of the proposed model with other State of the Art MRI reconstruction models. 2. Ablative study of various components in the architecture
### Qualitative and Quantitative Comparison on Multi coil knee MRI dataset.
The quantitative comparison of the proposed model with other models proposed for the Multi-Coil MRI image reconstruction is shown in Table 1. Our method outperforms other methods proposed for MRI reconstruction in PSNR and SSIM metrics on all three MRI sequences (coronal PD, coronal PDFS, and axial T2) in both 4x and 5x acceleration factors, except coronal PD with 5x acceleration in terms of PSNR. The proposed model outperforms the second-best model by 0.59 dB in PSNR and 0.0178 in SSIM, in the axial-T2 dataset for the acceleration factor of 4x and in coronal PD for the acceleration factor 5x respectively. It can be seen that in the parallel domain self-supervised training mode [5], the proposed model outperforms the ISTA-Net.
The qualitative comparison of our model with other models proposed for multi-coil MRI reconstruction for coronal PD, axial T2 dataset for acceleration factors of 4x and 5x are shown in Fig 3 (a), 3(b) respectively.
The reconstructions obtained through zero padding exhibit significant aliasing artifacts and lose important anatomical details. Although VS-Net, and Recurrent VarNet models are able to reduce the aliasing artifacts to some extent, the artifacts remain visible in the reconstructions. While the KIKI-net, U-Net, and ISTA-Net models are more effective in reducing the artifacts, they fall short in their ability to reconstruct the structures as accurately as transformer-based models. This can be attributed to their limited capability to capture long-range dependencies. Out of the transformer-based models, it can be seen that the proposed model reconstructs the image structures more accurately than the SwinMR transformer-based model. In Figure 3 (a), the SwinMR transformer-based model introduces some artifacts (region pointed to, by the blue arrow)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Coronal PD} & \multicolumn{3}{c}{Coronal PDFS} & \multicolumn{3}{c}{Axial T2} \\ \cline{2-9} & 4x & 5x & 4x & 5x & 4x & 5x \\ & PSNR / SSIM & PSNR / SSIM & PSNR / SSIM & PSNR / SSIM & PSNR / SSIM & PSNR / SSIM \\ \hline ZF & 28.14 / 0.7838 & 25.99 / 0.7119 & 30.67 / 0.7848 & 28.84 / 0.7206 & 31.35 / 0.8186 & 30.38 / 0.7829 \\ Recurrent VarNet [18] & 29.49 / 0.8255 & 25.65 / 0.7134 & 30.40 / 0.7880 & 28.42 / 0.7237 & 32.44 / 0.8402 & 31.26 / 0.7983 \\ VS-Net [2] & 30.65 / 0.8431 & 26.71 / 0.7369 & 30.59 / 0.7810 & 28.76 / 0.7176 & 32.17 / 0.8209 & 30.82 / 0.7885 \\ KIKI-net [3] & 31.80 / 0.8617 & 27.42 / 0.7574 & 31.3 / 0.8286 & 30.36 / 0.7462 & 34.12 / 0.8564 & 32.65 / 0.8138 \\ ISTA-Net [23] & 31.94 / 0.8635 & 27.72 / 0.7649 & 32.43 / 0.8072 & 29.54 / 0.7320 & 33.73 / 0.8485 & 31.82 / 0.8013 \\ ISTA-Net [PL] & 32.10 / 0.8698 & 27.66 / 0.7620 & 32.38 / 0.8067 & 29.56 / 0.7322 & 33.73 / 0.8501 & 31.59 / 0.7988 \\ U-Net [21] & 32.90 / 0.8884 & 27.90 / 0.7729 & 33.58 / 0.8318 & 30.79 / 0.7561 & 34.36 / 0.8596 & 32.67 / 0.8152 \\ SwinMR [6] & 33.22 / 0.8954 & **28.43** / 0.7853 & 33.65 / 0.8303 & 30.59 / 0.7508 & 34.38 / 0.8596 & 32.81 / 0.8157 \\ Proposed (SSL) & **33.77** / **0.9056** & 28.42 / **0.8031** & **33.96** / **0.8359** & **30.90** / **0.7611** & **34.97** / **0.8651** & **32.97** / **0.8193** \\ Proposed (PL) & 33.86 / 0.9065 & 28.71 / 0.8085 & 34.07 / 0.8358 & 30.97 / 0.7604 & 35.01 / 0.8656 & 33.08 / 0.8186 \\ Proposed (SL) & 36.16 / 0.9282 & 30.97 / 0.8495 & 35.09 / 0.8534 & 32.15 / 0.7837 & 36.01 / 0.8814 & 34.24 / 0.8393 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison of different methods on different datasets and acceleration factors. The best results for self-supervised learning [17] are highlighted in bold. The SL, PL, and SSL represent Supervised Learning, self-supervised learning in the Parallel domain training, and Self Supervised Learning with only k-space splitting respectively.
in the image, which are not present in the ground truth. In Figure 3 (b), the proposed model recovers fine details better than the SwinMR model (as pointed to, by the blue arrow) when trained in self-supervised [17][5] or supervised techniques. From the results, it can be seen that increasing the receptive field by using dilated attention and complementing global information operations of transformers with fine-grained local contextual features from convolutions, positively impacts accelerated MRI image reconstruction.
### Ablation Study
The impact of each block in the network is analyzed in Table 2. The results show that sparse attention blocks (SAB) significantly improve the model's performance. Dense attention block (DAB) individually performs better than SAB, but when they are combined together, they complement each other and provide
Figure 3: (a). Coronal PD 4x acceleration. Blue arrows highlight some artifacts produced by SwinMR which is not present in the ground truth and in the result of the proposed model. (b). Coronal PDFS 4x acceleration. Blue arrows highlight the sharp reconstruction produced by the proposed method.
the benefits of both attention methods. It can be seen that the locality-based enhancement improves the SSIM considerably, as it enables better capturing of local contextual features. This can be seen in Figure 4.
## 4 Conclusion
Our work increases the receptive field of the transformer without increasing the computational complexity and complements global information with local contextual features by integrating convolutions in transformers. We have trained the proposed model in a self-supervised manner to remove the necessity of fully sampled data. We have evaluated our model for the reconstruction of the multi-coil knee MRI datasets in three different acquisition protocols and show that the proposed architecture outperforms other methods. We show that the integration of local contextual features obtained from convolution, with global information obtained using dilated self-attention improves the performance of the image reconstruction.
\begin{table}
\begin{tabular}{c c} \hline Architecture & PSNR / SSIM \\ \hline CNN & 31.80 / 0.8617 \\ \hline SAB & 32.99 / 0.8863 \\ \hline DAB & 33.10 / 0.8926 \\ \hline SAB + DAB w/o locality & 33.47 / 0.8974 \\ \hline
**SAB + DAB** & **33.77** / **0.9056** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of different blocks in the model. The best results are highlighted in bold.
Figure 4: (a) Qualitative comparison of different models (b). Residual map to highlight the difference. Images from left to right correspond to the output of CNN, SAB, DAB, SAB and DAB without locality, Proposed method, and ground truth respectively. |
2301.08153 | SwiftAvatar: Efficient Auto-Creation of Parameterized Stylized Character
on Arbitrary Avatar Engines | The creation of a parameterized stylized character involves careful selection
of numerous parameters, also known as the "avatar vectors" that can be
interpreted by the avatar engine. Existing unsupervised avatar vector
estimation methods that auto-create avatars for users, however, often fail to
work because of the domain gap between realistic faces and stylized avatar
images. To this end, we propose SwiftAvatar, a novel avatar auto-creation
framework that is evidently superior to previous works. SwiftAvatar introduces
dual-domain generators to create pairs of realistic faces and avatar images
using shared latent codes. The latent codes can then be bridged with the avatar
vectors as pairs, by performing GAN inversion on the avatar images rendered
from the engine using avatar vectors. Through this way, we are able to
synthesize paired data in high-quality as many as possible, consisting of
avatar vectors and their corresponding realistic faces. We also propose
semantic augmentation to improve the diversity of synthesis. Finally, a
light-weight avatar vector estimator is trained on the synthetic pairs to
implement efficient auto-creation. Our experiments demonstrate the
effectiveness and efficiency of SwiftAvatar on two different avatar engines.
The superiority and advantageous flexibility of SwiftAvatar are also verified
in both subjective and objective evaluations. | Shizun Wang, Weihong Zeng, Xu Wang, Hao Yang, Li Chen, Yi Yuan, Yunzhao Zeng, Min Zheng, Chuang Zhang, Ming Wu | 2023-01-19T16:14:28Z | http://arxiv.org/abs/2301.08153v2 | # SwiftAvatar: Efficient Auto-Creation of Parameterized Stylized Character
###### Abstract
The creation of a parameterized stylized character involves careful selection of numerous parameters, also known as the "avatar vectors" that can be interpreted by the avatar engine. Existing unsupervised avatar vector estimation methods that auto-create avatars for users, however, often fail to work because of the domain gap between realistic faces and stylized avatar images. To this end, we propose _SwiftAvatar_, a novel avatar auto-creation framework that is evidently superior to previous works. SwiftAvatar introduces dual-domain generators to create pairs of realistic faces and avatar images using shared latent codes. The latent codes can then be bridged with the avatar vectors as pairs, by performing GAN inversion on the avatar images rendered from the engine using avatar vectors. Through this way, we are able to synthesize paired data in high-quality as many as possible, consisting of avatar vectors and their corresponding realistic faces. We also propose semantic augmentation to improve the diversity of synthesis. Finally, a light-weight avatar vector estimator is trained on the synthetic pairs to implement efficient auto-creation. Our experiments demonstrate the effectiveness and efficiency of SwiftAvatar on two different avatar engines. The superiority and advantageous flexibility of SwiftAvatar are also verified in both subjective and objective evaluations.
\({}^{1}\)Beijing University of Posts and Telecommunications \({}^{2}\)Douyin Vision
{wangshizun, zhangchuang, wuming}@bupt.edu.cn
{zengweihong, wangxu.ailab, yang.hao, chenli.phd, yuanyi.cv, zengyunzhao, zhengmin.666}@bytedance.com
## 1 Introduction
The emerging of the Metaverse concept is alongside with the wide usage of virtual avatars, embodiment of Metaverse users in various styles, which are popular in modern digital lives such as socialization, e-shopping and gaming. Even though many avatar platforms, such as Zepeto, BitMoji and ReadyPlayerMe, have enabled users to create their own stylized avatars by specifying avatar vectors that can be interpreted by their avatar engines, the manual creation process is tiresome and time consuming, especially when the engine provides with a large set of options making up a long avatar vector. Besides, the avatar vectors to specify usually consists of parameters in both continuous forms (to control facial shape, eye spacing, etc.) and discrete forms (to control hair styles, wearings etc.), all requiring careful selection and cautious adjustment to achieve a satisfactory result. Therefore, it is valuable to study _how to automatically create a stylized avatar that best matches the user's input selfie_.
A straightforward way to realize avatar auto-creation is using supervised learning: a network is trained on labeled data to predict avatar vectors based on real face inputs. However, this requires large amount of data collection and manual labeling, which is laborious, expensive, and is not generalizable across engines. Because the definition of avatar vectors and assets vary from engine to engine, data labeled for one engine can not be used to train on other engines.
Several unsupervised learning methods have been proposed to address avatar auto-creation without using any labeled data, including Tied Output Synthesis (TOS) [12] and the Face-to-Parameter (F2P) series [13, 14]. The main idea of these works can be abstracted by Fig. 2-a. In order to achieve an avatar vector that renders avatar image as similar to the input face as possible, these methods impose constraints on the image-level. They suffer from issues as follows: 1) The image-level similarity constraints they establish are designed for realistic avatar images, not applicable to stylized avatar images that have domain gap with real face images. 2) Leveraging the image-level supervision requires that the avatar rendering process is differentiable. So they usually introduce an imitator network that imitates the behavior of the non-differentiable avatar engine. However, the un-avoidable deviation of imitators from the original avatar engine, as illustrated by Fig. 2-a, degrades the accuracy of the similarity measure. 3) Some approaches like F2P [13] need iterative optimization to guarantee the quality of estimated avatar vectors, which is time-consuming in inference.
To address these issues, we propose _SwiftAvatar_, a novel avatar auto-creation framework shown by Figure. 2-b). Unlike previous works that use similarity constraints to find the avatar vector whose rendered image best matches a given face, the core idea of our framework is cross-domain data synthesis. SwiftAvatar is able to synthesize pairs of avatar vectors and corresponding realistic faces in high fidelity as many as possible. They are used to train a light-weight estimator that directly predicts avatar vectors from input selfie. Specifically, the SwiftAvatar framework consists of three components: _dual-domain generators_, a pipeline for _cross-domain paired data production_, and an _avatar estimator_.
The dual-domain generators comprise a realistic generator and an avatar generator, both adopting an architecture from SemanticStyleGAN [13]. The realistic generator is pretrained on real faces, but the avatar generator is transfer-learned on engine-rendered avatar images with color consistency constraints. So that given a same latent code, the two generators could generate a realistic face image and an avatar image that naturally look similar. Data synthesis relies on the dual-domain generators. The production of synthetic data starts from randomly sampled avatar vectors. They are sent to the avatar engine to render avatar images, which are then inverted into latent codes through the avatar generator. Finally, the latent codes are fed into the realistic generator to get realistic face images corresponding to the sampled avatar vectors. Moreover, we introduce semantic augmentation to expand the diversity of produced data by adding local perturbation to latent code. With the synthetic data, we can then train an avatar estimator on them. The avatar estimator is light-weight and efficient at inference. Given user selfie image, it accomplish the auto-creation by directly predicting an avatar vector that matches the input.
Our experiments involve both objective and subjective evaluations, comparing SwiftAvatar with previous methods in different aspects. SwiftAvatar achieves advantageous results over all existing methods in creating avatars fidelity to the input images. Moreover, experiments on two diverse avatar engines verify the strong generality of SwiftAvatar. Qualitative results illustrated in Fig. 1, show that SwiftAvatar can generate reasonable avatars for both engines given input faces. In summary, our contributions are as follows:
* A novel framework, SwiftAvatar, is proposed that can automatically create a stylized avatar given a user selfie image. It can be swiftly applied to any arbitrary avatar engines without extra assumptions (e.g. differentiable rendering capability) on that engine.
* SwiftAvatar presents a novel pipeline that produces paired data across domains. It involves dual-domain generators to address the domain gap between realistic faces and stylized avatars. A novel semantic augmentation is also devised to improve the diversity of data synthesis.
* Experiments show that SwiftAvatar outperforms previous methods in terms of both quality and efficiency. Results on two different avatar engines also verify the strong generalizability of SwiftAvatar.
## 2 Related Work
3D Face ReconstructionMany progresses on 3D face reconstruction [13, 14] cannot be achieved without 3D morphable models (3DMM) [1] and its variants like BFM [1] and FLAME [11], where the geometry and texture of a photorealistic 3D face are parameterized as a vector through linear transform. Recent researches also explore representing 3D faces in other formats like dense landmarks [17] or position maps [18]. However, stylized avatar auto-creation is not 3D face reconstruction. They differ in two folds: 1) 3D face reconstruction methods aim to recover realistic 3D faces, not stylized characters; 2) Most avatar engines utilize avatar vectors to render stylized 3D characters according to manually designed assets. Estimating avatar vectors is much more difficult than reconstructing 3DMM coefficients, since the graphics rendering of avatar engines are usually black-box and non-differentiable.
GAN and GAN InversionThe rapid growth of generative networks, such as GANs [15, 16] and VAEs [19, 17], inspires methods to use _latent codes_ that can implicitly parameterize face images through a pre-trained generator. Among the pre-trained generators, the most popular one is the generator of the StyleGAN [16, 17], known for its impressive capability in generating high quality face images. In order to leverage the pre-trained StyleGAN generator in editing existing face images, GAN in
version Zhu et al. (2020) is required to compute the latent code whose StyleGAN output best matches a given face image. Existing GAN inversion methods estimate latent codes using either iterative optimization Abdal et al. (2020) or training a feed-forward encoder Richardson et al. (2021) using VGG perceptual loss Johnson et al. (2016), LPIPS loss Zhang et al. (2018) or face identity loss Deng et al. (2019). Though GAN inversion can not be directly applied to estimate the avatar vectors due to the hand-crafted nature of avatar engines, SwiftAvatar leverages GAN inversion in its data production pipeline to build the correspondence between avatar vectors and latent codes.
Portrait StylizationRecent literatures on portrait stylization also benefit a lot from pre-trained StyleGANs. On one hand, finetuning the pre-trained StyleGAN generator provides with an agile approach for synthesizing faces of new styles Song et al. (2021); Back (2021). On the other hand, freezing low-level layers of the generator during finetuning helps preserving structural information (e.g. face attributes) between face images generated from the original generator (realistic domain) and the finetuned generator (stylized domain), when they are using similar latent codes Back (2021); Huang et al. (2021). Though this cross-domain latent code sharing strategy cannot be directly applied on avatar engines, we discover that it has a strong potential in cross-domain data synthesis. In specific, we design the data production pipeline of SwiftAvatar to leverage a pre-trained SemanticStyleGAN Shi et al. (2022). It adopts a compositional generator architecture, disentangling the latent space into different semantic areas separately, and could provide more precise local controls on synthesized face images.
Stylized Avatar Auto-CreationAuto-creating avatars for individual users has become an important capability to the entertainment industry.To ensure quality, commercial solution usually involve a large amount of manual annotations, something this paper seeks to avoid. Among the published approaches that avoid manual annotations, some Shi et al. (2019, 2020) are designed for realistic avatars only, which share the same domain with the real face images, it is easy to define whether they match or not. Therefore, these methods can utilize straightforward in-domain supervisions to improve fidelity of creation, such as the image-level L1/L2 losses, face parsing loss, or the face identity loss Deng et al. (2019). Creating stylized avatars from real faces, on the contrary, is more difficult than creating realistic avatars. There are only few works toward this direction. Tied Output Synthesis (TOS) Wolf et al. (2017) devises a encoder-decoder structure, that eliminates the domain gap by sharing one encoder across two domains. The contemporaneous work AgileAvatar Sang et al. (2022) formulates a cascaded framework which progressively bridges the domain gap. To address the non-differentiable issue of avatar engines, all these methods need to train an imitator network to imitate the engines' rendering procedure. While the quality of the images generated by the imitator is relatively poor. Besides, they all impose explicit constraints on images from different domains, resulting in suboptimal matches because of the domain gap. By contrast, our SwiftAvatar framework employs original avatar engine to generate high-quality images and utilizes dual-domain generators to overcome the domain gap.
## 3 Methodology
In this section, we present our unsupervised framework for stylized avatar's auto-creation, as shown in Fig.3. It aims at estimating avatar vector \(p\) for input real face image \(x\), then the estimated \(p\) can be used to render the corresponding stylized avatar \(y\) by avatar engine \(E\). Our solution is split into three parts: dual-domain generators in Sec. 3.1, paired data production in Sec. 3.2, and avatar estimator in Sec. 3.3. Dual-domain generators address the domain gap problem by generating realistic faces and stylized avatars with shared latent codes. Then, paired data production focuses on how to generate paired data consisting of avatar vectors and realistic faces. Finally, avatar estimator estimates desired avatar vectors similar to input real face images.
### Dual-domain Generators
The dual-domain generators consist of a realistic generator \(G_{\textit{real}}\) and an avatar generator \(G_{\textit{avatar}}\) to achieve cross-domain image generation. Given the same latent code, they can si
Figure 2: Conceptual comparison of previous and our methods. a) Previous methods try to solve unknown avatar vectors under image-level constraints, which requires to train a differentiable engine imitator to participate the optimization. b) While our method can use origin avatar engine and cross-domain generation to synthesize paired data (image-label), and then a model is trained based on the synthetic dataset.
multaneously synthesize paired images of both realistic face and stylized avatar while preserving the same attributes (e.g. skin color, hair style). To impose an extra facial consistency constraint between the realistic and the avatar domains, we adopt SemanticStyleGAN[20] as the architecture of two generators owing to its extra semantic segmentation output.
Cross-domain GenerationPretrained SemanticStyleGAN on CelebAMask-HQ [14] is directly used as realistic generator \(G_{\textit{real}}\), and also used to initialize the weight of \(G_{\textit{avatar}}\). We perform transfer learning on \(G_{\textit{avatar}}\): using only limited number of avatar images \(\mathcal{Y}\) to finetune avatar generator \(G_{\textit{avatar}}\). \(\mathcal{Y}\) are rendered from avatar engine using randomly sampled avatar vectors. The finetune procedure follows the settings in SemanticStyleGAN, which using the loss of StyleGAN2 [11]:
\[\mathcal{L}_{StyleGAN2}=\mathcal{L}_{adv}+\lambda_{R1}\mathcal{L}_{R1}+ \lambda_{path}\mathcal{L}_{path} \tag{1}\]
where \(\lambda_{R1}\), \(\lambda_{path}\) are the constant weights of R1 regularization [11] and path length regularization [11] separately. Adversarial loss \(\mathcal{L}_{adv}\) adopts non-saturating logistic loss [12] and it forces \(G_{\textit{avatar}}\) to generate images similar to avatar image dataset \(\mathcal{Y}\). R1 regularization \(\mathcal{L}_{R1}\) is employed to improve the training stability and reduce the number of artifacts. And path length regularization \(\mathcal{L}_{path}\) leads to more reliable and consistently behaving models.
Facial ConsistencyAlthough directly fine-tuning SemanticStyleGAN can generate structurally similar paired data of avatar image and realistic face, the colors of each region are not well matched. Since SemanticStyleGAN learns the joint modeling of image and semantic segmentation. Such design can simultaneously synthesize face images and their semantic segmentation results. We utilize the semantic segmentation output and introduce a color matching loss for cross-domain facial color consistency. Specifically, we extract specified pixels from same semantic areas in generated paired images, and match the mean color of them. \(m^{s}(I)\) is the mean color of the region \(s\) in the image \(I\), and we mainly consider matching the color in hair and skin areas: \(s\in\{hair, skin\}\). The color matching loss is:
\[\mathcal{L}_{color}=\sum_{s}\left\|m^{s}(G_{\textit{real}}(z))-m^{s}(G_{ \textit{avatar}}(z))\right\|^{2} \tag{2}\]
Overall, the final finetuning loss \(\mathcal{L}_{total}\) for \(G_{\textit{avatar}}\) is formulated as:
\[\mathcal{L}_{total}=\mathcal{L}_{StyleGAN2}+\lambda_{color}\mathcal{L}_{color} \tag{3}\]
An example is shown in Figure. 4, when given a shared latent code, the dual-domain generators can generate a pair of images: a realistic face and a stylized avatar. They are similar in facial structure and color composition, yet belong to two different domains.
### Paired Data Production
Paired data production pipeline focuses on synthesizing paired avatar vectors and realistic face images, as is illustrated in Figure. 3. We sample a number of random avatar vectors as labels \(\mathcal{P}\), which are used by the graphics engine to generate corresponding avatar images \(\mathcal{Y}\). Then, for every avatar image \(y\), we pass it through GAN inversion to get its latent code \(w\). We adopt optimization-based GAN inversion for its better performance:
\[w^{*}=\operatorname*{arg\,min}_{w}\mathcal{L}_{invert}(y,w) \tag{4}\]
where \(w^{*}\) represents our target latent code. For faster convergence, latent codes are initialized with mean value \(w_{mean}\), and optimized by gradient descent. We use LPIPS loss [15] to measure perceptual similarity, and
Figure 3: Overview of our method. a) Dual-domain generators consist of a fixed realistic generator and a transfer-learned avatar generator. They can generate corresponding realistic and avatar images when given the same latent codes. b) Paired data production starts from randomly sampled avatar vectors, which are sent to the avatar engine to render avatar images. These images are then inverted into latent codes through GAN inversion, and the latent codes are fed into the realistic generator to get realistic face images corresponding to the sampled avatar vectors. c) Supervised by the paired data, the avatar estimator can predict avatar vectors when given an real face input, then the avatar can be rendered by avatar engine.
mean squared error (MSE) loss between original images and reconstructed avatar images to measure reconstruction similarity. Besides, MSE loss between \(w_{mean}\) and \(w\) is set as latent regularization to ensure better generation results. The loss function is formulated as:
\[\mathcal{L}_{invert}=\lambda_{p}\textit{LPIPS}(y,G_{\textit{ avatar}}(w))+\\ \lambda_{i}\left\|y-G_{\textit{avatar}}(w)\right\|^{2}+\lambda_{l }\left\|w-w_{mean}\right\|^{2} \tag{5}\]
where \(\lambda_{p}\), \(\lambda_{i}\), \(\lambda_{l}\) are constants to balance three loss terms. After obtaining desired latent code \(w\), it is fed into the realistic generator \(G_{\textit{real}}\) to generate a realistic face \(x\) which is similar to original avatar image \(y\) in identification:
\[x=G_{\textit{real}}(w) \tag{6}\]
In this way, we can generate paired data \((p,x)\) used for later avatar vector estimation training process. An example illustration of paired data is shown in Figure. 5.
Semantic AugmentationThe sampled avatar vectors, as well as their rendered avatar faces suffer from the lack of diversity, due to the limited amount of assets available for avatar engines. Take the "hair style" attribute for example, in most stylized avatar engines, different hair styles are determined by selecting from different hair meshes all predefined in the engine. The limited number of predefined mesh assets restricts the capability of avatar engines to match a real-world user face, whose hair styles would be countless. To enrich the diversity of generated realistic faces in data production, we take advantage of the compositional generation ability of our dual-domain generators which are implemented as SemanticStyleGANs, and design the _semantic augmentation_. In SemanticStyleGAN, each semantic part is modulated individually with corresponding local latent codes. Such property enables us to manipulate only specific regions of the synthetic images while keeping other regions unchanged. In implementation, we add random noises to part of latent code corresponding to these ambiguous avatar vectors (e.g. hair type). Semantic augmentation can be described as:
\[w_{k}\rightarrow(1-\lambda_{aug})w_{k}+\lambda_{aug}n \tag{7}\]
where \(\lambda_{aug}\) is a hyper-parameter to adjust semantic augmentation tensity, \(w_{k}\) is the local part of latent code to be augmented, and \(n\) represents for random noise. A semantic augmentation example is shown in Figure. 5, where hair and background region are changed.
### Avatar Estimator
Once the aforementioned synthetic paired dataset is produced, we can train an avatar estimator to predict avatar vectors, which contain continuous and discrete parameters. We choose ResNet-18 He et al. (2016) pretrained on ImageNet Deng et al. (2009) as our backbone. We remove its last fully connected layers and add multiple separate MLP heads for different parameter estimation. All continuous parameters form a target vector to be predicted in one head, supervised by L1 loss. Every discrete parameter estimation is carried out with a standalone head. Because both generation and semantic augmentation would inevitably introduce noises to discrete labels, we choose symmetric cross-entropy (SCE) loss Wang et al. (2019), which has been proven robust to noises, for the optimization of discrete tasks. The total loss of avatar estimator is:
\[\mathcal{L}_{estimator}=\lambda_{d}\textit{SCE}(\widehat{p_{d}^{i}},p_{d}^{i})+ \lambda_{c}\left|\widehat{p_{c}}-p_{c}\right| \tag{8}\]
where \(\lambda_{d}\) and \(\lambda_{c}\) are hyper-parameters to balance two loss terms. \(\widehat{p_{d}^{i}}\), \(\widehat{p_{c}}\) are the prediction results of \(i\)-th discrete head and continuous head respectively. And \(p_{d}^{i}\), \(p_{c}\) are their corresponding ground-truth.
## 4 Experiments
Experimental DataTo verify the effectiveness of our method, we conduct experiments on two stylized avatar engines: the TikTok engine and the Alter engine. The TikTok engine contains resourceful discrete and continuous avatar parameters, so we generate 50000 avatar vectors and corresponding rendered images. The Alter engine is an open-source avatar engine and only contains discrete avatar parameters which has fewer assets than the TikTok engine, so we just generate 10000 avatar vectors and corresponding rendered images. The detailed information of both avatar engines can be found in supplementary materials. These
Figure 4: Generated results of our dual-domain generators. When providing randomly sampled latent codes, we can obtain a) and b), realistic faces and their semantic segmentation results. Simultaneously, we can obtain c) and d), stylized avatar images and their semantic segmentation results.
Figure 5: Examples of produced paired data. a) Engine rendered avatar images. b) Reconstructed avatar images. c) Generated realistic images. d) Semantic augmented images.
avatar images and avatar vectors are used for finetuning avatar generator and producing paired data. For evaluation, we choose 116 images from FFHQ dataset [10], which consists of diverse kinds of face shapes, hairstyles, etc. We invite designers to manually generate avatar vectors for these 116 images as ground truth.
Implementation DetailsWe implement our methods using PyTorch 1.10 library and perform all experiments on NVIDIA V100 GPUs. When finetuning the avatar generator \(G_{\textit{avatar}}\), we use the same optimizer settings as in SemanticStyleGAN. Batch size is set to 16, style mixing probability [10] is set to 0.3. \(\lambda_{R1}\), \(\lambda_{path}\) are set to 10 and 0.5 separately. Lazy regularization [10] is applied every 16 mini-batches for discriminator (R1 regularization) and every 4 mini-batches for generator (path length regularization). All the images used for generators are aligned and resized to resolution \(512\times 512\). The optimization-based GAN inversion approach employs Adam [11] optimizer in the paired data production stage, and the learning rate initially follows cosine annealing with 0.1. We optimize 200 steps for all latent codes, and \(\lambda_{i}\), \(\lambda_{p}\), \(\lambda_{l}\) are set to 0.1, 1 and 1, respectively. The mean latent code \(w_{mean}\) is the average among \(10^{4}\) randomly generated latent codes from avatar generator \(G_{\textit{avatar}}\) in \(\mathcal{W}\) space, and serve as the initialization. Notably, directly optimizing the latent code could be problematic since some avatar assets are transparent, e.g. glasses. Thus, we use a modified version \(\tilde{w}_{mean}\) for latent code initialization (See supplementary materials for details). For semantic augmentation, we generate 10 augmented images for each latent code, using randomly generated noise in \(\mathcal{W}\) space.we set \(\lambda_{aug}\) to 1 for the background to improve the model robustness of background variance, and also set \(\lambda_{aug}\) to 0.3, 0.06 for the hair part and glasses part to expand data diversity. In the avatar estimator training stage, the input images of avatar estimator are resized to \(224\times 224\). We use the Adam optimizer with batch size 256 to train 100 epochs. The learning rate is set to \(1e-3\), and decayed by half per 30 epochs. For the experiments on TikTok engine, there are 1 continuous head and 8 discrete heads inside the avatar estimator, so we set \(\lambda_{c}\) and \(\lambda_{d}\) to 1 and 10 separately. For the experiments on Alter engine, the avatar estimator contains 6 discrete heads and the training loss only contains discrete loss.
Figure 6: Visual comparison with other methods on TikTok engine and Alter engine. a) Given input images, b) our method creates stylized avatars that look similar to input images. c) Baseline method without considering the domain gap problem in stylized avatars. d) F2P [22], an image-level constraint method intending to create semi-realistic avatars, is not suitable to create stylized avatars. e) F2P v2 [22], a fast and robust version of F2P. f) Manually created avatars by professional designers, which can be regarded as ground truth.
### Comparison with Other methods
We compare our method with other methods, including Baseline, F2P [21], F2P v2 [22] on both their auto-creation similarity and inference speed. Baseline method is setup as ignoring the domain gap problem, where the avatar estimator is trained on \((y,p)\) paired data, that is, trained on rendered avatar images instead of real face images. Figure. 6 shows a comparison of rendered stylized avatars among different methods corresponding to their predicted avatar vectors. As can be seen, our method can better address the domain gap problem and create stylized avatars with high similarity with input real faces and approximate the quality of manual method from designers. For more results, please refer to supplementary materials.
### Quantitative Evaluation
Although the avatar auto-creation has no standard answer, we generally regard the avatar manually created by professional designers as the ground truth, and quantitatively evaluate at image level. We calculate the perceptual distance (LPIPS) [14] between auto-created avatar and manual-created avatar to simulate human observation. The lower distance indicates the avatar is better matched to input image. The results are presented in Table. 1, from which we can see our method significantly outperforms others. Since our method only needs one forward propagation, our inference speed is also competitive, which can be applied in real-time applications.
### Human Rating
We invited 50 volunteers to subjectively evaluate all algorithms of 50 images from evaluation dataset on both graphics engine. Given the input real face images and stylized avatars generated from our and other methods, the volunteers were requested to pick the best matching avatar for each input real face. The methods were presented in random order, and the volunteers were given an unlimited time to choose. The human rating results are shown in Table. 2. \(59.11\%\) of the answers on TikTok engine and \(63.68\%\) on the Alter engine selected our method as the best matching results, showing the superiority of our method compared with others.
### Ablation Study
We conduct the ablation experiments in terms of domain adaptation and semantic augmentation to verify their importance in our framework. We adopt the same evaluation metric described in Sec. 4.2. The ablation starts from Baseline method, where the avatar estimator is trained on rendered avatar images and avatar vectors pair. Then on the basis of baseline method, we add domain adaptation and semantic augmentation in turn. Table. 3 shows their quantitative results on different engines. The domain adaptation greatly alleviates the domain gap problem in stylized avatar auto-creation, establishes a bridge of connecting real faces and stylized avatars. The semantic augmentation brings noticeable improvement for our method due to its expansion of diversity to samples.
## 5 Limitations and Future Work
There are two main limitations we observed in the experiments. First, our method occasionally predicts wrong color influenced by environmental lighting. This problem might be resolved by considering lighting condition into our pipeline. Second, in paired data production, avatar vector sampling distribution directly influences the training data quality. Simply random sampling could produce some strange images and may cause long-tail problem (e.g. gender, age). In the future, we will perform attribute analysis, and introduce reasonable sampling priors to address synthetic data distribution problem.
## 6 Conclusion
In summary, we present a novel unsupervised framework for auto-creation of stylized avatars. We design the dual-domain generators to address the domain gap between the real images and stylized avatars. Then following the paired data production pipeline, high-quality paired data are produced, which is used for training the avatar estimator. Finally stylized avatars are created by conducting efficient avatar vector estimation. Compared with previous methods, our method is more concise in training stage and more efficient in inference stage. Results on quantitative evaluation and human rating demonstrate the superiority of our method. Also, the success of applying on two different avatar graphics engines demonstrates the generality of our method.
\begin{table}
\begin{tabular}{l|l|l|l} \hline Method & TikTok engine \(\downarrow\) & Alter engine \(\downarrow\) & Speed \(\uparrow\) \\ \hline Baseline & 0.5033 & 0.3962 & \(\sim 10^{2}\) Hz \\ F2P & 0.3562 & 0.3040 & \(\sim 1\) Hz \\ F2P v2 & 0.4466 & 0.2825 & \(\sim 10^{2}\) Hz \\ Ours & **0.3110** & **0.2405** & \(\sim\)**10\({}^{\mathbf{2}}\) Hz** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative Evaluation. We compare our method with three other methods on TikTok engine and Alter engine. Lower distance represents results are more similar to manual-creation. We test speed on NVIDIA V100.
\begin{table}
\begin{tabular}{l|l|l} \hline Method & TikTok engine \(\downarrow\) & Alter engine \(\downarrow\) \\ \hline baseline & 0.5033 & 0.3962 \\ + domain adaptation & 0.3401 & 0.3123 \\ + semantic aug & **0.3110** & **0.2405** \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study on two engines. Lower distance indicates better matching results to manual-creation.
\begin{table}
\begin{tabular}{l|l|l|l} \hline Method & Ours & baseline & F2P & F2P v2 \\ \hline TikTok & **59.11\%** & 5.66\% & 30.86\% & 4.37\% \\ Alter & **63.68\%** & 18.94\% & 8.65\% & 8.73\% \\ \hline \end{tabular}
\end{table}
Table 2: Human subjective rating on two engines. Our method earns the most choices when asked to choose the avatar which matches the human image. |
2304.03800 | Towards $\it{ab}$-$\it{initio}$ nuclear theory calculations of
$δ_\mathrm{C}$ | We propose a new theory framework to study the isospin-symmetry breaking
correction $\delta_\text{C}$ in superallowed nuclear beta decays, crucial for
the precise determination of $|V_{ud}|$. Based on a general assumptions of the
isovector dominance in ISB interactions, we construct a set of functions
$F_{T_z}$ which involve nuclear matrix elements of isovector monopole operators
and the nuclear Green's function. Via the functions $F_{T_z}$, a connection of
$\delta_\text{C}$ to measurable electroweak nuclear radii is established,
providing an experimental gauge of the theory accuracy of $\delta_\text{C}$. We
outline a strategy to perform ab-initio calculations of $F_{T_z}$ based on the
Lanczos algorithm, and discuss its similarity with other
nuclear-structure-dependent inputs in nuclear beta decays. | Chien-Yeah Seng, Mikhail Gorchtein | 2023-04-07T18:06:56Z | http://arxiv.org/abs/2304.03800v2 | # Towards _ab-initio_ nuclear theory calculations of \(\delta_{\rm C}\)
###### Abstract
We propose a new theory framework to study the isospin-symmetry breaking correction \(\delta_{\rm C}\) in superallowed nuclear beta decays, crucial for the precise determination of \(|V_{ud}|\). Based on a general assumptions of the isovector dominance in ISB interactions, we construct a set of functions \(F_{T_{s}}\) which involve nuclear matrix elements of isovector monopole operators and the nuclear Green's function. Via the functions \(F_{T_{s}}\), a connection of \(\delta_{\rm C}\) to measurable electroweak nuclear radii is established, providing an experimental gauge of the theory accuracy of \(\delta_{\rm C}\). We outline a strategy to perform ab-initio calculations of \(F_{T_{s}}\) based on the Lanczos algorithm, and discuss its similarity with other nuclear-structure-dependent inputs in nuclear beta decays.
## I Introduction
For many decades, superallowed beta decays of \(J^{p}=0^{+}\), \(T=1\) nuclei have provided the best measurement of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element \(V_{ud}\). The reason is of twofold: (1) At tree level only the vector charged weak current is involved, whose matrix element is exactly known assuming isospin symmetry, and (2) there are so far 23 measured superallowed transitions, with 15 among them whose \(ft\)-value precision is 0.23% or better; the large sample size implies a large gain in statistics [1].
This advantageous stance is now challenged by the free neutron decay. On the one hand, the latter benefits from recent improvements in the single-nucleon radiative correction theory [2; 3; 4; 5; 6; 7; 8] and measurements of the neutron lifetime \(\tau_{n}\)[9; 10; 11; 12] and the axial coupling constant \(g_{A}\)[13; 14; 15; 16]. On the other, recent analyses unveiled new sources of nuclear structure uncertainties in superallowed decays [3; 17; 18]. In fact, taking the single best measurement of \(\tau_{n}\) and \(g_{A}\), one obtains (adopting the value of single-nucleon radiative correction quoted in Ref.[19]):
\[|V_{ud}|_{n}=0.97413(43)\, \tag{1}\]
which should be compared to the superallowed beta decay determination quoted in the same reference:
\[|V_{ud}|_{0^{+}}=0.97367(32). \tag{2}\]
One sees that the precision of \(|V_{ud}|_{n}\) is indeed getting closer to \(|V_{ud}|_{0^{+}}\) and, more importantly, a small discrepancy between the two values starts to emerge. This could add to the so-called Cabibbo angle anomaly [20; 21; 22], the mutual disagreement between different extractions of the Cabibbo angle \(\theta_{C}\), which was also sharpened by new theory calculations in the \(V_{us}\) sector [23; 24; 25; 26; 27; 28; 29].
Further improvements in the nuclear-structure-dependent Standard Model (SM) theory for superallowed nuclear decays are required for the latter to regain their lead. Process-specific quantities originating from measurements and theoretical corrections are usually lumped into the nucleus-independent \(\mathcal{F}t\)-value,
\[\mathcal{F}t\equiv ft(1+\delta_{\rm R}^{\prime})(1+\delta_{\rm NS}-\delta_{\rm C })\,. \tag{3}\]
The nucleus-dependent \(ft\)-values are derived from experimental measurements of the decays' \(Q\)-values, branching ratios and halflives by absorbing Coulomb distortion effects in terms of the point-charge Fermi function [30] and beyond (see [31] for a review). The "outer radiative correction" \(\delta_{\rm R}^{\prime}\) accounts for QED effects beyond Coulomb distortions, and is well under control [32; 33; 34; 35]. The remaining two corrections depend on nuclear structure in a non-trivial way: \(\delta_{\rm NS}\) represents the nuclear-structure correction to the single-nucleon \(\gamma W\) box diagram, whereas \(\delta_{\rm C}\) represents the isospin-symmetry breaking (ISB) correction to the Fermi matrix element \(M_{F}\). The recent inflation of the \(|V_{ud}|_{0^{+}}\) theory uncertainty comes entirely from \(\delta_{\rm NS}\), where a previously missed correction from quasi-elastic nucleons was identified. A combination of the dispersive representation [36; 37] and ab-initio calculations of nuclear \(\gamma W\)-box diagrams may help reducing this uncertainty in the near future.
In this paper we concentrate on another important theory input, the ISB correction \(\delta_{\rm C}\). It measures the deviation of the full Fermi matrix element \(M_{F}\) from its isospin-symmetric limit, \(M_{F}^{0}=\sqrt{2}\) for \(T=1\). So far its determination relies solely on model calculations, which is a classic problem in nuclear theory for more than 6 decades [38]. Frequently quoted results include calculations based on the nuclear shell model with Woods-Saxon (WS) potential [39; 40; 41; 35], Hartree-Fock wave functions [42; 43], density functional theory [44; 45], random-phase approximation [46] and the isovector monopole resonance sum rule [47]. Results from different methods are almost randomly-scattered and show no sign of convergence (see, e.g. Table I in Ref.[48]).
Interestingly enough, despite the tremendous model-dependence, the assigned theory uncertainty for \(|V_{ud}|_{0^{+}}\) due to \(\delta_{\rm C}\) in a number of highly-cited global analysis turns out to be extremely small [1; 49]. A criterion
adopted in these analysis is the ability of the model calculation to align the \(\mathcal{F}t\)-values of different superallowed transitions, per request of the conserved vector current (CVC) hypothesis [50]. This criterion effectively ruled out all but one calculation, namely the WS result, which they used in their subsequent analysis. However, this strategy is not without controversy: for example, one cannot rule out the possibility that the CVC hypothesis is invalidated by physics beyond the Standard Model (BSM), or that there is constant shift to all values of \(\delta_{\rm C}\). It has also been pointed out that the theory framework on which the WS calculation is based contains several inconsistencies, e.g. not using the correct isospin operator [51; 52; 53], and correcting for these might lead to a substantial reduction of the \(\delta_{\rm C}\) values.
A major limitation of existing calculations of \(\delta_{\rm C}\) is the absence of direct constraints from measurable ISB observables which can be used to quantify the theory uncertainties. The most precisely studied ISB observable in nuclear systems is the isobaric multiplet mass equation (IMME) that describes the mass splitting between isobaric analog states [54; 55; 56; 57]; it was used in a number of studies to either fix the model parameters [35] or as a preliminary test of the methodology's applicability [58]. However, there is no overlap between the leading nuclear matrix elements that contribute to the IMME coefficients and to \(\delta_{\rm C}\), so the extent to which IMME constrains \(\delta_{\rm C}\) is not entirely clear. To overcome this limitation, we identified in Ref. [48] a set of ISB observables \(\Delta M^{(1)}_{A,B}\) constructed from the electroweak nuclear radii across the isotriplet, which depend on the same nuclear matrix elements as \(\delta_{\rm C}\). Measurements of the former from atomic spectroscopy, beta decay recoil effects and fixed-target scattering experiments allow one to constrain the latter. To illustrate this idea, we adopted a simple isovector monopole dominance picture to derive a proportionality relation between \(\Delta M^{(1)}_{A,B}\) and \(\delta_{\rm C}\). Despite being model-dependent, this simple picture offers a useful guidance for the precision target of future experiments.
In this work we further explore the idea in Ref.[48] in a model-independent way. We construct a set of functions of an energy variable \(\zeta\)\(F_{T_{z}}(\zeta)\) (\(T_{z}=-1,0,1\)) that depend on the nuclear matrix elements common to \(\delta_{\rm C}\) and \(\Delta M^{(1)}_{A,B}\). We show how the needed ISB observables can be derived from \(F_{T_{z}}\) and its derivatives. Therefore, if a theory approach can reliably calculate \(F_{T_{z}}\) as a function of \(\zeta\), it simultaneously predicts \(\Delta M^{(1)}_{A,B}\) and \(\delta_{\rm C}\) with a correlated degree of accuracy. A good agreement of the calculations with the experimental measurements for the former will imply the reliability of the theory prediction for the latter. In this sense, the approach advocated here directly constrains \(\delta_{\rm C}\) and its uncertainty by the experiment.
The content of this work is arranged as follows. In Section II we derive the leading perturbative expression of \(\delta_{\rm C}\) and argue that existing model calculations may contain large systematic uncertainties. In Section III we review the central idea in Ref.[48], namely the construction of the two ISB observables \(\Delta M^{(1)}_{A,B}\) from the measurable electroweak nuclear radii. In Section IV we define the functions \(F_{T_{z}}(\zeta)\) and demonstrate their connection to \(\delta_{\rm C}\) and \(\Delta M^{(1)}_{A,B}\). In Section V we discuss possible strategies to compute \(F_{T_{z}}(\zeta)\) as a function of \(\zeta\), which simultaneously predicts \(\Delta M^{(1)}_{A,B}\) and \(\delta_{\rm C}\). In Section VI we draw our conclusions.
## II ISB in perturbation theory
To discuss the perturbative expression of ISB observables, we split the full Hamiltonian as \(H=H_{0}+V\), where \(H_{0}\) is the unperturbed, isospin-conserving part and \(V\) is the ISB perturbation term. We label the eigenstates of \(H_{0}\) as \(|\alpha;T,T_{z}\rangle\) (with unperturbed energy \(E_{a,T}\)), where \(T,T_{z}\) are the isospin quantum numbers, and \(a\) represents all other quantum numbers unrelated to isospin. In particular, the ground state isotriplet that undergoes superallowed beta decay transitions is labelled as \(|g;1,T_{z}\rangle\).
The most commonly studied ISB observable is IMME,
\[E(a,T,T_{z})={\sf a}(a,T)+{\sf b}(a,T)T_{z}+{\sf c}(a,T)T_{z}^{2}\, \tag{4}\]
which takes its form based on the fact that any two-nucleon interaction can at most be isotensor, i.e. we can write \(V=V^{(1)}+V^{(2)}\), where the superscript denotes the isospin. The coefficients \({\sf b}\) and \({\sf c}\) characterize the strength of ISB effects. To first order in perturbation theory, they are related to the diagonal matrix element of \(V\):
\[{\sf b}\sim\langle a;T,T_{z}|V^{(1)}|a;T,T_{z}\rangle\,\ {\sf c}\sim\langle a;T,T_{z}|V^{(2)}|a;T,T_{z} \rangle. \tag{5}\]
Experimental measurements show in general \(|{\sf b}|\gg|{\sf c}|\) which indicates the dominance of isovector ISB effects. For instance, in \(J^{P}=0^{+}\), \(T=1\) isomultiplets, one observes that the ratio \(|{\sf b}/{\sf c}|\geq 15\) for \(A\geq 26\), and increases with increasing \(A\)[59].
On the other hand, \(\delta_{\rm C}\) depends on a completely different set of nuclear matrix elements than the IMME coefficients \({\sf b}\) and \({\sf c}\). To see this, we start with the exact formalism by Miller and Schwenk [51], and label the
eigenstates of \(H\) and \(H_{0}\) temporarily as \(|n\rangle\) and \(|n\rangle\) respectively. The full Fermi matrix element for a superallowed transition \(i\to f\) is given by
\[M_{F}=\langle f|\tau_{+}|i\rangle\, \tag{6}\]
with \(\tau_{+}\) the isospin raising operator. Similarly, the isospin-limit Fermi matrix element is \(M_{F}^{0}=(f|\tau_{+}|i\rangle\). The Wigner-Brillouin perturbation theory implies,
\[|n\rangle=\sqrt{{\cal Z}_{n}}\left[|n\rangle+\frac{1}{E_{n}-\Lambda_{n}H \Lambda_{n}}\Lambda_{n}V|n\rangle\right]\, \tag{7}\]
where \(E_{n}\) is the energy of the full state \(|n\rangle\), \(\Lambda_{n}=1-|n\rangle(n|\) projects out the unperturbed state \(|n\rangle\), and
\[{\cal Z}_{n}=\left[1+(n|V\Lambda_{n}\left(\frac{1}{E_{n}-\Lambda_{n}H\Lambda_{ n}}\right)^{2}\Lambda_{n}V|n\rangle\right]^{-1} \tag{8}\]
is a normalization factor to ensure \(\langle n|n\rangle=1\). Substituting Eq.(7) into Eq.(6) gives:
\[M_{F} = \sqrt{{\cal Z}_{i}{\cal Z}_{f}}\left[M_{F}^{0}+(f|V\Lambda_{f} \frac{1}{E_{f}-\Lambda_{f}H\Lambda_{f}}\right. \tag{9}\] \[\left.\times\tau_{+}\frac{1}{E_{i}-\Lambda_{i}H\Lambda_{i}} \Lambda_{i}V|i\rangle\right]\,\]
which is the central result of Ref.[51]. It is clear from the expression above that the deviation between \(M_{F}\) and \(M_{F}^{0}\) starts at \({\cal O}(V^{2})\). Concentrating on the \({\cal O}(V^{2})\) corrections in Eq.(9) and using the definition \(|M_{F}|^{2}=|M_{F}^{0}|^{2}(1-\delta_{\cal C})\), we get,
\[\delta_{\rm C}=\langle g;1,T_{zi}|V\Lambda_{i}\left(\frac{1}{E_{g,1}-\Lambda_{i}H_{0}\Lambda_{i}}\right)^{2}\Lambda_{i}V|g;1,T_{zi}\rangle\] \[+\langle g;1,T_{zf}|V\Lambda_{f}\left(\frac{1}{E_{g,1}-\Lambda_{f }H_{0}\Lambda_{f}}\right)^{2}\Lambda_{f}V|g;1,T_{zf}\rangle\] \[-\frac{2}{M_{F}^{0}}\langle g;1,T_{zf}|V\Lambda_{f}\frac{1}{E_{g,1}-\Lambda_{f}H\Lambda_{f}}\tau_{+}\] \[\times\frac{1}{E_{g,1}-\Lambda_{i}H\Lambda_{i}}\Lambda_{i}V|g;1,T_{zi}\rangle+{\cal O}(V^{3}). \tag{10}\]
We observe that due to the presence of the projection operators the leading expression of \(\delta_{\rm C}\) contains no diagonal nuclear matrix element of the form \(\langle g;1,T_{z}|V|g;1,T_{z}\rangle\), so it is orthogonal to the leading expressions of the IMME coefficients \(\{{\tt b},{\tt c}\}\). Subsequently, the ability of a model calculation to reproduce the IMME coefficients accurately does not guarantee its ability to determine \(\delta_{\rm C}\) with the same accuracy.
To proceed further we must invoke some general properties of the ISB interaction \(V\). We will assume that \(V\) is predominantly isovector, i.e. \(V\approx V^{(1)}\). The IMME coefficients suggest that, for a \(\sim 10\%\) precision goal this is a good assumption for \(A\geq 26\). With this, we insert a complete set of intermediate nuclear states \(\{|a;T,T_{z}\rangle\}\) to each term in Eq.(II) and apply the Wigner-Eckart theorem,
\[\langle a;T,T_{z}|V|g;1,T_{z}^{\prime}\rangle=C_{1,T_{z}^{\prime};1,0}^{1,1;T,T_{z}}\langle a;T||V||g;1\rangle\, \tag{11}\]
with \(C\)s the Clebsch-Gordan coefficients. It recasts \(\delta_{\rm C}\) in terms of the reduced matrix element \(\langle a;T||V||g;1\rangle\). Since \(V\) is an isovector, the intermediate states can only have \(T=0,1,2\); also, the \(a=g\), \(T=1\) intermediate states are excluded by the projection operators. With these we obtain:
\[\delta_{\rm C} = \frac{1}{3}\sum_{a}\frac{|\langle a;0||V||g;1\rangle|^{2}}{(E_{a,0}-E_{g,1})^{2}}+\frac{1}{2}\sum_{a\neq g}\frac{|\langle a;1||V||g;1\rangle|^ {2}}{(E_{a,1}-E_{g,1})^{2}} \tag{12}\] \[-\frac{5}{6}\sum_{a}\frac{|\langle a;2||V||g;1\rangle|^{2}}{(E_{a,2}-E_{g,1})^{2}}+{\cal O}(V^{3})\.\]
Within an isotriplet there are two superallowed transitions: \((T_{zi}=-1)\rightarrow(T_{zf}=0)\) and \((T_{zi}=0)\rightarrow(T_{zf}=+1)\). It turns out that Eq.(12) applies to both transitions, which means that \(\delta_{\rm C}\) for the superallowed beta decays within the same isotriplet are identical up to \({\cal O}(V^{2})\) assuming the dominance of isotriplet ISB interaction. This conclusion is model-independent as it straightforwardly follows from the Wigner-Eckart theorem, and serves as a useful consistency check of existing calculations. Interestingly enough, this simple conclusion has never been discussed in literature. As an example, we quote in Table I the WS [1] and RPA (with PKO1 parameterization) [46] calculation of \(\delta_{\rm C}\) for the \(T_{zi}=-1\) and \(T_{zi}=0\) transitions (which we denote as \(\delta_{C}^{-1}\) and \(\delta_{\rm C}^{0}\) respectively) within the same isotriplet, and define their relative difference: \(\Delta_{C}\equiv|2(\delta_{\rm C}^{-1}-\delta_{\rm C}^{0})/(\delta_{\rm C}^{- 1}+\delta_{\rm C}^{0})|\). We find that some of their results give \(\Delta_{C}\) as large as \(20\%\) or more for \(A\geq 26\). We conclude that even the most widely-adopted model calculation of \(\delta_{\rm C}\) is not free from potentially large systematic errors.
## III Electroweak nuclear radii probe ISB effects
The first step towards a systematic reevaluation of ISB corrections to superallowed beta decays is to identify new experimental observables that are able to directly constrain \(\delta_{\rm C}\). This idea is pioneered in Ref.[48], and we briefly review it below. A key object throughout the discussion is the isovector monopole operator defined as \(\vec{M}^{(1)}=\sum_{i}r_{i}^{2}\vec{\tilde{T}}(i)\), where \(\vec{\tilde{T}}\) is the isospin operator and \(i\) labels the nucleons in the nucleus. Rank-1 irreducible tensors in the isospin space can be formed as: \(M_{0}^{(1)}=M_{z}^{(1)}\), \(M_{\pm 1}^{(1)}=\mp(M_{x}^{(1)}\pm iM_{y}^{(1)})/\sqrt{2}\). For convenience, we may also define a corresponding isoscalar monopole operator as \(M^{(0)}=\sum_{i}r_{i}^{2}\).
In Ref.[48] we defined two ISB-sensitive combinations of experimental observables. The first one reads
\[\Delta M_{A}^{(1)}\equiv\langle f|M_{+1}^{(1)}|i\rangle+\langle f|M_{0}^{(1)}|f \rangle. \tag{13}\]
The first term on the right hand side comes from the measurement of the \(t\)-dependence of the \((T_{zi}=0)\to(T_{zf}=+1)\) superallowed beta decay form factor,
\[\bar{f}_{+}(t)=1-\frac{t}{6}\langle f|M^{(1)}_{+1}|i\rangle+{\cal O}(t^{2})\, \tag{14}\]
which corresponds to the charged weak radius. The second term combines the proton and neutron distribution radii of the \(T_{z}=+1\) daughter nucleus,
\[\langle f|M^{(1)}_{0}|f\rangle=\frac{N_{f}}{2}R^{2}_{n,f}-\frac{Z_{f}}{2}R^{2} _{p,f}. \tag{15}\]
Above, the root mean square (RMS) distribution radius of a nucleon in a nucleus \(\phi\) is defined as:
\[R_{p/n,\phi}=\sqrt{\frac{1}{X}\langle\phi|\sum_{i=1}^{A}r_{i}^{2}\left(\frac{1 }{2}\pm\hat{T}_{z}(i)\right)|\phi\rangle}\, \tag{16}\]
with \(-\) for the proton and \(+\) for the neutron, and \(X=Z_{\phi}\) or \(N_{\phi}\). These radii can be measured through fixed-target scattering experiments.
In the meantime, we recall that the nuclear charge radius, largely given by \(R_{p}\), is measurable via atomic spectroscopy for both stable and unstable nuclei. With this, one may construct another experimental observable by combining the charge radii across the isotriplet,
\[\Delta M^{(1)}_{B}\equiv\frac{1}{2}\left(Z_{1}R^{2}_{p,1}+Z_{-1}R^{2}_{p,-1} \right)-Z_{0}R^{2}_{p,0} \tag{17}\]
where the subscript \(-1,0,1\) denotes \(T_{z}\) of the nucleus.
It is easy to see using Wigner-Eckart theorem that both \(\Delta M^{(1)}_{A,B}\) vanish identically in the isospin limit: replacing the external states by isospin eigenstates, we get
\[\Delta M^{(1)}_{A} \to \langle g;;1,1|M^{(1)}_{+1}|g;1,0\rangle+\langle g;1,1|M^{(1)}_{0 }|g;1,1\rangle \tag{18}\] \[= 0\] \[\Delta M^{(1)}_{B} \to \sum_{T_{z}=\pm 1}\langle g;1,T_{z}|\frac{1}{4}M^{(0)}-\frac{1}{2}M^{(1 )}_{0}|g;1,T_{z}\rangle\] \[-\langle g;1,0|\frac{1}{2}M^{(0)}-M^{(1)}_{0}|g;1,0\rangle\] \[= 0\.\]
This qualifies both observables as clean probes of ISB effects. Their leading non-zero expression arises by expanding the external states to \({\cal O}(V)\) following Eq.(7). Assuming the isovector dominance in \(V\), as discussed in Sec.II, a straightforward derivation gives
\[\Delta M^{(1)}_{A}=-\frac{1}{3}\sum_{a}\frac{\langle a;0||M^{(1)} ||g;1\rangle^{*}\langle a;0||V||g;1\rangle}{E_{a,0}-E_{g,1}}\] \[-\frac{1}{2}\sum_{a\neq g}\frac{\langle a;1||M^{(1)}||g;1\rangle ^{*}\langle a;1||V||g;1\rangle}{E_{a,1}-E_{g,1}}\] \[-\frac{1}{6}\sum_{a}\frac{\langle a;2||M^{(1)}||g;1\rangle^{*} \langle a;2||V||g;1\rangle}{E_{a,2}-E_{g,1}}\] \[-\sum_{a}\frac{\langle a;2||V||g;1\rangle^{*}\langle a;2||M^{(1) }||g;1\rangle}{E_{a,2}-E_{g,1}}+{\cal O}(V^{2}) \tag{19}\]
and
\[\Delta M^{(1)}_{B}=\mathfrak{Re}\left\{-\frac{2}{3}\sum_{a}\frac {\langle a;0||M^{(1)}||g;1\rangle^{*}\langle a;0||V||g;1\rangle}{E_{a,0}-E_{g,1}}\right.\] \[+\sum_{a\neq g}\frac{\langle a;1||M^{(1)}||g;1\rangle^{*}\langle a ;1||V||g;1\rangle}{E_{a,1}-E_{g,1}} \tag{20}\] \[\left.-\frac{1}{3}\sum_{a}\frac{\langle a;2||M^{(1)}||g;1\rangle^ {*}\langle a;2||V||g;1\rangle}{E_{a,2}-E_{g,1}}\right\}+{\cal O}(V^{2})\,,\]
respectively. The reduced matrix elements of \(\vec{M}^{(1)}\) are defined as
\[\langle a;T^{\prime\prime},T^{\prime\prime}_{z}|M^{(1)}_{T_{z}}|g;1,T^{\prime }_{z}\rangle=C^{1,1;T^{\prime\prime},T^{\prime\prime}_{z}}_{1,T^{\prime}_{z} =1,T^{\prime\prime}_{z},1,T^{\prime\prime}_{z}}\langle a;T^{\prime\prime}||M^ {(1)}||g;1\rangle. \tag{21}\]
It is easy to check that the definition of \(\Delta M^{(1)}_{B}\) in Eq.(17) ensures the absence of terms \(\sim M^{(0)}\otimes V\) at \({\cal O}(V)\).
## IV Universal functions connecting all ISB observables
The dominant source of ISB is the Coulomb repulsion between protons, with its prevailing part coming from the one-body potential of a uniformly charged sphere of radius \(R_{C}\),
\[V_{C}\approx-\frac{Ze^{2}}{4\pi R_{C}^{3}}\sum_{i=1}^{A}\left(\frac{1}{2}r_{i} ^{2}-\frac{3}{2}R_{C}^{2}\right)\left(\frac{1}{2}-\hat{T}_{z}(i)\right). \tag{22}\]
For the isotriplets of interest we may take \(Z\approx A/2\), and \(R_{C}\) is related to the point-proton radius of the respective nucleus as \(R_{C}^{2}=(5/3)R_{p}^{2}\). Notice that the potential above assumes that all nucleons reside at \(r_{i}<R_{C}\). In reality, there are (small) corrections due to the non-zero nucleon wave functions at \(r_{i}>R_{C}\) where the potential behaves as \(1/r_{i}\). This residual effect could be estimated within nuclear models and included as a part of the systematic uncertainties in the theory analysis.
The ISB part of \(V_{C}\) involves only the \(\hat{T}_{z}(i)\) in the second round bracket, and is purely isovector. Furthermore, as far as the off-diagonal matrix elements are concerned, the \(R_{C}\) term in the first bracket does not contribute. One may therefore make the connection,
\[V\leftrightarrow(Ze^{2}/8\pi R_{C}^{3})M^{(1)}_{0}. \tag{23}\]
With this, both \(\delta M^{(1)}_{A,B}\) and \(\delta_{\rm C}\) share the same set of reduced matrix elements of the form \(|\langle a;T||M^{(1)}||g;1\rangle|^{2}\), which is the central result of Ref. [48]. The main difference is that \(\Delta M^{(1)}_{A,B}\) contain only one energy denominator because they arise from a first-order perturbation, while \(\delta_{\rm C}\) starts from second order and contains two energy denominators. We define the following _generating function_
\[\bar{\Gamma}_{T}(\zeta)\equiv\sum_{a\neq g}\frac{|\langle a;T||M^{(1)}||g;1 \rangle|^{2}}{E_{a,T}-\zeta}\,\ T=0,1,2 \tag{24}\]
with \(\zeta\) an energy variable. The value of \(F\) at \(\zeta=E_{g,1}\) is directly related to \(\Delta M^{(1)}_{A,B}\), while its derivative at \(\zeta=E_{g,1}\) is directly related to \(\delta_{\rm C}\).
To directly access the reduced matrix elements in Eq.(24) through nuclear theory calculations, we define a set of nuclear matrix elements for \(T_{z}=-1,0,1\) which will be the key objects of theory studies,
\[F_{T_{z}}(\zeta) \equiv \langle g;1,T_{z}|(M^{(1)}_{-1})^{\dagger}G(\zeta)M^{(1)}_{-1}|g; 1,T_{z}\rangle \tag{25}\] \[- \frac{|\langle g;1,T_{z}-1|M^{(1)}_{-1}|g;1,T_{z}\rangle|^{2}}{ \zeta-E_{g,1}}\,\]
with \(G(\zeta)=1/(\zeta-H_{0})\) the nuclear Green's function. The second term on the right hand side subtracts out the \(a=g\), \(T=1\) intermediate state contribution and exists only for \(T_{z}=1,0\) but not for \(T_{z}=-1\). Inserting a complete set of nuclear states to the first term, we get
\[F_{1}(\zeta) = -\frac{1}{3}\bar{\Gamma}_{0}(\zeta)-\frac{1}{2}\bar{\Gamma}_{1}( \zeta)-\frac{1}{6}\bar{\Gamma}_{2}(\zeta)\] \[F_{0}(\zeta) = -\frac{1}{2}\bar{\Gamma}_{1}(\zeta)-\frac{1}{2}\bar{\Gamma}_{2}(\zeta)\] \[F_{-1}(\zeta) = -\bar{\Gamma}_{2}(\zeta). \tag{26}\]
These three matrix elements can be solved for all three \(\bar{\Gamma}_{T}(\zeta)\). To connect to our ISB observables we expand the functions \(F_{T_{z}}\) around \(\zeta=E_{g,1}\),
\[F_{T_{z}}(\zeta)=\alpha_{T_{z}}+\beta_{T_{z}}(\zeta-E_{g,1})+{\cal O}((\zeta-E _{g,1})^{2})\, \tag{27}\]
where \(\alpha_{T_{z}}\), \(\beta_{T_{z}}\) are constant expansion coefficients. With these, we obtain
\[\Delta M^{(1)}_{A} = \frac{Ze^{2}}{8\pi R_{C}^{3}}\left\{\alpha_{1}+\alpha_{-1}\right\}\] \[\Delta M^{(1)}_{B} = \frac{Ze^{2}}{8\pi R_{C}^{3}}\left\{2\alpha_{1}-4\alpha_{0}+2 \alpha_{-1}\right\}\, \tag{28}\]
and
\[\delta_{\rm C}=-\left(\frac{Ze^{2}}{8\pi R_{C}^{3}}\right)^{2}\left\{\beta_{1 }-\beta_{-1}\right\}. \tag{29}\]
The interconnection between \(\delta_{\rm C}\) and the electroweak nuclear radii through the universal function \(F_{T_{z}}(\zeta)\) is largely model-independent and may be viewed as a kind of sum rule. A theory calculation of \(F_{T_{z}}(\zeta)\) that gives all the expansion coefficients coherently will be able to predict \(\delta_{\rm C}\), and at the same time receive direct experimental constraints from \(\Delta M^{(1)}_{A,B}\). This allows us to rigorously quantify the theory uncertainties in \(\delta_{\rm C}\).
## V Possible computational strategy
We now briefly discuss possible strategies to compute \(F_{T_{z}}(\zeta)\). The second term on the right hand side of Eq.(25) involves only a ground state matrix element \(\langle g;1,T_{z}-1|M^{(1)}_{-1}|g;1,T_{z}\rangle\) which is relatively straightforward to compute. The first term with the nuclear Green's function \(G(\zeta)\) is more challenging as it involves the inversion of the Hamiltonian matrix which is computationally demanding. There exist a number of methods to efficiently compute its matrix elements as a function of \(\zeta\), and here we provide one example based on the Lanczos algorithm [60; 61; 62].
We start by defining a properly-normalized starter state,
\[|\phi_{0}\rangle\equiv\frac{M^{(1)}_{-1}|g;1,T_{z}\rangle}{\sqrt{\langle g;1,T _{z}|(M^{(1)}_{-1})^{\dagger}M^{(1)}_{-1}|g;1,T_{z}\rangle}}. \tag{30}\]
With this, the first term of \(F_{T_{z}}(\zeta)\) can be written as
\[\langle g;1,T_{z}|(M^{(1)}_{-1})^{\dagger}G(\zeta)M^{(1)}_{-1}|g;1,T_{z}\rangle \tag{31}\] \[= \langle g;1,T_{z}|(M^{(1)}_{-1})^{\dagger}M^{(1)}_{-1}|g;1,T_{z} \rangle\langle\phi_{0}|G(\zeta)|\phi_{0}\rangle\.\]
Again, the coefficient \(\langle g;1,T_{z}|(M^{(1)}_{-1})^{\dagger}M^{(1)}_{-1}|g;1,T_{z}\rangle\) only involves the ground state matrix element, while \(\langle\phi_{0}|G(\zeta)|\phi_{0}\rangle\) is more complicated. To evaluate the latter, we construct a set of \(n\) orthonormal Lanczos basis \(\{|\phi_{i}\rangle\}_{i=0}^{n-1}\) through the following iteration:
\[|w_{i+1}\rangle\equiv b_{i+1}|\phi_{i+1}\rangle\equiv H_{0}|\phi_{i}\rangle-a_ {i}|\phi_{i}\rangle-b_{i}|\phi_{i-1}\rangle\, \tag{32}\]
where
\[a_{i}\equiv\langle\phi_{i}|H_{0}|\phi_{i}\rangle\,\ b_{i}\equiv\sqrt{\langle w _{i}|w_{i}\rangle} \tag{33}\]
are the so-called Lanczos coefficients, with \(b_{0}\equiv 0\) and \(|\phi_{-1}\rangle\equiv 0\). The Hamiltonian \(H_{0}\) is tridiagonalized under such basis, and the desired matrix element can be expressed as a continuous fraction involving the Lanczos coefficients,
\[\langle\phi_{0}|G(\zeta)|\phi_{0}\rangle=g_{0}(\zeta)\, \tag{34}\]
defined via the following recursion relation,
\[g_{i}(\zeta)=\frac{1}{\zeta-a_{i}-b_{i+1}^{2}g_{i+1}(\zeta)}\,\ i=0,1,...,n-2, \tag{35}\]
which terminates at \(g_{n-1}(\zeta)=1/(\zeta-a_{n-1})\). For completeness, we also provide the recursion relation of the first \(\zeta\)-derivative,
\[g^{\prime}_{i}(\zeta)=-g^{2}_{i}(\zeta)\left(1-b_{i+1}^{2}g^{\prime}_{i+1}( \zeta)\right)\,\ i=0,1,...,n-2, \tag{36}\]
with \(g^{\prime}_{n-1}(\zeta)=-g^{2}_{n-1}(\zeta)\).
The procedure above determines \(F_{T_{z}}(\zeta)\) completely in terms of two ground-state matrix elements \(\langle g;1,T_{z}|(M^{(1)}_{-1})^{\dagger}M^{(1)}_{-1}|g;1,T_{z}\rangle\), \(\langle g;1,T_{z}-1|M^{(1)}_{-1}|g;1,T_{z}\rangle\) and the Lanczos coefficients \(\{a_{i},b_{i}\}\), none of which requires a matrix inversion. In particular, the expansion
coefficients of our interest can be written as
\[\alpha_{T_{z}} = \bigg{\{}\langle g;1,T_{z}|(M_{-1}^{(1)})^{\dagger}M_{-1}^{(1)}|g;1, T_{z}\rangle g_{0}(\zeta)\] \[-\frac{|\langle g;1,T_{z}-1|M_{-1}^{(1)}|g;1,T_{z}\rangle|^{2}}{ \zeta-E_{g,1}}\bigg{\}}_{\zeta=E_{g,1}}\] \[\beta_{T_{z}} = \bigg{\{}\langle g;1,T_{z}|(M_{-1}^{(1)})^{\dagger}M_{-1}^{(1)}|g ;1,T_{z}\rangle g_{0}^{\prime}(\zeta) \tag{37}\] \[+\frac{|\langle g;1,T_{z}-1|M_{-1}^{(1)}|g;1,T_{z}\rangle|^{2}}{ (\zeta-E_{g,1})^{2}}\bigg{\}}_{\zeta=E_{g,1}}\,\]
with \(g_{0}(\zeta)\) and \(g_{0}^{\prime}(\zeta)\) entirely fixed by the Lanczos coefficients following Eqs. (35), (36). Within this formalism, \(\delta_{\rm C}\) is tightly constrained by the experimental observables. To make a prediction of \(\Delta M_{A,B}^{(1)}\), one needs to compute the two ground-state matrix elements and all the Lanczos coefficients. Once this is done, there is no more freedom left for \(\delta_{\rm C}\) which at this point can also be computed. The predicted values of \(\Delta M_{A,B}^{(1)}\) can be compared with the experiment, and the respective deviation and experimental uncertainty may be directly translated into the well-justified uncertainty estimate for \(\delta_{\rm C}\).
We stress that the strategy outlined above is, as such, model-independent. Putting it into practice requires microscopic nuclear theory calculations of the ground-state matrix elements and the Lanczos coefficients, preferably with ab-initio methods. For light nuclei, methods such as Quantum Monte Carlo [63; 64] and no-core shell model [65] are powerful tools; for medium-size nuclei, coupled-cluster theory [66], In-Medium Similarity Renormalization Group [67] and nuclear lattice effective field theory [68; 69; 70; 71] may be applicable. Notice that, for light nuclei (\(A\sim 10\)) some of our basic assumptions on ISB interactions (e.g. the isovector dominance) may not be as solid, but the definition of \(F_{T_{z}}(\zeta)\) through Eq.(25) is not affected by these assumptions and it can be computed nonetheless, which serves as important prototypes for future computations involving heavier nuclei. While the outlined strategy is not based on a model, it uses several approximations. Such approximations include the identification of the ISB potential \(V\) with the isovector monopole operator \(\vec{M}^{(1)}\), the assumption of the uniform-sphere proton distribution, and the neglect of the isotensor part of \(V\). The validity of these approximations should be subject of future studies.
## VI Conclusions
Despite the quoted high precision level of \(|V_{ud}|_{0^{+}}\) in the literature, it has now become increasingly transparent that there could be hidden systematic uncertainties at the order \(10^{-4}\) or larger, which were not reflected in the current error budget and are crucial for precision tests of SM at low energies. In particular, existing theory calculations of the ISB correction \(\delta_{\rm C}\) to the Fermi matrix element are model-dependent and, as we point out in this paper, may not be consistent with general constraints from isospin symmetry. We show that the ability of a nuclear theory approach to predict nuclear mass splittings does not imply the same predictive power for \(\delta_{\rm C}\): the former depend primarily on ground-state diagonal nuclear matrix elements, while the latter must involve excited states. On the other hand, the new ISB observables \(\Delta M_{A,B}^{(1)}\) introduced in Ref.[48] are constructed from measurable electroweak nuclear radii, and probe the same nuclear matrix elements as \(\delta_{\rm C}\). Therefore, it is more natural to gauge the theory accuracy for \(\delta_{\rm C}\) using \(\Delta M_{A,B}^{(1)}\), rather than the IMME coefficients.
Existing ab-initio studies of \(\delta_{\rm C}\) consist mainly of direct computations of the full Fermi matrix element in the presence of ISB interactions. In this work we propose an alternative approach. Based on the isovector dominance of ISB interactions, we define the functions \(F_{T_{z}}(\zeta)\) that involve matrix elements of isovector monopole operators and a single nuclear Green's function. We show that the coefficients of its expansion with respect to \(\zeta\) around the ground state energy \(E_{g,1}\) give simultaneously \(\Delta M_{A,B}^{(1)}\) and \(\delta_{\rm C}\). With that, we recast the problem of an experimental-verifiable theory calculation of \(\delta_{\rm C}\) in terms of the study of the \(\zeta\)-dependence of \(F_{T_{z}}(\zeta)\). The main difficulty in such a calculation is the inversion of a large Hamiltonian matrix in the nuclear Green's function \(G(\zeta)\), which could be bypassed using mathematical techniques such as the Lanczos algorithm we described in Section V. With this strategy, both \(\delta_{\rm C}\) and \(\Delta M_{A,B}^{(1)}\) are uniquely determined from a set of ground-state nuclear matrix elements and Lanczos coefficients, and share the same level of the theoretical accuracy.
Finally, we wish to point out a similarity between the new formalism for computing \(\delta_{\rm C}\) proposed in this work, and that for the nucleus-dependent radiative correction \(\delta_{\rm NS}\) introduced in Ref.[37]. The latter depends on the generalized Compton tensor, \(T^{\mu\nu}\sim\langle f|J_{a}^{\mu}G(\zeta)J_{b}^{\nu}|i\rangle\), where \(\{J_{a}^{\mu},J_{b}^{\nu}\}\) are electroweak current operators and \(G(\zeta)\) is the same nuclear Green's function that appears in this work. The only difference with \(F_{T_{z}}\) is that the isovector monopole operators are replaced by current operators. Therefore, methodologies applicable for ab-initio calculations of \(\delta_{\rm NS}\) will also apply to \(\delta_{\rm C}\). This newly-identified similarity may help to promote simultaneous theory progress in the two quantities that are crucial for a precise extraction of \(|V_{ud}|\), and further foster the potential of nuclear beta decay experiments for discovering or constraining new physics beyond the standard model.
###### Acknowledgements.
We thank Michael Gennari for many useful conversations. The work of C.Y.S. is supported in part by the U.S. Department of Energy (DOE), Office of Science, Office of Nuclear Physics, under the FRIB Theory Al
liance award DE-SC0013617, and by the DOE grant DE-FG02-97ER41014. The work of M.G. is supported in part by EU Horizon 2020 research and innovation programme, STRONG-2020 project under grant agreement No 824093, and by the Deutsche Forschungsgemeinschaft (DFG) under the grant agreement GO 2604/3-1.
|
2302.01341 | Rough cubic Pythagorean fuzzy sets in semigroup | In this paper, we intend the concept of rough cubic Pythagorean fuzzy ideals
in the semigroup. By using this notion, we discuss lower approximation and
upper approximation of cubic Pythagorean fuzzy left (right) ideals, bi-ideals,
interior ideals, and study some of their related properties in detail. | V. Chinnadurai, A. Arulselvam | 2023-02-02T09:53:10Z | http://arxiv.org/abs/2302.01341v1 | # Rough cubic Pythagorean fuzzy sets in semigroup
###### Abstract.
In this paper, we intend the concept of rough cubic Pythagorean fuzzy ideals in the semigroup. By using this notion, we discuss lower approximation and upper approximation of cubic Pythagorean fuzzy left (right) ideals, bi-ideals, interior ideals, and study some of their related properties in detail.
Keywords: Rough set, Pythagorean fuzzy set, cubic Pythagorean fuzzy set, Rough cubic pythagorean fuzzy ideals.
AMS Subject Classification: 03E72, 20M12, 08A72, 20M05, 34C41
\({}^{1}\) V. Chinnadurai, Professor, Department of Mathematics, Annamalai University, India.
e-mail: [email protected]; ORCID: [https://orcid.org/0000-0002-6047-6348](https://orcid.org/0000-0002-6047-6348).
\({}^{2}\) A. Arulselvam, Ph.D Research scholar, Department of Mathematics, Annamalai University, India.
e-mail: [email protected]; ORCID: [https://orcid.org/0000-0001-8383-2889](https://orcid.org/0000-0001-8383-2889). of the second author \(\lx@sectionsign\) Manuscript received: Month Day, Year; accepted: Month Day, Year.
TWMS Journal of Applied and Engineering Mathematics, Vol.xx, No.xx; (c) Isik University, Department of Mathematics, 20xx; all rights reserved.
## 2. Preliminaries
The basic concepts of Rough set(RS), Pythagorean fuzzy set(PFS), Cubic Pythagorean fuzzy set(CPFS), Rough Pythagorean fuzzy set(RCPFS) are referred [11], [12], [1], [7] are respectively.
**Definition 2.1**.: _[_2_]_ _Let \(X\) be a universe of discourse, An **intuitionistic fuzzy set**(IFS) \(A\) in \(X\) is an object having the form. \(A=\left\{z,\zeta_{A}(z),\eta_{A}(z)/z\in X\right\}\). where the mapping \(\zeta:X\rightarrow[0,1]\) and \(\eta:X\rightarrow[0,1]\) represent the degree of membership and non-membership of the object \(z\in X\) to the set \(A\) respectively with the condition \(0\leq\zeta_{A}(z)+\eta_{A}(z)\leq 1\). for all \(z\in X\) for the sake of simplicity an IFS is denoted by \(A=\left(\zeta_{A}(z),\eta_{A}(z)\right)\)._
**Definition 2.2**.: _Let \(\widetilde{P}=\left(\zeta_{\widetilde{p}},\eta_{\widetilde{p}}\right)=\left\{ \left\langle z,\zeta_{\widetilde{p}}(z),\eta_{\widetilde{p}}(z)\right\rangle /z\in S\right\}\) be the Interval Valued Pythagorean fuzzy set(IVPFS) of S, where \(\zeta_{\widetilde{p}}(z)=\zeta_{\widetilde{p}^{-}}(z),\zeta_{\widetilde{p}^{ +}}(z)\) and \(\eta_{\widetilde{p}}(z)=\eta_{\widetilde{p}^{-}}(z),\eta_{\widetilde{p}^{+}}(z)\). Then RIVPFS of S is denoted as \(\text{App}\left(\widetilde{p}\right)=\left(\text{App}(\widetilde{p}),\overline {App}(\widetilde{p})\right)\) the lower approximation is defined as \(\text{App}(\widetilde{p})=\left\{\left\langle z,\zeta_{\widetilde{p}}(z), \eta_{\widetilde{p}}(z)\right\rangle/z\in S\right\}\) where \(\zeta_{\widetilde{p}}=\bigwedge\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\zeta_ {\widetilde{p}}(z^{{}^{\prime}})\) and \(\eta_{\widetilde{p}}=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\eta_{ \widetilde{p}}(z^{{}^{\prime}})\) with condition that \(0\leq\big{(}\zeta_{\widetilde{p}}(z)\big{)}^{2}+\big{(}\eta_{\widetilde{p}}(z) \big{)}^{2}\leq 1\) and the lower approximation is defined as \(\overline{App}(\widetilde{p})=\left\{\left\langle z,\overline{\zeta_{ \widetilde{p}}(z)},\overline{\eta_{\widetilde{p}}(z)}\right\rangle/z\in S\right\}\) where \(\overline{\zeta}_{\widetilde{p}}=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{ \omega}}\zeta_{\widetilde{p}}(z^{{}^{\prime}})\) and \(\overline{\eta}_{\widetilde{p}}=\bigwedge\limits_{z^{{}^{\prime}}\in[z]_{ \omega}}\eta_{\widetilde{p}}(z^{{}^{\prime}})\) with condition that \(0\leq\overline{\left(\zeta_{\widetilde{p}}(z)\right)}^{2}+\overline{\left( \eta_{\widetilde{p}}(z)\right)}^{2}\leq 1\)._
Throughout this paper \(S\) denotes the semigroup.
**Definition 2.3**.: _Let \(P_{1}^{\square}=\left(\zeta_{p_{1}^{\square}},\eta_{p_{1}^{\square}}\right)\) and \(P_{2}^{\square}=\left(\zeta_{p_{2}^{\square}},\eta_{p_{2}^{\square}}\right)\) be any two CPFS on S. Then, the composition of \(P_{1}^{\square}\) and \(P_{2}^{\square}\) is defined as \(P_{1}^{\square}\circ P_{2}^{\square}=\left(\zeta_{p_{1}^{\square}}\circ\zeta_ {p_{2}^{\square}},\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}\right)\) where, \(\left(\zeta_{p_{1}^{\square}}\circ\zeta_{p_{2}^{\square}}\right)(z)=\bigvee \limits_{z=z_{1}z_{2}}\left[\zeta_{p_{1}^{\square}}(z_{1})\bigwedge\zeta_{p_{2}^ {\square}}(z_{2})\right]\)\(\left(\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}\right)(z)=\bigwedge \limits_{z=z_{1}z_{2}}\left[\eta_{p_{1}^{\square}}(z_{1})\bigvee\eta_{p_{2}^{ \square}}(z_{2})\right]\)._
## 3. Rough cubic Pythagorean fuzzy sets (RCPFS) in semigroup
A equivalence relation \(\omega\) on \(S\) is said to be a congruence relation denoted as \(CR_{\omega}\) if for all \(x,z_{1},z_{2}\in S\) such that \(z_{1},z_{2}\in\omega\Rightarrow z_{1}x,z_{2}x\in\omega\) and \(xz_{1},xz_{2}\in\omega\) The congruence class of an object \(z\in S\) is denoted by \([z]_{\omega}\). For a \(CR_{\omega}\) on \(S\), we have \([z_{1}]_{\omega}[z_{2}]_{\omega}\subseteq[z_{1}z_{2}]_{\omega}\) and the \(CR_{\omega}\) on \(S\) is called complete if \([z_{1}]_{\omega}[z_{2}]_{\omega}=[z_{1}z_{2}]_{\omega}\). \(\forall z_{1},z_{2}\in S\).
**Definition 3.1**.: _Let \(P^{\square}=\left(\zeta_{p^{\square}},\eta_{p^{\square}}\right)=\left\{\left\langle z _{1},\left[\zeta_{\widetilde{p}}(z_{1}),\eta_{\widetilde{p}}(z_{1})\right], \left(\zeta_{p}(z_{1}),\eta_{p}(z_{1})\right)\right\rangle/z_{1}\in S\right\}\) be the CPFS in \(S\), where \(\zeta_{\widetilde{p}}(z_{1})=\left(\zeta_{p}^{-}(z_{1}),\zeta_{p}^{+}(z_{1})\right)\) and \(\eta_{\widetilde{p}}(z_{1})=\left(\eta_{p}^{-}(z_{1}),\eta_{p}^{+}(z_{1})\right)\). Then a RCPFS on \(S\) is denoted by \(\text{App}(P^{\square})=\left(\text{App}(P^{\square}),\overline{App}(P^{ \square})\right)\). The lower approximation is defined as \(\text{App}(P^{\square})=\left(\zeta_{\underline{p}^{\square}},\eta_{\underline {p}^{\square}}\right)=\left\{\overline{\left\langle z_{1},\left[\zeta_{ \widetilde{p}}(z_{1}),\eta_{\widetilde{p}}(z_{1})\right],\left(\zeta_{ \underline{p}}(z_{1}),\eta_{\underline{p}}(z_{1})\right)\right\rangle/z_{1}\in S\right\}\) where \(\zeta_{\widetilde{p}}(z)=\bigwedge\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\zeta_ {\widetilde{p}}(z^{{}^{\prime}})\) and \(\eta_{\widetilde{p}}(z)=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\eta_{ \widetilde{p}}(z^{{}^{\prime}})\)\(\zeta_{\underline{p}}(z)=\bigwedge\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\zeta_ {p}(z^{{}^{\prime}})\) and \(\eta_{\underline{p}}(z)=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\eta_{ p}(z^{{}^{\prime}})\) with the condition \(0\leq\left(\zeta_{\underline{p}}(z)\right)^{2}+\left(\eta_{\underline{p}}(z)\right)^{2}\leq 1\) and the upper approximation is defined as \(\overline{App}(P)=\left\{\left\langle z,\overline{\zeta_{p}}(z),\overline{ \eta_{p}}(z)\right\rangle/z\in S\right\}\). where \(\overline{\zeta_{\widetilde{p}}}(z)=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{ \omega}}\zeta_{\widetilde{p}}(z^{{}^{\prime}})\) and \(\eta_{\widetilde{p}}(z)=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\eta_{ p}(z^{{}^{\prime}})\) with the condition \(0\leq\left(\zeta_{\underline{p}}(z)\right)^{2}+\left(\eta_{\underline{p}}(z)\right)^{2}\leq 1\) and the upper approximation is defined as \(\overline{App}(P)=\left\{\left\langle z,\overline{\zeta_{p}}(z),\overline{ \eta_{p}}(z)\right\rangle/z\in S\right\}\). where \(\overline{\zeta_{\widetilde{p}}}(z)=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{ \omega}}\zeta_{\widetilde{p}}(z^{{}^{\prime}})\) and
\(\overline{\eta_{\widetilde{p}}}(z)=\bigwedge\limits_{z^{{}^{\prime}}\in[z]_{\omega}} \eta_{\widetilde{p}}(z^{{}^{\prime}})\ \overline{\zeta_{p}}(z)=\bigvee\limits_{z^{{}^{\prime}}\in[z]_{\omega}}\zeta_{p }(z^{{}^{\prime}})\) and \(\overline{\eta_{\widetilde{p}}}(z)=\bigwedge\limits_{z^{{}^{\prime}}\in[z]_{ \omega}}\eta_{p}(z^{{}^{\prime}})\) with the condition \(0\leq\big{(}\overline{\zeta_{p}}(z)\big{)}^{2}+\big{(}\overline{\eta_{ \widetilde{p}}}(z)\big{)}^{2}\leq 1\)._
**Proposition 3.1**.: _The lower approximation and upper approximation of the CPFS \(P^{\square}\) on \(S\) are CPFS of a quotient set \(S/\omega\)_
Proof.: The membership and non-membership grades of lower approximation i.e., \(\underline{App}(P^{\square})\) from definition 3.1 is defined as.
\(\underline{\zeta_{\widetilde{p}}}(z_{1})=\bigwedge\limits_{z^{{}^{\prime}}_{1} \in[z_{1}]_{\omega}}\zeta_{\widetilde{p}}(z^{{}^{\prime}}_{1})\) and \(\underline{\eta_{\widetilde{p}}}(z_{1})=\bigvee\limits_{z^{{}^{\prime}}_{1} \in[z_{1}]_{\omega}}\eta_{\widetilde{p}}(z^{{}^{\prime}}_{1})\), \(\underline{\zeta_{p}}(z_{1})=\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}] _{\omega}}\zeta_{p}(z^{{}^{\prime}}_{1})\) and \(\underline{\eta_{\widetilde{p}}}(z_{1})=\bigvee\limits_{z^{{}^{\prime}}_{1} \in[z_{1}]_{\omega}}\eta_{p}(z^{{}^{\prime}}_{1})\) Now, for all \(z_{1}\in[z_{1}]_{\omega}\),
we have \(\big{(}\zeta_{p^{\square}}(z_{1})\big{)}^{2}+\big{(}\eta_{p^{\square}}(z_{1}) \big{)}^{2}\)
\(\leq\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\Big{(}\zeta_{p^{ \square}}(z^{{}^{\prime}}_{1})\Big{)}^{2}+\bigvee\limits_{z^{{}^{\prime}}_{1} \in[z_{1}]_{\omega}}\Big{(}1-\Big{(}\eta_{p^{\square}}(z^{{}^{\prime}}_{1}) \Big{)}^{2}\Big{)}\)
\(\leq\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\Big{(}\zeta_{p^{ \square}}(z^{{}^{\prime}}_{1})\Big{)}^{2}+\bigvee\limits_{z^{{}^{\prime}}_{1} \in[z_{1}]_{\omega}}\Big{(}1-\Big{(}\eta_{p^{\square}}(z^{{}^{\prime}}_{1}) \Big{)}^{2}\Big{)}\)
\(=\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\Big{(}\zeta_{p^{ \square}}(z^{{}^{\prime}}_{1})\Big{)}^{2}+1-\bigwedge\limits_{z^{{}^{\prime}}_{1 }\in[z_{1}]_{\omega}}\Big{(}\eta_{p^{\square}}(z^{{}^{\prime}}_{1})\Big{)}^{2}\)
implies \(\big{(}\zeta_{p^{\square}}(z_{1})\big{)}^{2}+\big{(}\eta_{p^{\square}}(z_{1}) \big{)}^{2}\leq 1\)
Similarly, \(\overline{App}(P^{\square})\).
**Theorem 3.1**.: _Let us consider any two CPFSs \(P_{1}^{\square}=\big{\langle}\big{[}\zeta_{\widetilde{p_{1}}},\eta_{ \widetilde{p_{1}}}\big{]}\,,(\zeta_{p_{1}},\eta_{p_{1}})\big{\rangle}\) and \(P_{2}^{\square}=\big{\langle}\big{[}\zeta_{\widetilde{p_{2}}},\eta_{ \widetilde{p_{2}}}\big{]}\,,(\zeta_{p_{2}},\eta_{p_{2}})\big{\rangle}\) of \(S\) and \(\omega\) be the complete \(CR_{\omega}\) on \(S\). Then \(\underline{App}(P_{1}^{\square})\circ\underline{App}(P_{2}^{\square})\subseteq \underline{App}\left(P_{1}^{\square}\circ P_{2}^{\square}\right)\)_
Proof.: Since \(\omega\) is a complete \(CR_{\omega}\) on \(S\) so \([z_{1}]_{\omega}[z_{2}]_{\omega}=[z_{1}z_{2}]_{\omega}\) for all \(z_{1},z_{2}\in S\) As \(\underline{App}(P_{1}^{\square})=\big{\langle}\big{[}\zeta_{\widetilde{p_{1}}}, \eta_{\widetilde{p_{1}}}\big{]}\,,(\zeta_{p_{1}},\eta_{p_{1}})\big{\rangle}\) and \(\underline{App}(P_{2}^{\square})=\big{\langle}\big{[}\zeta_{\widetilde{p_{2}}}, \eta_{\widetilde{p_{2}}}\big{]}\,,(\zeta_{p_{2}},\eta_{p_{2}})\big{\rangle}\). Now, \(\underline{App}(P_{1}^{\square})\circ\underline{App}(P_{2}^{\square})=\big{(} \zeta_{\widetilde{p_{1}}^{\square}}\circ\zeta_{p_{2}^{\square}},\eta_{ \widetilde{p_{1}}^{\square}}\circ\eta_{p_{2}^{\square}}\big{)}\) and \(\underline{App}\left(P_{1}^{\square}\circ P_{2}^{\square}\right)=\Big{(}\Big{(} \zeta_{p_{1}^{\square}}\circ\zeta_{p_{2}^{\square}}\Big{)}\,,\Big{(}\eta_{p_{1}^{ \square}}\circ\eta_{p_{2}^{\square}}\Big{)}\Big{)}\).To show that \(\underline{App}(P_{1}^{\square})\circ\underline{App}(P_{2}^{\square})\subseteq \underline{App}\left(P_{1}^{\square}\circ P_{2}^{\square}\right)\), we have to prove that \(\Big{[}\underline{\zeta_{p_{1}^{\square}}\circ\underline{\zeta_{p_{2}^{ \square}}}}\Big{]}\,(z_{1})\leq\Big{(}\underline{\zeta_{p_{1}^{\square}}\circ \underline{\zeta_{p_{2}^{\square}}}}\Big{)}\,(z_{1})\) and \(\Big{[}\underline{\eta_{p_{1}^{\square}}\circ\underline{\eta_{p_{2}^{\square}}}} \Big{]}\,(z_{1})\geq\Big{(}\underline{\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{ \square}}}\Big{)}\,(z_{1})\) Now, for all \(z\in S\)
\(\leq\bigvee\limits_{z_{1}^{\prime}\in[z_{1}]_{\omega}}\eta_{p}(z^{{}^{\prime}}_{1})\) Now, for all \(z_{1}\in[z_{1}]_{\omega}\),
we have \(\big{(}\zeta_{p^{\square}}(z_{1})\big{)}^{2}+\big{(}\eta_{p^{\square}}(z_{1}) \big{)}^{2}\)
\(=\left\{\left\langle\left[\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{ \omega}}\zeta_{\widetilde{p}}(z^{{}^{\prime}}_{1})\right]^{2}+\left[ \bigvee\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\eta_{\widetilde{p}}(z^{{}^{ \prime}}_{1})\right]^{2}\right\rangle,\left(\bigwedge\limits_{z^{{}^{\prime}}_{1} \in[z_{1}]_{\omega}}\zeta_{p}(z^{{}^{\prime}}_{1})\right)^{2}+\left(\bigvee \limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\eta_{p}(z^{{}^{\prime}}_{1}) \right)^{2}\right\}\)
\(=\left\langle\left[\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\zeta_{ \widetilde{p}}(z^{{}^{\prime}}_{1}),\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{ \omega}}\zeta_{p}(z^{{}^{\prime}}_{1})\right]^{2}+\left[\bigvee\limits_{z^{{}^{ \prime}}_{1}\in[z_{1}]_{\omega}}\eta_{\widetilde{p}}(z^{{}^{\prime}}_{1}),\bigvee\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\eta_{p}(z^{{}^{ \prime}}_{1})\right]^{2}\right\rangle\)
\(=\bigwedge\limits_{z^{{}^{\prime}}_{1}\in[z_{1}]_{\omega}}\Big{[}\zeta_{p^{ \square}}(z^{{}^{\prime}}_
\[=\bigvee_{z=z_{1}z_{2}}\left[\left(\bigwedge_{M\in[z_{1}]_{\omega}} \zeta_{p_{1}^{\square}}(M)\right)\bigwedge\left(\bigwedge_{N\in[z_{2}]_{\omega}} \zeta_{p_{2}^{\square}}(N)\right)\right]\] \[=\bigvee_{z=z_{1}z_{2}}\left[\bigwedge_{M\in[z_{1}]_{\omega}N\in[z _{2}]_{\omega}}\left(\zeta_{p_{1}^{\square}}(M)\bigwedge\zeta_{p_{2}^{\square}}( N)\right)\right]\] \[\leq\bigvee_{z=z_{1}z_{2}}\left[\bigwedge_{MN\in[z_{1}z_{2}]_{ \omega}}\left(\zeta_{p_{1}^{\square}}(M)\bigwedge\zeta_{p_{2}^{\square}}(N) \right)\right]\hskip 28.452756pt\text{as }MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}=[z_{1}z_{2}]_{\omega}\] \[=\bigvee_{MN\in[z]_{\omega}}\left(\zeta_{p_{1}^{\square}}(M) \bigwedge\zeta_{p_{2}^{\square}}(N)\right)\] \[=\bigvee_{\lambda\in[z]_{\omega}}\left[\bigvee_{\lambda=MN} \left(\zeta_{p_{1}^{\square}}(M)\bigwedge\zeta_{p_{2}^{\square}}(N)\right)\right]\] \[=\bigvee_{\lambda\in[z]_{\omega}}\left[\left(\zeta_{p_{1}^{ \square}}\circ\zeta_{p_{2}^{\square}}\right)(\lambda)\right]\]
implies \(\left[\underbrace{\zeta_{p_{1}^{\square}}\circ\zeta_{p_{2}^{\square}}}_{p_{2} ^{\square}}\right](z)\leq\left[\underbrace{\zeta_{p_{1}^{\square}}\circ\zeta_{ p_{2}^{\square}}}_{p_{2}}\right](z)\).
Further
\(\left[\underbrace{\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}}_{p_{2}^{ \square}}\right](z)=\bigwedge_{z=z_{1}z_{2}}\left(\eta_{p_{1}^{\square}}(z_{1}) \vee\eta_{p_{2}^{\square}}(z_{2})\right)\)
\(=\bigwedge_{z=z_{1}z_{2}}\left[\left(\bigvee_{\widetilde{p_{1}}}(z_{1}),\eta_ {\widetilde{p_{1}}}(z_{1})\right)\overline{\bigvee\left(\eta_{\widetilde{p_{2 }}}(z_{2}),\eta_{\widetilde{p_{2}}}(z_{2})\right)}\right]\)
\(=\bigwedge_{z=z_{1}z_{2}}\left[\left(\bigvee_{M\in[z_{1}]_{\omega}}\eta_{ \widetilde{p_{1}}}(M),\bigvee_{M\in[z_{1}]_{\omega}}\eta_{p_{1}}(M)\right) \bigvee\left(\bigvee_{N\in[z_{2}]_{\omega}}\eta_{\widetilde{p_{2}}}(N), \bigvee_{N\in[z_{2}]_{\omega}}\eta_{p_{2}}(N)\right)\right]\)
\(=\bigwedge_{z=z_{1}z_{2}}\left[\bigvee_{M\in[z_{1}]_{\omega}N\in[z_{2}]_{\omega }}\left(\eta_{p_{1}^{\square}}(M)\bigvee\eta_{p_{2}^{\square}}(N)\right)\right]\)
\(\geq\bigwedge_{z=z_{1}z_{2}}\left[\bigvee_{MN\in[z_{1}z_{2}]_{\omega}}\left( \eta_{p_{1}^{\square}}(M)\bigvee\eta_{p_{2}^{\square}}(N)\right)\right]\) as \(MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}=[z_{1}z_{2}]_{\omega}\)
\(=\bigwedge_{MN\in[z]_{\omega}}\left(\eta_{p_{1}^{\square}}(M)\bigvee\eta_{p_{2 }^{\square}}(N)\right)\)
\(=\bigwedge_{\lambda\in[z]_{\omega}}\left[\bigwedge_{\lambda=MN}\left(\eta_{p_{ 1}^{\square}}(M)\bigvee\eta_{p_{2}^{\square}}(N)\right)\right]\)
\(=\bigwedge_{\lambda\in[z]_{\omega}}\left[\left(\eta_{p_{1}^{\square}}\circ \eta_{p_{2}^{\square}}\right)(\lambda)\right]\)
implies \(\left[\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}\right](z)\geq\left[ \eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}\right](z)\).
Hence, \(\underline{\underline{App}}(P_{1}^{\square})\circ\underline{\underline{App}}(P_ {2}^{\square})\subseteq\underline{\underline{App}}(P_{1}^{\square}\circ P_{2}^ {\square})\).
**Theorem 3.2**.: _Let \(P_{1}^{\square},P_{2}^{\square}\) be any two CPFSs on \(S\). Then \(P_{1}^{\square}=\left\langle\left[\zeta_{\widetilde{p_{1}}},\eta_{\widetilde{p_{ 1}}}\right],(\underline{\zeta_{p_{1}}},\eta_{p_{1}})\right\rangle\) and \(P_{2}^{\square}=\left\langle\left[\zeta_{\widetilde{p_{2}}},\eta_{\widetilde{p_{ 2}}}\right],(\zeta_{p_{2}},\eta_{p_{2}})\right\rangle\) of \(S\) and let \(\omega\) be the complete \(CR_{\omega}\) on \(S\). Then \(\underline{\underline{App}}(P_{1}^{\square})\circ\overline{\underline{App}}(P_ {2}^{\square})\subseteq\underline{\underline{App}}\left(P_{1}^{\square}\circ P_ {2}^{\square}\right)\)
Proof.: Since \(\omega\) is a complete \(CR_{\omega}\) on \(S\) so \([z_{1}]_{\omega}[z_{2}]_{\omega}\subseteq[z_{1}z_{2}]_{\omega}\) for all \(z_{1},z_{2}\in S\) As \(\overline{App}(P_{1}^{\square})=\left\langle\left[\zeta_{\widehat{p_{1}}}, \eta_{\widehat{p_{1}}}\right],\left(\zeta_{p_{1}},\eta_{p_{1}}\right)\right\rangle\) and \(\overline{App}(P_{2}^{\square})=\left\langle\left[\zeta_{\widehat{p_{2}}}, \eta_{\widehat{p_{2}}}\right],\left(\zeta_{p_{2}},\eta_{p_{2}}\right)\right\rangle\). Now, \(\overline{App}(P_{1}^{\square})\circ\overline{App}(P_{2}^{\square})=\left( \zeta_{p_{1}^{\square}}\circ\zeta_{p_{2}^{\square}},\eta_{p_{1}^{\square}}\circ \eta_{p_{2}^{\square}}\right)\) and \(\overline{App}\left(P_{1}^{\square}\circ P_{2}^{\square}\right)=\left(\left( \zeta_{p_{1}^{\square}}\circ\zeta_{p_{2}^{\square}}\right),\left(\eta_{p_{1}^{ \square}}\circ\eta_{p_{2}^{\square}}\right)\right)\).To show that \(\overline{App}(P_{1}^{\square})\circ\overline{App}(P_{2}^{\square})\subseteq \overline{App}\left(P_{1}^{\square}\circ P_{2}^{\square}\right)\), we have to prove that \(\left[\overline{\zeta_{p_{1}^{\square}}}\circ\overline{\zeta_{p_{2}^{\square} }}\right]\left(z_{1}\right)\leq\left(\overline{\zeta_{p_{1}^{\square}}\circ \zeta_{p_{2}^{\square}}}\right)\left(z_{1}\right)\) and \(\left[\overline{\eta_{p_{1}^{\square}}}\circ\overline{\eta_{p_{2}^{\square} }}\right]\left(z_{1}\right)\)\(\geq\left(\overline{\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}}\right) \left(z_{1}\right)\) Now, for all \(z\in S\)
\(\left[\overline{\zeta_{p_{1}^{\square}}}\circ\overline{\zeta_{p_{2}^{\square} }}\right]\left(z\right)=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left(\overline{ \zeta_{p_{1}^{\square}}}(z_{1})\wedge\overline{\zeta_{p_{2}^{\square}}}(z_{2} )\right)\)
\(=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\left(\zeta_{ \widehat{p_{1}}}(z_{1}),\overline{\zeta_{p_{1}}}(z_{1})\right)\wedge\left( \overline{\zeta_{\widehat{p_{2}}}}(z_{2}),\overline{\zeta_{p_{2}}}(z_{2}) \right)\right]\)
\(=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\left(\underset{M\in[z_{1}]_{ \omega}}{\mathop{\sum}}\zeta_{\widehat{p_{1}}}(M),\underset{M\in[z_{1}]_{ \omega}}{\mathop{\sum}}\zeta_{p_{1}}(M)\right)\wedge\left(\underset{N\in[z_{ 2}]_{\omega}}{\mathop{\sum}}\zeta_{\widehat{p_{2}}}(N),\underset{N\in[z_{2}]_{ \omega}}{\mathop{\sum}}\zeta_{p_{2}}(N)\right)\right]\)
\(=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\underset{MN\in[z_{1}]_{ \omega}}{\mathop{\sum}}\left(\zeta_{p_{1}^{\square}}(M)\wedge\zeta_{p_{2}^{ \square}}(N)\right)\right]\) as \(MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}=[z_{1}z_{2}]_{\omega}\)
\(=\underset{MN\in[z]_{\omega}}{\mathop{\sum}}\left(\zeta_{p_{1}^{\square}}(M) \wedge\zeta_{p_{2}^{\square}}(N)\right)\)
\(=\underset{\lambda\in[z]_{\omega}}{\mathop{\sum}}\left(\zeta_{p_{1}^{ \square}}(M)\wedge\zeta_{p_{2}^{\square}}(N)\right)\)
\(=\underset{\lambda\in[z]_{\omega}}{\mathop{\sum}}\left[\left(\zeta_{p_{1}^{ \square}}\circ\zeta_{p_{2}^{\square}}\right)(\lambda)\right]\)
implies \(\left[\overline{\zeta_{p_{1}^{\square}}}\circ\overline{\zeta_{p_{2}^{\square} }}\right]\left(z\right)\leq\left[\overline{\zeta_{p_{1}^{\square}}\circ \zeta_{p_{2}^{\square}}}\right]\left(z\right)\).
Further
\(\left[\overline{\eta_{p_{1}^{\square}}}\circ\overline{\eta_{p_{2}^{\square}}} \right]\left(z\right)=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left(\overline{ \eta_{p_{1}^{\square}}}(z_{1})\vee\overline{\eta_{p_{2}^{\square}}}(z_{2})\right)\)
\(=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\left(\overline{\eta_{ \widetilde{p_{1}}}}(z_{1}),\overline{\eta_{p_{1}}}(z_{1})\right)\vee\left( \overline{\eta_{\widetilde{p_{2}}}}(z_{2}),\overline{\eta_{p_{2}}}(z_{2}) \right)\right]\)
\(=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\left(\underset{M\in[z_{1}]_{ \omega}}{\mathop{\sum}}\eta_{\widetilde{p_{1}}}(M),\underset{M\in[z_{1}]_{ \omega}}{\mathop{\sum}}\eta_{p_{1}}(M)\right)\vee\left(\underset{N\in[z_{2}]_{ \omega}}{\mathop{\sum}}\eta_{\widetilde{p_{2}}}(N),\underset{N\in[z_{2}]_{ \omega}}{\mathop{\sum}}\eta_{p_{2}}(N)\right)\right]\)
\(=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\left(\underset{M\in[z_{1}]_{ \omega}}{\mathop{\sum}}\eta_{p_{1}^{\square}}(M)\right)\vee\left(\underset{N \in[z_{2}]_{\omega}}{\mathop{\sum}}\eta_{p_{2}^{\square}}(N)\right)\right]\)
\(=\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\underset{M\in[z_{1}]_{ \omega}}{\mathop{\sum}}N\in[z_{2}]_{\omega}\left(\eta_{p_{1}^{\square}}(M)\lor \eta_{p_{2}^{\square}}(N)\right)\right]\)
\(\geq\underset{z=z_{1}z_{2}}{\mathop{\sum}}\left[\underset{MN\in[z_{1}z_{2}]_{ \omega}}{\mathop{\sum}}\left(\eta_{p_{1}^{\square}}(M)\vee\eta_{p_{2}^{\square}}( N)\right)\right]\) as \(MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}=[z_{1}z_{2}]_{\omega}\)
\(=\underset{MN\in[z]_{\omega}}{\mathop{\sum}}\left(\eta_{p_{1}^{\square}}(M)\lor \eta_{p_{2}^{\square}}(N)\right)\)
\[=\bigwedge_{\lambda\in[z]_{\omega},\lambda=MN}\left(\eta_{p_{1}^{\square}}(M) \bigvee\eta_{p_{2}^{\square}}(N)\right)\] \[=\bigwedge_{\lambda\in[z]_{\omega}}\left[\bigwedge_{\lambda=MN} \left(\eta_{p_{1}^{\square}}(M)\bigvee\eta_{p_{2}^{\square}}(N)\right)\right]\] \[=\bigwedge_{\lambda\in[z]_{\omega}}\left[\left(\eta_{p_{1}^{ \square}}\circ\eta_{p_{2}^{\square}}\right)(\lambda)\right]\]
implies \(\left[\overline{\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}}\right](z) \geq\left[\overline{\eta_{p_{1}^{\square}}\circ\eta_{p_{2}^{\square}}}\right](z)\). Hence, \(\overline{App}(P_{1}^{\square})\circ\overline{App}(P_{2}^{\square})\subseteq \overline{App}(P_{1}^{\square}\circ P_{2}^{\square})\).
## 4. Rough cubic Pythagorean fuzzy ideals (RCPFI) in semigroup.
In this section, \(P_{LI}^{\square}\), \(P_{RI}^{\square}\), \(P_{I}^{\square}\), \(P_{BI}^{\square}\), \(P_{II}^{\square}\) cubic Pythagorean fuzzy left ideal, cubic Pythagorean fuzzy right ideal, cubic Pythagorean fuzzy ideal, cubic Pythagorean fuzzy bi-ideal and cubic Pythagorean fuzzy interior-ideal are respectively.
**Definition 4.1**.: _Let \(\omega\) be a \(CR_{\omega}\) on \(S\) and \(P^{\square}\) be a CPFS. Then \(P^{\square}\) is called lower (resp.upper) rough cubic Pythagorean fuzzy sub-semigroup of \(S\), if \(\underline{App}(P^{\square})\) (resp.\(\overline{App}(P^{\square})\)) is a cubic Pythagorean fuzzy sub-semigroup of \(S\). A cubic Pythagorean fuzzy set \(P^{\square}\) is known to be rough cubic Pythagorean fuzzy sub-semigroup of \(S\), if \(\underline{App}(P^{\square})\) and \(\overline{App}(P^{\square})\) are both Pythagorean fuzzy sub-semigroup of \(S\)._
**Definition 4.2**.: _Let \(\omega\) be a \(CR_{\omega}\) on \(S\) and \(P^{\square}\) be a CPFS. Then \(P^{\square}\) is called lower rough \(P_{LI}^{\square}\) (resp.\(P_{RI}^{\square}\),\(P_{I}^{\square}\)) of \(S\), if \(\underline{App}(P^{\square})\) is a \(P_{LI}^{\square}\) (resp.\(P_{RI}^{\square}\),\(P_{I}^{\square}\)) of \(S\) and \((i)\)\(\zeta_{\widehat{p}}(xy)\geq\zeta_{\widehat{p}}(y)\); \(\zeta_{\underline{p}}(xy)\leq\zeta_{\underline{p}}(y)\)\(\forall\)\(x,y\in\ S\)\((ii)\)\(\overline{\eta_{\widehat{p}}}(xy)\geq\overline{\eta_{\widehat{p}}}(y)\),\(\overline{\eta_{\widehat{p}}}(xy)\leq\overline{\eta_{\widehat{p}}}(y)\)\(\forall\)\(x,y\in\ S\)_
**Definition 4.3**.: _Let \(\omega\) be a \(CR_{\omega}\) on \(S\) and \(P^{\square}\) be a CPFS. Then \(P^{\square}\) is called upper rough \(P_{LI}^{\square}\) (resp.\(P_{RI}^{\square}\),\(P_{I}^{\square}\)) of \(S\), if \(\overline{App}(P^{\square})\) is a \(P_{LI}^{\square}\) (resp.\(P_{RI}^{\square}\),\(P_{I}^{\square}\)) of \(S\) and \((i)\)\(\overline{\zeta_{\widehat{p}}}(xy)\geq\overline{\zeta_{\widehat{p}}}(y)\); \(\overline{\zeta_{p}}(xy)\leq\overline{\zeta_{p}}(y)\)\(\forall\)\(x,y\in\ S\)\((ii)\)\(\overline{\eta_{\widehat{p}}}(xy)\geq\overline{\eta_{\widehat{p}}}(y)\),\(\overline{\eta_{\widehat{p}}}(xy)\leq\overline{\eta_{\widehat{p}}}(y)\)\(\forall\)\(x,y\in\ S\)\((i)\)\(\overline{\eta_{\widehat{p}}}(xy)\geq\overline{\eta_{\widehat{p}}}(y)\),\(\overline{\eta_{\widehat{p}}}(xy)\leq\overline{\eta_{\widehat{p}}}(y)\)\(\forall\)\(x,y\in\ S\)_
**Definition 4.4**.: _Let \(P^{\square}\) be a CPFS and \(\omega\) be a \(CR_{\omega}\) on \(S\). Then \(P^{\square}\) is called lower(resp. upper) rough \(P_{BI}^{\square}\) of \(S\), if \(\underline{App}(P^{\square})\) (resp. \(\overline{App}(P^{\square})\)) is a \(P_{BI}^{\square}\) of \(S\) and \((i)\)\(\zeta_{\widehat{p}}(xyz)\geq min\left\{\zeta_{\widehat{p}}(x),\zeta_{\widehat{p}} (z)\right\}\)\(\forall x,y,z\in\ S.\)\((ii)\)\(\eta_{\widehat{p}}(xyz)\geq min\left\{\eta_{\widehat{p}}(x),\eta_{\widehat{p}}(z)\right\}\)\(\forall x,y,z\in\ S.\)\((iii)\)\(\zeta_{p}(xyz)\leq max\left\{\zeta_{p}(x),\zeta_{p}(z)\right\}\)\(\forall x,y,z\in\ S.\)\((iv)\)\(\eta_{p}(xyz)\leq max\left\{\eta_{p}(x),\eta_{p}(z)\right\}\)\(\forall x,y,z\in\ S.\)_
**Definition 4.5**.: _Let \(P^{\square}\) be a CPFS and \(\omega\) be a \(CR_{\omega}\) on \(S\). Then \(P^{\square}\) is called lower(resp. upper) rough \(P_{II}^{\square}\) of \(S\), if \(\underline{App}(P^{\square})\) (resp. \(\overline{App}(P^{\square})\)) is a \(P_{II}^{\square}\) of \(S\) and \((i)\)\(\zeta_{\widehat{p}}(xyz)\geq\zeta_{\widehat{p}}(y)\)\(\forall x,y,z\in\ S.\)\((ii)\)\(\eta_{\widehat{p}}(xyz)\geq\eta_{\widehat{p}}(y)\)\(\forall x,y,z\in\ S.\)\((iii)\)\(\zeta_{p}(xyz)\leq\zeta_{p}(y)\)\(\forall x,y,z\in\ S.\)\((iv)\)\(\eta_{p}(xyz)\leq\eta_{p}(y)\)\(\forall x,y,z\in\ S.\)_
**Theorem 4.1**.: _Let \(\omega\) is a \(CR_{\omega}\) on \(S\) and \(P^{\square}\) be a cubic Pythagorean fuzzy sub-semigroup of \(S\). Then \(\overline{App}(P^{\square})\) is a cubic Pythagorean fuzzy sub-semigroup of \(S\)._
Proof.: Since \(\omega\) is a \(CR_{\omega}\) on \(S\), for all \(z_{1},z_{2}\in S\), we have \([z_{1}][z_{2}]\subseteq[z_{1}z_{2}]_{\omega}\). Now, we have to show that \(\overline{App}(P^{\square})=\left(\overline{\zeta_{p^{\square}}},\overline{ \eta_{p^{\square}}}\right)\) is a cubic Pythagorean fuzzy sub-semigroup of \(S\).
\[\frac{S}{\zeta_{\widehat{p}}}(z_{1},z_{2}) =\bigvee_{z_{3}\in[z_{1}z_{2}]_{\omega}}\zeta_{\widehat{p}}(z_{3}) \geq\bigvee_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widehat{p}}(z_{3})\] \[=\bigvee_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widehat{p} }(MN)\] \[\geq\bigvee_{M\in[z_{1}]_{\omega},N\in[z_{2}]_{\omega}}\big{[}\zeta _{\widehat{p}}(M)\bigwedge\zeta_{\widehat{p}}(N)\big{]}\] \[=\left[\bigvee_{M\in[z_{1}]_{\omega}}\big{[}\zeta_{\widehat{p}}(M )\big{]}\right]\bigwedge\left[\bigvee_{N\in[z_{2}]_{\omega}}\zeta_{\widehat{p }}(N)\right]\] \[\text{implies }\overline{\zeta_{\widehat{p}}}(z_{1},z_{2})\geq min \left\{\overline{\zeta_{\widehat{p}}}(z_{1}),\overline{\zeta_{\widehat{p}}}(z_ {2})\right\}\] \[\leq\bigvee_{z_{3}\in[z_{1}]_{\omega}}\zeta_{p}(z_{3})\] \[=\bigvee_{M\in[z_{1}]_{\omega}}\zeta_{\widehat{p}}(M),\overline {\zeta_{\widehat{p}}}(N)\big{\}}\] \[\leq\bigvee_{M\in[z_{1}]_{\omega}}\left[\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{}\big{[}\big{[}\big{[}\big{}\big{[} \big{[}\big{[}\big{}\big{[}\big{}\big{[}\big{}\big{[}\big{[}\big{[}\big{}\big{[} \big{[}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{} \big{[}\big{[}\big{}\big{[}\big{[}\big{[}\big{}\big{[}\big{}\big{[}\big{[}\big{} \big{[}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{} \big{[}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{[}\big{}\big{[}\big{} \big{[}\big{[}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{[}\big{[}\big{ [}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{}\big{[}\big{[}\big{} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[}\big{[} \big{
\[=\bigvee_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widetilde{p}}(MN)\geq\bigvee _{N\in[z_{2}]_{\omega}}\zeta_{\widetilde{p}}(N)\]
implies \(\overline{\zeta_{\widetilde{p}}}(z_{1}z_{2})\geq\overline{\zeta_{\widetilde{p}} }(z_{2})\)
\[\overline{\zeta_{p}}(z_{1}z_{2}) =\bigvee_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{p}(z_{3 })\leq\bigvee_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{p}(z_{3})\] \[=\bigvee_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{p}(MN)\leq \bigvee_{N\in[z_{2}]_{\omega}}\zeta_{p}(N)\]
implies \(\overline{\zeta_{p}}(z_{1}z_{2})\leq\overline{\zeta_{p}}(z_{2})\)
Next
\[\overline{\eta_{\widetilde{p}}}(z_{1}z_{2}) =\bigwedge_{z_{3}\in[z_{1}z_{2}]_{\omega}}\eta_{\widetilde{p}}(z_ {3})\geq\bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{\widetilde{p }}(z_{3})\] \[=\bigwedge_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{\widetilde {p}}(MN)\geq\bigwedge_{N\in[z_{2}]_{\omega}}\eta_{\widetilde{p}}(N)\]
implies \(\overline{\eta_{\widetilde{p}}}(z_{1}z_{2})\geq\overline{\eta_{\widetilde{p}} }(z_{2})\)
\(\overline{\eta_{p}}(z_{1}z_{2})=\bigwedge_{z_{3}\in[z_{1}z_{2}]_{\omega}}\eta_{ p}(z_{3})\leq\bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{p}(z_{3})\)
\[=\bigwedge_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{p}(MN)\leq\bigwedge_{ N\in[z_{2}]_{\omega}}\eta_{p}(N)\]
implies \(\overline{\eta_{p}}(z_{1}z_{2})\leq\overline{\eta_{\widetilde{p}}}(z_{2})\)
implies that \(\overline{App}(P^{\square})\) is a \(P^{\square}_{LI}\) of \(S\). Similarly, \(\overline{App}(P^{\square})\) is a \(P^{\square}_{RI}\) of \(S\).
**Theorem 4.3**.: _Let \(\omega\) is a \(CR_{\omega}\) on \(S\) and let \(P^{\square}\) be a cubic Pythagorean fuzzy sub-semigroup of \(S\). Then \(\underline{App}(P^{\square})\) is a cubic Pythagorean fuzzy sub-semigroup of \(S\)._
Proof.: Since \(\omega\) is a \(CR_{\omega}\) on \(S\), then for all \(z_{1},z_{2}\in S\), \([z_{1}][z_{2}]=[z_{1}z_{2}]_{\omega}\). It is required to show that \(\underline{App}(P^{\square})=\big{(}\underline{\zeta_{p^{\square}}},\underline {\eta_{p^{\square}}}\big{)}\) is a cubic Pythagorean fuzzy sub-semigroup of \(S\), consider
\[\underline{\zeta_{\widetilde{p}}}(z_{1},z_{2}) =\bigwedge_{z_{3}\in[z_{1}z_{2}]_{\omega}}\zeta_{\widetilde{p}}(z _{3})\geq\bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{ \widetilde{p}}(z_{3})\] \[=\bigwedge_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widetilde {p}}(MN)\] \[\geq\bigwedge_{M\in[z_{1}]_{\omega},N\in[z_{2}]_{\omega}}\big{[} \zeta_{\widetilde{p}}(M)\bigwedge\zeta_{\widetilde{p}}(N)\big{]}\] \[=\left[\bigwedge_{M\in[z_{1}]_{\omega}}\big{[}\zeta_{\widetilde{p} }(M)\big{]}\right]\bigwedge\left[\bigwedge_{N\in[z_{2}]_{\omega}}\zeta_{ \widetilde{p}}(N)\right]\]
implies \(\underline{\zeta_{\widetilde{p}}}(z_{1},z_{2})\geq min\left\{\underline{ \zeta_{\widetilde{p}}}(z_{1}),\underline{\zeta_{\widetilde{p}}}(z_{2})\right\}\)
\(\underline{\zeta_{p}}(z_{1},z_{2})=\bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_ {\omega}}\zeta_{p}(z_{3})\)
\(\underline{\zeta_{p}}(z_{1},z_{2})\leq max\left\{\underline{\zeta_{p}}(M), \underline{\zeta_{p}}(N)\right\}\)
Further
\[\underline{\eta_{\widetilde{p}}}(z_{1},z_{2})=\bigvee_{z_{3}\in[z_{1}z_{2}]_{ \omega}}\eta_{\widetilde{p}}(z_{3})\geq\bigvee_{z_{3}\in[z_{1}]_{\omega}[z_{2 }]_{\omega}}\eta_{\widetilde{p}}(z_{3})\]
\[=\bigvee_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{\widetilde{p}}(MN)\] \[\geq\bigvee_{M\in[z_{1}]_{\omega},N\in[z_{2}]_{\omega}}\left[\eta_{ \widetilde{p}}(M)\bigwedge\eta_{\widetilde{p}}(N)\right]\] \[=\left[\bigvee_{M\in[z_{1}]_{\omega}}\left[\eta_{\widetilde{p}}( M)\right]\right]\bigwedge\left[\bigvee_{N\in[z_{2}]_{\omega}}\eta_{\widetilde{p}}(N)\right]\] \[\text{implies }\underline{\eta_{\widetilde{p}}}(z_{1}z_{2}) \geq min\left\{\underline{\eta_{\widetilde{p}}}(z_{1}),\underline{\eta_{ \widetilde{p}}}(z_{2})\right\}\] \[\leq\bigvee_{2_{3}\in[z_{1}z_{2}]_{\omega}}\eta_{p}(z_{3})\] \[=\bigvee_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{p}(MN)\] \[\leq\bigvee_{MN\in[z_{1}z_{2}]_{\omega}}\eta_{p}(MN)\] \[=\left(\bigvee_{M\in[z_{1}]_{\omega}}\eta_{p}(M)\right)\bigwedge \left(\bigvee_{N\in[z_{2}]_{\omega}}\eta_{p}(N)\right)\] \[\text{implies }\underline{\eta_{p}}(z_{1}z_{2})\leq max\left\{ \underline{\eta_{p}}(M),\underline{\eta_{p}}(N)\right\}.\]
**Theorem 4.4**.: _Let \(\omega\) be a \(CR_{\omega}\) on \(S\), and \(P^{\square}\) is a \(P^{\square}_{LI}\) (resp. \(P^{\square}_{RI}\)) of \(S\). Then \(\underline{App}(P^{\square})\) is a \(P^{\square}_{LI}\) (resp. \(P^{\square}_{RI}\)) of \(S\)._
Proof.: Since \(\omega\) is a \(CR_{\omega}\) on \(S\), we have for all \(z_{1},z_{2}\in S\) it follows that \([z_{1}][z_{2}]\subseteq[z_{1}z_{2}]_{\omega}\). We need to show that \(\underline{App}(P^{\square})=\left(\underline{\zeta_{p^{\square}}},\underline{ \eta_{p^{\square}}}\right)=\left\langle\left[\underline{\zeta_{\widetilde{p}}},\underline{\eta_{\widetilde{p}}}\right],\left(\underline{\zeta_{p}}, \underline{\eta_{p}}\right)\right\rangle\) is a \(P^{\square}_{LI}\) of \(S\).
Consider
\[\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2}) =\bigwedge_{z_{3}\in[z_{1}z_{2}]_{\omega}}\zeta_{\widetilde{p}}(z_ {3})\geq\bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widetilde {p}}(z_{3})\] \[=\bigwedge_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widetilde {p}}(MN)\geq\bigwedge_{N\in[z_{2}]_{\omega}}\zeta_{\widetilde{p}}(N)\] \[\text{implies }\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2})\geq \underline{\zeta_{\widetilde{p}}}(z_{2})\] \[\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2}) =\bigwedge_{z_{3}\in[z_{1}z_{2}]_{\omega}}\overline{\zeta_{p}}(z_ {3})\leq\bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{p}(z_{3})\] \[=\bigwedge_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{p}(MN) \leq\bigwedge_{N\in[z_{2}]_{\omega}}\zeta_{p}(N)\] \[\text{implies }\underline{\zeta_{p}}(z_{1}z_{2})\leq\underline{\zeta_{ p}}(z_{2})\] \[\text{Next}\] \[\underline{\eta_{\widetilde{p}}}(z_{1}z_{2}) =\bigvee_{z_{3}\in[z_{1}z_{2}]_{\omega}}\eta_{\widetilde{p}}(z_{3} )\geq\bigvee_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{\widetilde{p}}(z_ {3})\] \[=\bigvee_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{\widetilde{p} }(MN)\geq\bigvee_{N\in[z_{2}]_{\omega}}\eta_{\widetilde{p}}(N)\] \[\text{implies }\underline{\eta_{\widetilde{p}}}(z_{1}z_{2})\geq \underline{\eta_{\widetilde{p}}}(z_{2})\] \[\underline{\eta_{p}}(z_{1}z_{2}) =\bigvee_{\begin{subarray}{c}z_{3}\in[z_{1}z_{2}]_{\omega}\\ MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}\end{subarray}}\eta_{p}(MN)\leq\bigvee_{ N\in[z_{2}]_{\omega}}\eta_{p}(z_{3})\] \[\text{implies }\underline{\eta_{p}}(z_{1}z_{2})\leq\underline{\eta_{p}}(z_ {2})\] \[\text{implies }\underline{\eta_{\widetilde{p}}}(z_{1}z_{2})\leq max \left\{\underline{\eta_{p}}(M),\underline{\eta_{p}}(N)\right\}.\]
**Theorem 4.5**.: _Let \(\omega\) be a \(CR_{\omega}\) on \(S\), and \(P^{\square}\) is a \(P^{\square}_{LI}\) (resp. \(P^{\square}_{RI}\)) of \(S\). Then \(\underline{App}(P^{\square})\) is a \(P^{\square}_{LI}\) (resp. \(P^{\square}_{RI}\)) of \(S\)._
Proof.: Since \(\omega\) is a \(CR_{\omega}\) on \(S\), we have for all \(z_{1},z_{2}\in S\) it follows that \([z_{1}][z_{2}]\subseteq[z_{1}z_{2}]_{\omega}\). We need to show that \(\underline{App}(P^{\square})=\left(\underline{\zeta_{\widetilde{p^{\square}}} },\underline{\eta_{p^{\square}}}\right)=\left\langle\left[\underline{\zeta_{ \widetilde{p}}},\underline{\eta_{\widetilde{p}}}\right],\left(\underline{\zeta_{ p}},\underline{\eta_{p}}\right)\right\rangle\) is a \(P^{\square}_{LI}\) of \(S\).
Consider
\[\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2}) =\bigwedge_{z_{3}\in[z_{1}z_{2}]_{\omega}}\zeta_{\widetilde{p}}(z_ {3})\geq\bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widetilde{p}}(z_ {3})\] \[=\bigwedge_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{\widetilde{p}} (MN)\geq\bigwedge_{N\in[z_{2}]_{\omega}}\zeta_{\widetilde{p}}(N)\] \[\text{implies }\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2})\geq\underline{ \zeta_{\widetilde{p}}}(z_{2})\] \[\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2}) =\bigwedge_{z_{3}\in[z_{1}z_{2}]_{\omega}}\zeta_{p}(z_{3})\leq \bigwedge_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{p}(z_{3})\] \[=\bigwedge_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\zeta_{p}(MN) \leq\bigwedge_{N\in[z_{2}]_{\omega}}\zeta_{p}(N)\] \[\text{implies }\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2})\leq \underline{\zeta_{\widetilde{p}}}(z_{2})\] \[\text{Next}\] \[\underline{\eta_{\widetilde{p}}}(z_{1}z_{2}) =\bigvee_{z_{3}\in[z_{1}z_{2}]_{\omega}}\eta_{\widetilde{p}}(z_ {3})\geq\bigvee_{z_{3}\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{\widetilde{p}}(z_ {3})\] \[=\bigvee_{MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}}\eta_{\widetilde{p}} (MN)\geq\bigvee_{N\in[z_{2}]_{\omega}}\eta_{\widetilde{p}}(N)\] \[\text{implies }\underline{\eta_{\widetilde{p}}}(z_{1}z_{2})\geq\underline{ \eta_{\widetilde{p}}}(z_{2})\] \[\underline{\eta_{\widetilde{p}}}(z_{1}z_{2}) =\bigvee_{\begin{subarray}{c}z_{3}\in[z_{1}z_{2}]_{\omega}\\ MN\in[z_{1}]_{\omega}[z_{2}]_{\omega}\end{subarray}}\eta_{p}(MN)\leq\bigvee_{ N\in[z_{2}]_{\omega}}\eta_{p}(z
**Theorem 4.5**.: _Let \(\omega\) be a \(CR_{\omega}\) on semigroup \(S\). If \(P^{\square}\) is a \(P^{\square}_{BI}\) of \(S\). Then \(\overline{App}(P^{\square})\) is a \(P^{\square}_{BI}\) of \(S\)._
Proof.: Since \(\omega\) is a \(CR_{\omega}\) on the semigroup \(S\), we have for all \(z_{1},z_{2},z_{3}\in S\)
\([z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega}\subseteq[z_{1}z_{2}z_{3}]_{\omega}\), now show that \(\overline{App}(P^{\square})=\left(\overline{\zeta_{p^{\square}}},\overline{ \eta_{p^{\square}}}\right)\) is a \(P^{\square}_{BI}\) of \(S\). Consider the following
\[\begin{split}&\overline{\zeta_{\tilde{p}}}(z_{1}z_{2}z_{3})= \bigvee_{z\in[z_{1}z_{2}z_{3}]_{\omega}}\zeta_{\tilde{p}}(z)\geq\bigvee_{z\in[ z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega}}\zeta_{\tilde{p}}(z)\\ &=\bigvee_{abc\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega} }\zeta_{\tilde{p}}(abc)=\bigvee_{a\in[z_{1}]_{\omega}b\in[z_{2}]_{\omega}c\in [z_{3}]_{\omega}}\zeta_{\tilde{p}}(abc)\\ &\geq\bigvee_{a\in[z_{1}]_{\omega}c\in[z_{3}]_{\omega}}\left\{ \zeta_{\tilde{p}}(a)\bigwedge\zeta_{\tilde{p}}(c)\right\}\\ &=\left\{\bigvee_{a\in[z_{1}]_{\omega}}\zeta_{\tilde{p}}(a) \right\}\bigwedge\left\{\bigvee_{c\in[z_{3}]_{\omega}}\zeta_{\tilde{p}}(c) \right\}\end{split}\]
implies \(\overline{\zeta_{\tilde{p}}}(z_{1}z_{2}z_{3})\geq min\left\{\overline{\zeta_{ \tilde{p}}}(z_{1}),\overline{\zeta_{\tilde{p}}}(z_{3})\right\}\)
\(\overline{\zeta_{p}}(z_{1}z_{2}z_{3})=\bigvee_{z\in[z_{1}z_{2}z_{3}]_{\omega} }\zeta_{\tilde{p}}(z)\leq\bigvee_{z\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}] _{\omega}}\zeta_{p}(z)\\ &=\bigvee_{abc\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega} }\zeta_{p}(abc)=\bigvee_{a\in[z_{1}]_{\omega}b\in[z_{2}]_{\omega}c\in[z_{3}]_{ \omega}}\zeta_{p}(abc)\\ &\leq\bigvee_{a\in[z_{1}]_{\omega}c\in[z_{3}]_{\omega}}\left\{ \zeta_{p}(a)\bigvee\zeta_{p}(c)\right\}\\ &=\left\{\bigvee_{a\in[z_{1}]_{\omega}}\zeta_{p}(a)\right\} \bigvee\left\{\bigvee_{c\in[z_{3}]_{\omega}}\zeta_{p}(c)\right\}\\ &\leq\bigvee_{a\in[z_{1}]_{\omega}}\left\{\zeta_{p}(a)\bigvee \zeta_{p}(c)\right\}\\ &=\left\{\bigvee_{a\in[z_{1}]_{\omega}}\zeta_{p}(a)\right\} \bigvee\left\{\bigvee_{c\in[z_{3}]_{\omega}}\zeta_{p}(c)\right\}\\ \end{split}\]
implies \(\overline{\zeta_{p}}(z_{1}z_{2}z_{3})\leq max\left\{\overline{\zeta_{p}}(z _{1}),\overline{\zeta_{p}}(z_{3})\right\}\)
Next
\[\begin{split}&\overline{\eta_{\tilde{p}}}(z_{1}z_{2}z_{3})= \bigwedge_{z\in[z_{1}z_{2}z_{3}]_{\omega}}\eta_{\tilde{p}}(z)\geq\bigwedge_{z \in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega}}\eta_{\tilde{p}}(z)\\ &=\bigwedge_{abc\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega} }\eta_{\tilde{p}}(abc)=\bigwedge_{a\in[z_{1}]_{\omega}b\in[z_{2}]_{\omega}c\in [z_{3}]_{\omega}}\eta_{\tilde{p}}(abc)\\ &\geq\bigwedge_{a\in[z_{1}]_{\omega}c\in[z_{3}]_{\omega}}\left\{ \eta_{\tilde{p}}(a)\bigwedge\eta_{\tilde{p}}(c)\right\}\\ &=\left\{\bigwedge_{a\in[z_{1}]_{\omega}}\eta_{\tilde{p}}(a) \right\}\bigwedge\left\{\bigwedge_{c\in[z_{3}]_{\omega}}\eta_{\tilde{p}}(c) \right\}\\ \end{split}\]
implies \(\overline{\eta_{\tilde{p}}}(z_{1}z_{2}z_{3})\geq min\left\{\overline{\eta_{ \tilde{p}}}(z_{1}),\overline{\eta_{\tilde{p}}}(z_{3})\right\}\)
\(\overline{\eta_{p}}(z_{1}z_{2}z_{3})=\bigwedge_{z\in[z_{1}z_{2}z_{3}]_{\omega} }\eta_{\tilde{p}}(z)\leq\bigwedge_{z\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{ \omega}}\eta_{p}(z)\\ &\leq\bigwedge_{a\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{ \omega}}\eta_{p}(abc)=\bigwedge_{a\in[z_{1}]_{\omega}b\in[z_{2}]_{\omega}c\in [z_{3}]_{\omega}}\eta_{p}(abc)\\ &\leq\bigwedge_{a\in[z_{1}]_{\omega}c\in[z_{3}]_{\omega}}\left\{\eta _{p}(a)\bigvee\eta_{p}(c)\right\}\\ &=\left\{\bigwedge_{a\in[z_{1}]_{\omega}}\eta_{p}(a)\right\} \bigvee\left\{\bigwedge_{c\in[z_{3}]_{\omega}}\eta_{p}(c)\right\}\\ \end{split}\)
implies \(\overline{\eta_{p}}(z_{1}z_{2}z_{3})\leq max\left\{\overline{\eta_{p}}(z_{1}), \overline{\eta_{p}}(z_{3})\right\}\)
**Theorem 4.6**.: _Let \(\omega\) be a complete \(CR_{\omega}\) on semigroup \(S\). Let \(P^{\square}\) is a \(P^{\square}_{BI}\) of \(S\). Then \(\underline{App}(P^{\square})\) is a \(P^{\square}_{BI}\) of \(S\)._
Proof.: Since \(\omega\) is a \(CR_{\omega}\) on the semigroup \(S\), we have for all \(z_{1},z_{2},z_{3}\in S\)
\([z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega}\subseteq[z_{1}z_{2}z_{3}]_{\omega}\), we show that \(\underline{App}(P^{\square})=\left(\underline{\zeta_{p^{\square}}},\underline{ \eta_{p^{\square}}}\right)\) is a \(P^{\square}_{BI}\) of \(S\).
Consider the following
\[\begin{split}\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2}z_{3})& =\bigwedge_{z\in[z_{1}z_{2}z_{3}]_{\omega}}\zeta_{\widetilde{p}}(z)\geq \bigwedge_{z\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega}}\zeta_{ \widetilde{p}}(z)\\ &=\bigwedge_{abc\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{ \omega}}\zeta_{\widetilde{p}}(abc)=\bigwedge_{a\in[z_{1}]_{\omega}b\in[z_{2}]_ {\omega}\in[z_{3}]_{\omega}}\zeta_{\widetilde{p}}(abc)\\ &\geq\bigwedge_{a\in[z_{1}]_{\omega}c\in[z_{3}]_{\omega}}\big{\{} \zeta_{\widetilde{p}}(a)\bigwedge\zeta_{\widetilde{p}}(c)\big{\}}\\ &=\left\{\bigwedge_{a\in[z_{1}]_{\omega}}\zeta_{\widetilde{p}}(a) \right\}\bigwedge\left\{\bigwedge_{c\in[z_{3}]_{\omega}}\zeta_{\widetilde{p}}( c)\right\}\\ \text{implies }\underline{\zeta_{\widetilde{p}}}(z_{1}z_{2}z_{3}) \geq min\left\{\zeta_{\widetilde{p}}(z_{1}),\zeta_{\widetilde{p}}(z_{3}) \right\}\\ \underline{\zeta_{p}}(z_{1}z_{2}z_{3})&=\bigwedge_{z \in[z_{1}z_{2}z_{3}]_{\omega}}\zeta_{\widetilde{p}}(z)\leq\bigwedge_{z\in[z_{1 }]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega}}\zeta_{p}(z)\\ &=\bigwedge_{abc\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{ \omega}}\zeta_{p}(abc)=\bigvee_{a\in[z_{1}]_{\omega}b\in[z_{2}]_{\omega}c\in[z_ {3}]_{\omega}}\zeta_{p}(abc)\\ &\leq\bigwedge_{a\in[z_{1}]_{\omega}c\in[z_{3}]_{\omega}}\left\{ \zeta_{p}(a)\bigvee\zeta_{p}(c)\right\}\\ &=\left\{\bigwedge_{a\in[z_{1}]_{\omega}}\zeta_{p}(a)\right\} \bigvee\left\{\bigwedge_{c\in[z_{3}]_{\omega}}\zeta_{p}(c)\right\}\\ \text{implies }\underline{\zeta_{p}}(z_{1}z_{2}z_{3}) \leq max\left\{\underline{\zeta_{p}}(z_{1}),\underline{\zeta_{p}}(z_{3}) \right\}\\ \text{Next}\end{split}\]
\[\begin{split}\underline{\eta_{\widetilde{p}}}(z_{1}z_{2}z_{3}) &=\bigvee_{z\in[z_{1}z_{2}z_{3}]_{\omega}}\eta_{\widetilde{p}}(z) \geq\bigvee_{z\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega}}\eta_{ \widetilde{p}}(z)\\ &=\bigvee_{abc\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega} }\eta_{\widetilde{p}}(abc)=\bigvee_{a\in[z_{1}]_{\omega}b\in[z_{2}]_{\omega}c \in[z_{3}]_{\omega}}\eta_{\widetilde{p}}(abc)\\ &\geq\bigvee_{a\in[z_{1}]_{\omega}c\in[z_{3}]_{\omega}}\big{\{} \eta_{\widetilde{p}}(a)\bigwedge\eta_{\widetilde{p}}(c)\big{\}}\\ &=\left\{\bigvee_{a\in[z_{1}]_{\omega}}\eta_{\widetilde{p}}(a) \right\}\bigwedge\left\{\bigvee_{c\in[z_{3}]_{\omega}}\eta_{\widetilde{p}}(c) \right\}\\ \text{implies }\underline{\eta_{\widetilde{p}}}(z_{1}z_{2}z_{3}) \geq min\left\{\eta_{\widetilde{p}}(z_{1}),\eta_{\widetilde{p}}(z_{3}) \right\}\\ \underline{\eta_{p}}(z_{1}z_{2}z_{3})&=\bigvee_{z\in [z_{1}z_{2}z_{3}]_{\omega}}\eta_{\widetilde{p}}(z)\leq\bigvee_{z\in[z_{1}]_{ \omega}[z_{2}]_{\omega}[z_{3}]_{\omega}}\eta_{p}(z)\\ &=\bigvee_{abc\in[z_{1}]_{\omega}[z_{2}]_{\omega}[z_{3}]_{\omega} }\eta_{p}(abc)=\bigvee_{a\in[z_{1}]_{\omega}b\in[z_{2}]_{\omega}c\in[z_{3}]_{ \omega}}\eta_{p}(abc)\\ &\leq\bigvee_{a\in[z_{1}]_{\omega}\omega}\left\{\eta_{p}(a) \bigvee\eta_{p}(c)\right\}\\ &=\left\{\bigvee_{a\in[z_{1}]_{\omega}}\eta_{p}(a)\right\} \bigvee\left\{\bigvee_{c\in[z_{3}]_{\omega}}\eta_{p}(c)\right\}\\ \text{implies }\underline{\eta_{p}}(z_{1}z_{2}z_{3})\leq max \left\{\underline{\eta_{p}}(z_{1}),\underline{\eta_{p}}(z_{3})\right\}\\ \end{split}\]
**Theorem 4.7**.: _Let \(\omega\) be a \(CR_{\omega}\) on semigroup \(S\). If \(P^{\square}\) is a \(P^{\square}_{II}\) of \(S\). Then \(\overline{App}(P^{\square})\) is a \(P^{\square}_{II}\) of \(S\)._
Proof.: Proof directly follow from theorem 4.5
**Theorem 4.8**.: _Let \(\omega\) be a complete \(CR_{\omega}\) on semigroup \(S\). Let \(P^{\square}\) is a \(P^{\square}_{II}\) of \(S\). Then \(\underline{App}(P^{\square})\) is a \(P^{\square}_{II}\) of \(S\)._
Proof.: Proof directly follow from theorem 4.6
## 5. Conclusions
Cubic Pythagorean fuzzy sets are the generalization of cubic sets. In this paper, we have presented the concept of rough cubic Pythagorean fuzzy sets in semigroups, which can handle the vagueness in a proactive way than cubic sets. Then, we have extended the notion of rough cubic Pythagorean fuzzy sets to the lower and upper approximations of Pythagorean fuzzy left (right)ideals, bi-ideals, interior ideals in semigroups and also discussed some of its related properties. We aim to extend this work to some algebraic structures namely gamma semigroup, Po-gamma-semigroup, and subtraction semigroup.
|
2308.14071 | Supporting Passive Users in mmWave Networks | The interference from active to passive users is a well-recognized challenge
in millimeter-wave (mmWave) communications. We propose a method that enables to
limit the interference on passive users (whose presence may not be detected
since they do not transmit) with a small penalty to the throughput of active
users. Our approach abstracts away (in a simple, yet informative way) the
physical layer component and it leverages the directivity of mmWave links and
the available network path diversity. We provide linear programming
formulations, lower bounds on active users rates, numerical evaluations, and we
establish a connection with the problem of (information theoretically) secure
communication over mmWave networks. | Mine Gokce Dogan, Martina Cardone, Christina Fragouli | 2023-08-27T11:05:10Z | http://arxiv.org/abs/2308.14071v1 | # Supporting Passive Users in mmWave Networks
###### Abstract
The interference from active to passive users is a well-recognized challenge in millimeter-wave (mmWave) communications. We propose a method that enables to limit the interference on passive users (whose presence may not be detected since they do not transmit) with a small penalty to the throughput of active users. Our approach abstracts away (in a simple, yet informative way) the physical layer component and it leverages the directivity of mmWave links and the available network path diversity. We provide linear programming formulations, lower bounds on active users rates, numerical evaluations, and we establish a connection with the problem of (information theoretically) secure communication over mmWave networks.
+
Footnote †: The research carried out at UCLA was supported in part by the U.S. National Science Foundation (NSF) awards ECCS-2229560 and CNS-2146838. The work of M. Cardone was supported in part by the NSF under Grants CCF-2045237 and CNS-2146838.
## I Introduction
The fact that active users can destructively interfere with the operation of passive users is a well-recognized challenge in millimeter-wave (mmWave) communications. MmWave infrastructure is increasingly deployed to support a large variety of active user applications, such as virtual reality communications, wearable devices, vehicular networks, and 5G communication systems [1, 2, 3, 4]. However, the same spectrum is shared by a number of passive users (i.e., users that do not transmit and can be significantly impacted) such as the Global Positioning System (GPS), passive remote sensing systems, and satellites that study Earth exploration, weather monitoring, and radio-astronomy [5, 6, 7, 8, 9]. In this paper, we propose and evaluate an approach that aims to provide suitable guidelines on how to support a resilient coexistence between passive and active users over mmWave networks.
Supporting the coexistence of passive and active users is a challenging task, for good reasons: by definition, passive users do not transmit and hence, their presence might not be detected. Moreover, they may be mobile and intermittent, changing their location and their time periods of operation. The question we ask in this paper is, can we guarantee a certain amount of interference-free operation to passive users, while not significantly impacting the experienced performance (communication rates) of the active users?
Our main observation is that, perhaps this is possible over mmWave networks that provide sufficient path diversity. In particular, we propose to constrain each active user to only transmit for up to a desired fraction of time \(\theta\) over each link. Due to the directivity of mmWave communications, this translates to that, with very high probability, each passive user will enjoy interference-free operation for a \((1-\theta)\) time fraction. It is not difficult to see that if the capacity of a mmWave network is \(\mathsf{C}\), then we can certainly achieve the rate \(\theta\mathsf{C}\) with this operation. The interesting part is that, provided that there exists sufficient path diversity over the network, it may be possible to achieve much higher rates by appropriately designing scheduling and routing schemes. For instance, our numerical evaluations indicate that for \(\theta=0.2\) over randomly generated networks, we can almost always achieve \(85\%\) of the unrestricted (oblivious to passive users) network capacity.
Technically, we build on the so-called 1-2-1 network model that offers a simple yet informative model for mmWave networks [10, 11, 12]. The model abstracts away the physical layer component and it focuses on capturing an inherent and dominant characteristic of mmWave communications, that is, directivity: mmWave requires beamforming with narrow beams to compensate for the high path loss incurred by isotropic transmissions. Because of this, both the mmWave transmitter and receiver employ antenna arrays to electronically steer and direct their beams towards each other. This will activate the link between them for communication, which was termed as 1-2-1 link by the authors in [10]. In particular, [10] proved that the capacity of a Gaussian noise 1-2-1 network can be approximated to within a constant (i.e., which only depends on the number of network nodes) additive gap and its optimal beam schedule can be computed in polynomial-time. We leverage the results in [10] to develop efficient transmission and scheduling mechanisms that offer suitable guidelines on how to support the coexistence with passive users in mmWave networks. We analyze the impact of passive users on the approximate network capacity and provide guarantees on the achieved rate. Our main contributions are as follows:
* For arbitrary mmWave networks, we formulate the problem of finding the maximum rate achieved while limiting the interference at every node as a Linear Program (LP) and show that it efficiently (i.e., in polynomial time in the network size) finds a beam schedule that supports passive users to desired thresholds.
* For arbitrary mmWave networks, we derive lower bounds on the active user rates. We also derive lower bounds on the necessary and sufficient number of paths that achieve a target rate while supporting passive users in arbitrary mmWave networks with unit capacity links. We provide
numerical evaluations over randomly generated networks for unequal link capacities.
* We identify a connection between the passive user problem and the (information theoretically) secure communication problem. For arbitrary mmWave networks with unit capacity links, we prove a reduction between these two problems and provide guarantees on the achieved rates in both problems.
**Related Work.** A multitude of works in the literature study scheduling and routing in wireless networks by exploring multi-path diversity [13, 14, 15, 16]. However, these works do not consider passive users and thus, their proposed approaches do not provide interference mitigation between passive and active users. Several works study passive users in passive remote sensing systems and for satellites that study Earth exploration and radio-astronomy [17, 18, 19, 20]. They study the characterization of these wireless systems and propose routing algorithms or interference mitigation techniques to support passive users. However, these works focus on traditional wireless networks and they do not consider the scheduling constraints of mmWave communications. There exist studies that address path selection and rate allocation problems in multi-hop mmWave networks [21, 22, 23, 24, 25]. However, these works do not consider passive users. Closer to our work, there exist studies that focus on passive users in directional communication networks, such as mmWave and Terahertz (THz) networks. These works study possible interference scenarios and propose interference mitigation techniques such as employing highly directive and electrically steerable antennas [5], using spread spectrum techniques [8], or sharing the spectrum between active and passive users [6, 7, 9]. However, these works do not propose routing algorithms for path selection and rate allocation, and they do not provide information-theoretical guarantees on the active user rates. Differently, in our work we leverage the directivity of mmWave links and path diversity to develop efficient scheduling mechanisms that support passive users, and derive theoretical guarantees on the active user rates.
**Paper Organization.** Section II provides background on the 1-2-1 network model. Section III presents the proposed scheduling mechanisms that support passive users, and it provides lower bounds on the active user rates and numerical evaluations. Section IV introduces a connection between the passive user problem and the (information theoretically) secure communication problem. Section V concludes the paper.
## II System Model and Background
**Notation.**\([a:b]\) is the set of integers from \(a\) to \(b>a\); \(|\cdot|\) denotes the cardinality for sets and the absolute value for scalars; \(\mathds{1}_{P}\) is the indicator function.
We consider an \(N\)-relay Gaussian noise Full-Duplex (FD) 1-2-1 network model where \(N\) relays assist the communication between the source node (node \(0\)) and the destination node (node \(N+1\)). At any particular time, each relay can simultaneously transmit and receive by using a single transmit beam and a single receive beam. Thus, at any particular instance, a relay node can transmit to at most one node and it can receive from at most one node. The source (respectively, destination) can transmit to (respectively, receive from) \(M\) other nodes i.e., on \(M\) outgoing links (respectively, on \(M\) incoming links), simultaneously. Formally, in a Gaussian FD 1-2-1 network, the received signal at node \(j\in[1\!:\!N\!+\!1]\) can be written as,
\[Y_{j}=\sum_{i\in[0:N]\setminus\{j\}}h_{ji}\mathds{1}_{\{i\in S_{j},r,j\in S_ {i,t}\}}X_{i}+Z_{j}, \tag{1}\]
where: (i) \(X_{i}\) is the channel input at node \(i\in[0\!:\!N]\) with power constraint \(\mathbb{E}\left[|X_{i}|^{2}\right]\leq P\); (ii) \(h_{ji}\in\mathbb{C}\) is the complex channel coefficient1 from node \(i\) to node \(j\); (iii) \(S_{i,t}\) and \(S_{j,r}\) represent the node(s) towards which node \(i\) is beamforming its transmissions and the node(s) towards which node \(j\) is pointing its receive beam(s); and (iv) \(Z_{j}\) indicates the additive white Gaussian noise at node \(j\); noises across the network are independent and identically distributed as \(\mathcal{CN}(0,1)\).
Footnote 1: The channel coefficients are assumed to be constant for the whole transmission duration and known by the network.
**Remark 1**: _Although 1-2-1 networks capture the essence of mmWave communications and enable to derive useful insights on near-optimal information flow algorithms, the model makes a number of simplifying assumptions that include: 1) not considering the overhead of channel knowledge and of beam-steering2, and 2) assuming no interference among communication links (a reasonable assumption for relays spaced further apart than the beam width). However, in [27] the authors relaxed this last assumption and considered networks where nodes are equipped with imperfect beams that have side-lobe leakage. They showed that even with imperfect side-lobe attenuation, the 1-2-1 model is a viable approximation when certain operational conditions on the beamforming pattern are satisfied. Thus, for networks satisfying the conditions introduced in [27], the results naturally extend._
Footnote 2: Following beam alignment, the channel time variations are reduced significantly [26] and hence, the channel state changes much slower than the rate of communication.
We next discuss some known capacity results for Gaussian FD 1-2-1 networks for the case \(M=1\).
**Capacity of Gaussian FD 1-2-1 networks.** In [10], it was shown that the memoryless channel model in (1) allows to upper bound the channel capacity using the information-theoretic cut-set upper bound. The authors showed that the unicast capacity of an \(N\)-relay Gaussian FD 1-2-1 network can be approximated to within an additive gap that only depends on the number of nodes in the network. In particular, the following LP was proposed to compute the unicast approximate capacity and its optimal schedule,
\[\begin{split}&\mathrm{P1}:\,\overline{\mathsf{C}}=\max_{x_{p},p\in \mathcal{P}}\sum_{p\in\mathcal{P}}x_{p}\mathsf{C}_{p}\\ &(\mathrm{P1}a)\,\,x_{p}\geq 0,\qquad\qquad\qquad\qquad\forall p \!\in\!\mathcal{P},\\ &(\mathrm{P1}b)\,\,\sum_{p\in\mathcal{P}_{i}}x_{p}f^{p}_{p,\text{tr }(i),i}\!\leq\!1,\quad\forall i\!\in\![0\!:\!N],\\ &(\mathrm{P1}c)\,\,\sum_{p\in\mathcal{P}_{i}}x_{p}f^{p}_{i,p,p(p) }\!\leq\!1,\quad\forall i\!\in\![1\!:\!N\!+\!1],\end{split} \tag{2}\]
where: (i) \(\overline{\mathsf{C}}\) is the approximate capacity; (ii) \(\mathcal{P}\) is the collection of all paths connecting the source to the destination; (iii) \(\mathsf{C}_{p}\) is the capacity of path \(p\); (iv) \(\mathcal{P}_{i}\subseteq\mathcal{P}\) is the set of paths that pass through node \(i\) where \(i\in[0\!:\!N\!+\!1]\); (v) \(p.\mathit{nx}(i)\) (respectively, \(p.\mathit{pr}(i)\)) is the node that follows (respectively, precedes) node \(i\) in path \(p\); (vi) \(x_{p}\) is the fraction of time that path \(p\) is used; and (vii) \(f_{j,i}^{p}\) is the optimal activation time for the link of capacity \(\ell_{ji}\) when path \(p\) is operated, i.e., \(f_{j,i}^{p}=\mathsf{C}_{p}/\ell_{ji}\). Here, \(\ell_{ji}\) denotes the capacity of the link going from node \(i\) to node \(j\) where \((i,j)\in[0\!:\!N\!]\times[1\!:\!N\!+\!1]\).
Although the number of variables in the LP \(\mathrm{P}1\) (particularly, the number of paths) can be exponential in the number of nodes, this LP can indeed be solved in polynomial time through the following equivalent LP as proved in [10]. We refer readers to [10] for a more detailed description.
\[\mathrm{P}2:\overline{\mathsf{C}}=\max_{\lambda,\mathsf{F}}\! \!\!\!\sum_{\mathsf{j}=0}^{\mathsf{N}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Since, as proved in [10], the LP \(\mathrm{P1}\) in (2) and the LP \(\mathrm{P2}\) in (3) are equivalent LPs, we can use an optimal solution of the LP \(\mathrm{P1}\) to derive a tighter bound on \(\mathsf{C}\). We let \(\mathcal{P}^{\star}\) denote the set of active paths in the optimal solution of the LP \(\mathrm{P1}\) (without considering passive users) and \(x_{p}^{\star}\) denote the optimal activation time of the path \(p\in\mathcal{P}^{\star}\). We also let \(\mathcal{E}_{p}\) denote the set of links of path \(p\in\mathcal{P}^{\star}\). The following proposition (proof in Appendix A) presents a tighter lower bound on \(\mathsf{C}\) than (5) and (6) by leveraging the paths in \(\mathcal{P}^{\star}\).
**Proposition 1**: _For an \(N\)-relay Gaussian FD 1-2-1 network with an arbitrary topology, the 1-2-1 passive capacity \(\mathsf{C}\) can be lower bounded as follows,_
\[\mathsf{C}\geq\sum_{p\in\mathcal{P}^{\star}}\min\left(x_{p}^{\star},\frac{ \tilde{\theta}_{p}x_{p}^{\star}}{\tilde{\lambda}_{p}}\right)\mathsf{C}_{p}, \tag{7}\]
_where \(\tilde{\theta}_{p}\!=\!\min_{(i,j)\in\mathcal{E}_{p}}\theta_{ji}\) and \(\tilde{\lambda}_{p}\!=\!\max_{(i,j)\in\mathcal{E}_{p}}\lambda_{ji}^{\star}\)._
**Remark 4**: _We note that \(\tilde{\theta}_{p}\!\!=\!\min_{(i,j)\in\mathcal{E}_{p}}\theta_{ji}\geq\tilde{\theta}\) and \(\tilde{\lambda}_{p}\!\!=\!\max_{(i,j)\in\mathcal{E}_{p}}\lambda_{ji}^{\star}\leq\tilde{\lambda}\). This readily implies that \(\tilde{\theta}/\tilde{\lambda}\) in (6) is smaller than or equal to \(\tilde{\theta}_{p}/\tilde{\lambda}_{p}\ \forall p\in\mathcal{P}^{\star}\). Thus, the bound in (7) is a tighter bound than the one in (6)._
Proposition 1 shows that the paths activated by an optimal solution of the LP \(\mathrm{P1}\) in (2) can be leveraged to achieve the lower bound in (7). In the example below, we highlight that we can indeed achieve a higher rate than the lower bound in (7) by distributing the traffic across a larger number of paths.
_Example 1_. Consider the network with \(N=5\) relay nodes in Fig. 1. There exist \(5\) paths connecting the source (node \(0\)) to the destination (node \(6\)), namely \(p_{1}:0\to 1\to 6,\ p_{2}:0\to 2\to 6,\ p_{3}:0\to 3\to 6,\ p_{4}:0\to 4\to 6\), and \(p_{5}:0\to 5\to 6\). We consider unit capacity links except for the links in \(p_{1}\) for which the link capacities are equal to \(2\). We assume \(M=1\) and \(\theta_{ji}=0.2\ \forall(i,j)\!\in\!\![0\!:\!N]\!\times\![1\!:\!N\!+\!1]\). The optimal solution of the LP \(\mathrm{P1}\) in (2) (without the passive users constraint in (4)) activates only path \(p_{1}\) to achieve the approximate capacity \(\overline{\mathsf{C}}=2\). We can reduce the activation time of \(p_{1}\) to \(0.2\) in order to satisfy the constraint in (4), and this would achieve the rate \(0.4\), which is equal to the lower bound in Proposition 1. However, if we perform equal time sharing across all of the \(5\) paths, each path is activated for \(0.2\) fraction of time, and the constraint in (4) is still satisfied. This solution achieves a rate equal to \(1.2\). \(\square\)
In Example 1, we showed that we can leverage the path diversity in a network to achieve a higher rate than the lower bound in Proposition 1. A question that naturally arises is: how many paths would be sufficient to achieve a certain target rate while limiting the interference on the passive users? Or, are there any intrinsic properties of the paths that should be leveraged?
### _Number of Paths for Target Rates_
Here, we provide an answer to the questions above. Towards this end, we let \(H_{e}\) (respectively, \(H_{v}\)) denote the maximum number of edge-disjoint (respectively, vertex-disjoint) paths connecting the source to the destination in the network. The next theorem (proof in Appendix B) provides lower bounds on \(H_{e}\) and \(H_{v}\) that ensure target rates for the active users.
**Theorem 1**: _Consider an \(N\)-relay Gaussian FD 1-2-1 network with an arbitrary topology and unit capacity links, and let \(\theta\) be the threshold on the activation times of the links in the network. Then, the LP \(\mathrm{P2}\) in (3) (without the passive users constraint in (4)) outputs \(\overline{\mathsf{C}}\), and the following holds: \(\bullet\) For \(M=1\): The rate \(\theta_{e}\overline{\mathsf{C}}\) can be achieved if and only if_
\[H_{e}\geq\frac{\theta_{c}}{\theta}\overline{\mathsf{C}}, \tag{8}\]
_where \(0\leq\theta_{c}\leq 1\), and \(\overline{\mathsf{C}}=1\). \(\bullet\) For \(M>1\): The rate \(\theta_{c}\overline{\mathsf{C}}\) can be achieved whenever_
\[H_{v}\geq\frac{\theta_{c}}{\theta}\overline{\mathsf{C}}, \tag{9}\]
_where \(0\leq\theta_{c}\leq 1\), and \(\overline{\mathsf{C}}=\min\left(M,H_{v}\right)\)._
**Remark 5**: _Theorem 1 can be directly extended to the case in which there exists a threshold \(\theta_{ji}\) on the activation time of the link going from node \(i\) to node \(j\), \(\forall(i,j)\!\in\!\![0\!:\!N]\!\times\![1\!:\!N\!+\!1]\). To this end, we can simply replace \(\theta\) in (8) and (9) with the minimum threshold value \(\hat{\theta}\) which was defined in (5), and find lower bounds on the number of paths to use for target rates._
The lower bounds in Theorem 1 were derived for networks with unit capacity links. We next numerically evaluate the 1-2-1 passive capacity \(\mathsf{C}\) and \(H_{e}\) for networks with _unequal_ link capacities. Towards this end, we randomly generated a network with \(N=10\) relay nodes and performed \(1000\) trials over this network. In each trial, we generated a different set of link capacities from the Gaussian distribution with mean \(1\) and variance \(0.1\). We assumed that \(M=1\) and we set the threshold on the link activation times equal to \(\theta=0.2\). In Fig. 2, we plotted the achieved percentage of the approximate capacity \(\overline{\mathsf{C}}\) over \(1000\) trials while satisfying the constraint in (4). From Fig. 2, we note that, on average, the passive capacity \(\mathsf{C}\) is approximately \(85\%\) of \(\overline{\mathsf{C}}\). This shows that, although the activation times of the links in the network can be at most \(0.2\), our LP formulation finds a schedule that, in every trial, achieves a rate much higher than the naive lower bound in (5) equal to \(0.2\overline{\mathsf{C}}\). Thus, we decrease the penalty on the active users throughput. In Fig. 3, we show the maximum number of edge-disjoint paths activated by an optimal solution of the LP \(\mathrm{P2}\) in (3) with the constraint in (4). Although we consider unequal link capacities, the maximum number of active edge-disjoint paths varies closely around the bound in (8) for \(\theta_{c}=1\). Our evaluation shows that if the
Fig. 1: An mmWave network example with \(5\) relay nodes.
link capacities have a small variance, the lower bound in (8) still gives a reasonable estimate on the required number of edge-disjoint paths to achieve \(\overline{\mathsf{C}}\) (or the target rate \(\theta_{c}\overline{\mathsf{C}}\)).
## IV Connection to Secure Communication
In this section, we present a connection between the passive user problem investigated in Section III, and the (information theoretically) secure communication problem. In particular, we show a reduction between these two problems and provide guarantees on the achieved rates.
In the (information theoretically) secure communication problem over 1-2-1 networks [28], an arbitrary 1-2-1 network with unit capacity links is considered, where a passive eavesdropper wiretaps any \(K\) links of the network. The authors showed that the source can securely communicate with the destination at high rates by leveraging directivity and multipath diversity in mmWave networks. Particularly, the source can vary which paths it operates over time and this is possible thanks to the fact that we may have several possible choices of paths to achieve the unsecure capacity. Thus, in the secure communication problem over 1-2-1 networks, the traffic is distributed across multiple paths to achieve a high secure rate. This is similar to the passive user problem where we again distribute the traffic across multiple paths to ensure that the activation time of each link in the network is below a certain threshold. Thus, we here aim to perform a reduction between the passive user problem and the secure communication problem. Particularly, we leverage a high-performing (e.g., sometimes capacity achieving) scheme for one problem and see what rate it can guarantee for the other.
In the secure communication problem, we consider the case where a passive eavesdropper can wiretap any \(K\) links of the network. In the passive user problem, recall that \(\theta_{ji}\) is the threshold on the activation time of the link going from node \(i\) to node \(j\), \(\forall(i,j)\!\in\!\!\left[0:N\right]\!\times\!\!\left[1:N\!+\!1\right]\) (see (4)). We let \(H_{e}\) denote the maximum number of edge-disjoint paths connecting the source to the destination, and we denote the corresponding set of edge-disjoint paths by \(p_{[1:H_{e}]}\subseteq\mathcal{P}\). Similarly, we let \(H_{v}\) be the maximum number of vertex-disjoint paths connecting the source to the destination, and we denote the corresponding set of vertex-disjoint paths by \(p_{[1:H_{v}]}\subseteq\mathcal{P}\). Theorem 2 (proof in Appendix C) formally presents our results.
**Theorem 2**: _Consider an \(N\)-relay Gaussian FD 1-2-1 network with an arbitrary topology and unit capacity links. Let \(H_{e}\) (respectively, \(H_{v}\)) denote the maximum number of edge-disjoint (respectively, vertex-disjoint) paths connecting the source to the destination. Then the following holds: \(\bullet\) By using the paths activated by an optimal passive user scheme and the corresponding path activation times, we can guarantee a secure rate \(R\) such that,_
\[R=\mathsf{C}-\max_{b\in\mathcal{B}}\sum_{(i,j)\in b}\theta_{ji}, \tag{10}\]
_where \(\mathsf{C}\) denotes the 1-2-1 passive capacity and \(\mathcal{B}\) is the set of all combinations of \(K\) links. \(\bullet\) By leveraging the paths activated by a secure communication scheme and the corresponding path activation times, we can guarantee a rate \(R\) in the passive user problem such that,_
\[R=\left\{\begin{array}{ll}\sum_{p\in p_{[1:H_{e}]}}\min\left(\frac{1}{H_{e }},\theta_{p}\right)&\mbox{if }M=1,\\ \sum_{p\in p_{[1:H_{v}]}}\min\left(\frac{M}{H_{v}},\theta_{p}\right)&\mbox{if }M>1, \end{array}\right. \tag{11}\]
_where \(\theta_{p}\!\!=\!\min_{(i,j)\in\mathcal{E}_{p}}\theta_{ji}\)._
We now show that the connection presented in Theorem 2 is useful as there exist scenarios in which the same set of paths characterize both the passive and secure capacities. Toward this end, we consider the case \(M=1\) and for the passive user problem, we assume that \(\theta\) is the threshold on the activation times of the links in the network. For \(M=1\), the optimal passive user scheme is found by solving the LP \(\mathrm{P}2\) in (3) with the constraint in (4). As we showed in Appendix B, there exists an optimal solution that activates the maximum number of edge-disjoint paths. In [28], the optimal secure scheme that achieves the secure capacity also activates the maximum number of edge-disjoint paths. Thus, the set of edge-disjoint paths \(p_{[1:H_{e}]}\) characterize both the passive capacity and the secure capacity. For example, consider the network in Fig. 1 with unitary link capacities and \(\theta=0.2\). There exist \(H_{e}=5\) edge-disjoint paths in the network. An optimal solution for the passive user problem activates all five paths such that the activation time of each path is equal to \(0.2\) and the passive
Fig. 3: Maximum number of edge-disjoint paths.
Fig. 2: Achieved percentage of the approximate capacity.
capacity is \(\mathsf{C}=1\). This is equal to the rate \(R\) in (11). The optimal secure scheme in [28] also activates all five paths such that the activation time of each path is equal to \(0.2\) and the secure capacity is equal to \(1-K/H_{e}=1-0.2K\). This is equal to the rate \(R\) in (10). Thus, there exist scenarios where the lower bounds in Theorem 2 are tight and exactly equal to the capacity. In the general case where each link \((i,j)\) has a different threshold \(\theta_{ji}\), the set of edge-disjoint paths \(p_{[1:H_{e}]}\) and equal time sharing across these paths with activation time \(1/H_{e}\) characterize both the passive capacity and the secure capacity if \(1/H_{e}\leq\hat{\theta}\), where \(\hat{\theta}\) is defined in (5). In the above discussion, we have considered the case \(M=1\); however, it is possible to extend the analysis when \(M>1\), i.e., also for this case there are scenarios in which the same set of paths characterize both the passive and the secure capacities.
## V Conclusions
We have proposed and evaluated an approach that aims to support resilient coexistence of passive and active users in mmWave networks. Our aim was to guarantee a certain amount of interference-free operation to passive users, while not significantly impacting the rates of the active users. We formulated the problem of finding the maximum rate achieved in arbitrary mmWave networks, while limiting the interference at every node, as an LP. We derived lower bounds on the rates of the active users and on the number of paths that can achieve a target rate, while supporting passive users. We numerically evaluated our results, which showcase the effectiveness of our approach, e.g., \(85\%\) of the unrestricted (oblivious to passive users) network capacity can be achieved even when \(\theta=0.2\). Finally, we established a connection between the passive user problem and the problem of (information theoretically) secure communication in mmWave networks. In particular, we performed a reduction between the two problems, and we showed that there are scenarios in which the same set of paths can characterize both the passive and the secure capacities.
|
2303.16514 | Axion Like Particle Search at Higgs Factories | We study the potential of the future Higgs factories, including the ILC,
CEPC, and FCC-ee with $\sqrt{s}$ = 240-250 GeV on discovering axion-like
particles (ALPs) through various production channels in the leptonic final
states, $e^+e^- \to f\bar{f} a$, where $f=e,\mu,\nu$. We show that the $e^+e^-
\to e^+e^- a$ with $a \to \gamma\gamma$ provides the best bounds for the
$g_{a\gamma\gamma}$ and $g_{aZZ}$ couplings, while $e^+e^- \to \nu\bar{\nu}a$,
with $a \to \gamma\gamma$ offers the best bounds for the $g_{aZZ}$ and
$g_{aZ\gamma}$ couplings. The $e^+e^- \to \mu^+\mu^- a$ with $ a \to
\gamma\gamma$ provides intermediate sensitivity to the $g_{aZZ}$ coupling. Our
estimates of the bounds for the $g_{a\gamma\gamma}$, $g_{aZ\gamma}$, and
$g_{aZZ}$ couplings as a function of ALP mass ($M_a$) ranging from 0.1 GeV to
100 GeV provide valuable insights for future experiments aiming to detect ALPs.
We find that $g_{a\gamma\gamma}$ around $1.5\times10^{-4}~\rm GeV^{-1}$ for
$M_a = 0.1-6$ GeV is currently not ruled out by any other experiments. | Kingman Cheung, C. J. Ouseph | 2023-03-29T07:46:30Z | http://arxiv.org/abs/2303.16514v1 | # Axion Like Particle Search at Higgs Factories
###### Abstract
We study the potential of the future Higgs factories, including the ILC, CEPC, and FCC-ee with \(\sqrt{s}=240\)-250 GeV on discovering axion-like particles (ALPs) through various production channels in the leptonic final states, \(e^{+}e^{-}\to f\bar{f}a\), where \(f=e,\mu,\nu\). We show that the \(e^{+}e^{-}\to e^{+}e^{-}a\) with \(a\to\gamma\gamma\) provides the best bounds for the \(g_{a\gamma\gamma}\) and \(g_{aZZ}\) couplings, while \(e^{+}e^{-}\to\nu\bar{\nu}a\), with \(a\to\gamma\gamma\) offers the best bounds for the \(g_{aZZ}\) and \(g_{aZ\gamma}\) couplings. The \(e^{+}e^{-}\to\mu^{+}\mu^{-}a\) with \(a\to\gamma\gamma\) provides intermediate sensitivity to the \(g_{aZZ}\) coupling. Our estimates of the bounds for the \(g_{a\gamma\gamma}\), \(g_{aZ\gamma}\), and \(g_{aZZ}\) couplings as a function of ALP mass (\(M_{a}\)) ranging from 0.1 GeV to 100 GeV provide valuable insights for future experiments aiming to detect ALPs. We find that \(g_{a\gamma\gamma}\) around \(1.5\times 10^{-4}\) GeV\({}^{-1}\) for \(M_{a}=0.1-6\) GeV is currently not ruled out by any other experiments.
Introduction
The strong CP problem in the standard model (SM) is a long-standing problem [1]. The best solution comes by introducing a global \(U(1)_{PQ}\) symmetry, which was spontaneously broken down by a dynamical axion field. The resulting pseudo-Nambu-Goldstone boson is known as the QCD axion [1; 2; 3]. It can also serve as a dark matter candidate [4; 5; 6].
Nonobservation of the neutron electric dipole moment demands the breaking scale of the PQ symmetry to be very high with \(f_{a}>10^{9}\) GeV, implying a tiny mass to the axion and very small couplings to the SM particles. If we do not require the pseudo-Nambu-Goldstone boson to be the solution of the strong CP problem, the mass of the axion is not restricted by the breaking scale \(f_{a}\). Such a hypothetical particle, coined as axion-like particle (ALP), is also a pseudoscalar boson.
However, the ALP remains one of the possible dark matter candidates. The axion mass and couplings to SM particles can extend over many orders of magnitude, which are only constrained by astrophysical and cosmological observations, as well as collider experiments. In this work, we consider the potential sensitivities on the parameter space of the ALP model by searching for such ALPs in the proposed Higgs factories, including the International Linear Collider (ILC) [7], CEPC [8], and FCC-ee [9] with \(\sqrt{s}=240-250\) GeV. We consider the following leptonic production channels \(e^{+}e^{-}\to f\bar{f}a\) with \(f=e,\mu,\nu\). Given the center-of-mass energy is only 250 GeV, we consider the ALP mass in the range of \(0.1-100\) GeV. Typical Feynman diagrams for production can be found in Fig. 1.
We focus on the diphoton decay mode of the ALP, which is shown to be dominant. Thus, we have rather clean final states \(f\bar{f}(\gamma\gamma)\) with \(f=e,\mu,\nu\). The SM background is calculated and found to be small. Finally, we show the sensitive regions of the couplings. The organization of this work is as follows. In the next section, we describe the model and existing constraints. In Sec. III, we show the signal-background analysis. We calculate the sensitivities of the ALP couplings in Sec. IV. We summarize in Sec. V.
Theoretical Setup
### Model
The axion, as a pseudo-Goldstone boson, has derivative couplings to fermions, as well as \(CP\)-odd couplings to the gauge field strengths. Before rotating the \(B\) and \(W^{i}\) fields to the physical \(\gamma,\ Z,\ W^{\pm}\), the interactions of the axion are given by [10; 11; 12]
\[\mathcal{L}=\mathcal{L}_{f}+\mathcal{L}_{g}+\mathcal{L}_{BB}+\mathcal{L}_{WW} \tag{1}\]
where,
\[\mathcal{L}_{f}=-\frac{ia}{f_{a}}\sum_{f}g_{af}\ m_{f}^{diag}\bar{f}\gamma_{5}f\]
\[\mathcal{L}_{g}=-C_{g}\frac{a}{f_{a}}G^{A}_{\mu\nu}\tilde{G}^{\mu\nu,A}\]
\[\mathcal{L}_{BB}=-C_{BB}\frac{a}{f_{a}}B_{\mu\nu}\tilde{B}^{\mu\nu}\]
\[\mathcal{L}_{WW}=-C_{WW}\frac{a}{f_{a}}W^{i}_{\mu\nu}\tilde{W}^{\mu\nu,i}.\]
where \(a\) represents the ALP field, \(f_{a}\) is the ALP decay constant, \(A=1,....8\) is the \(SU(3)\) color index and \(i=1,2,3\) is the \(SU(2)\) index. The \(B,W^{3}\) fields rotated into \(\gamma,Z\) by
\[\begin{pmatrix}W^{3}_{\mu}\\ B_{\mu}\end{pmatrix}=\begin{pmatrix}c_{w}&s_{w}\\ -s_{w}&c_{w}\end{pmatrix}\begin{pmatrix}Z_{\mu}\\ A_{\mu}\end{pmatrix}. \tag{2}\]
where \(c_{w}\),\(s_{w}\) are cosine and sine of the Weinberg angle. The axion interactions with the fermion and the physical gauge bosons are given by
\[\begin{split}\mathcal{L}=-\frac{ia}{f_{a}}\sum_{f}g_{af}m_{f}^{ diag}\bar{f}\gamma_{5}f-C_{g}\frac{a}{f_{a}}G^{A}_{\mu\nu}\tilde{G}^{\mu\nu A }-\frac{a}{f_{a}}\big{[}(C_{BB}c_{w}^{2}+C_{WW}s_{w}^{2})F_{\mu\nu}\tilde{F}_{ \mu\nu}+\\ (C_{BB}s_{w}^{2}+C_{WW}c_{w}^{2})Z_{\mu\nu}\tilde{Z}_{\mu\nu}+2(C_{WW}-C_{ BB})c_{w}s_{w}F_{\mu\nu}\tilde{Z}_{\mu\nu}+C_{WW}W^{+}_{\mu\nu}\tilde{W}^{-\mu\nu} \big{]}\end{split} \tag{3}\]
The dimensionful couplings associated with ALP interactions from 3 is given by;
\[g_{a\gamma\gamma}=\frac{4}{f_{a}}(C_{BB}c_{w}^{2}+C_{WW}s_{w}^{2}), \tag{4}\]
\[g_{aWW}=\frac{4}{f_{a}}C_{WW}, \tag{5}\]
\[g_{aZZ}=\frac{4}{f_{a}}(C_{BB}s_{w}^{2}+C_{WW}c_{w}^{2}), \tag{6}\]
\[g_{aZ\gamma}=\frac{8}{f_{a}}s_{w}c_{w}(C_{WW}-C_{BB})\,. \tag{7}\]
### Existing Constraints on ALPs
The experimental bounds on the couplings of ALP to gluons, photons, weak gauge bosons, and fermions have been thoroughly investigated in numerous sources [13; 14; 15; 16; 17; 18; 19; 20; 21], including their effects at colliders when \(f_{a}\) is approximately at the TeV scale [19; 22]. Moreover, more recent works had constrained the coupling of ALPs to the \(W\pm\) gauge boson can be found in Ref. [23; 24].
* The LEP and the current LHC experiments can probe a significant region of parameter space for ALPs with mass \(M_{a}\geq 5GeV\). The LEP utilized the \(e^{+}e^{-}\rightarrow\gamma a\), \((a\rightarrow\gamma\gamma)\) and \(Z\to a\gamma\) processes [22] to search for ALPs, while ATLAS amd CMS employed the process \(\gamma\gamma\to a\rightarrow\gamma\gamma\) in PbPb collisions at the LHC [25]. In addition, the rare decay of the Higgs boson \(h\to Za,\ (a\rightarrow\gamma\gamma)\) and \(h\to aa\rightarrow(\gamma\gamma)(\gamma\gamma)\) at the LHC [26] had been utilized to explore the ALP-photon coupling \(g_{a\gamma\gamma}\) in relation to the ALP mass \(M_{a}\).
* The ALPs with masses below the MeV scale has been extensively studied in cosmological and astrophysical observations, which have resulted in numerous constraints on ALP couplings, including BBN, CMB, and Supernova 1987A [27; 15]. Furthermore, light ALPs can potentially become the cold dark matter [4; 5; 6], which could lead to their detection through various astrophysical and terrestrial anomalies [28], such as the unexpected X-ray emission line at around 3.5 keV [29] and the excess of electronic recoil events in XENON1T [30]. These results demonstrated the importance of further exploration and investigation into the properties and behavior of ALPs.
* In the mass range of MeV to GeV, ALPs can significantly impact low-energy observables in particle physics. Recent studies in the intensity frontier [31; 32; 24; 33] have explored numerous potential search avenues. Examples include lepton-flavor-violating decays [34], rare meson decays [31; 35; 24; 36], and ALP production in beam dump experiments [37]. Furthermore, this range of ALPs has been proposed as a possible explanation for the muon anomalous magnetic moment [38; 32] and may also provide a feasible solution to the Koto anomaly [39]. These findings highlight the importance of continued research into ALPs and their potential implications in particle physics.
* The search for the process \(e^{+}e^{-}\to\gamma a\) with \(a\to\gamma\gamma\) has recently been conducted by Belle II [40] for the ALP mass ranging between 0.1 GeV and 10 GeV. The data utilized in this search corresponded to an integrated luminosity of \((445\pm 3)pb^{-1}\), and the mass range explored was 0.2 GeV \(<M_{a}<\) 9.7 GeV.
* The process \(e^{+}e^{-}\to e^{+}e^{-}a\) with \(a\to\gamma\gamma\) at ILC has recently been studied in Ref. [41; 42; 43]. Ref. [42] showed that the ILC running at \(\sqrt{s}=250\) GeV or \(\sqrt{s}=500\) GeV can discover ALPs in this range of masses with significantly smaller couplings to the SM than previous experiments, down to \(g_{aBB}=10^{-3}\) TeV\({}^{-1}\). Ref. [41] showed that with more than \(10^{9}\)\(Z\) bosons produced in the Giga-Z mode of the future ILC experiment equipped with the high granular nature of the detector, one can discover of the ALPs coupled to hypercharge with couplings down to nearly \(10^{-5}\) GeV\({}^{-1}\) over the mass range from 0.4 to 50 GeV.
A few proposals of Higgs factories are put forward, including the ILC [7], CEPC [8], and FCC-ee [9], running at center-of-energies at \(\sqrt{s}=240-250\) GeV with the nominal luminosities shown in Table 1. One of the main goals is to carry out the precision study of the Higgs boson couplings. We investigate the potential search for ALPs \(e^{+}e^{-}\) collisions at the Higgs factories. Without loss of generality, we choose \(\sqrt{s}=250\) GeV and a conservative integrated luminosity of 2 ab\({}^{-1}\).
At the Higgs factories, the leptonic processes that we consider are \(e^{+}e^{-}\to f\bar{f}a\) where \(f=e\), \(\mu\), or \(\nu\), followed by \(a\to\gamma\gamma\). This study explores the effects of the coupling \(g_{a\gamma\gamma},\,g_{a\gamma Z},\,g_{aZZ},\,g_{aWW}\) on the production rates of the ALP. Typical contributing Feynman diagrams are shown in Fig. 1. Among the diagrams, there are \(s\)- and \(t\)-channel diagrams with the ALP bremsstrahlung off an internal \(\gamma,Z\), or \(W\) propagator.
\begin{table}
\begin{tabular}{c c c} \(e^{+}e^{-}\) Collider & \(\sqrt{s}\) (GeV) & Integrated Luminosity (fb\({}^{-1}\)) \\ \hline ILC & 250 & 2000 \\ CEPC & 240 & 5600 \\ FCC-ee & 250 & 5000 \\ \end{tabular}
\end{table}
Table 1: A few proposals of \(e^{+}e^{-}\) colliders running as a Higgs factory, at which the center-of-mass energy and integrated luminosity are shown.
## III Signal versus background
We use MadGraph5aMC@NLO [44; 45] to generate events for the production of ALPs at \(e^{+}e^{-}\) collisions. We consider the following channels for detecting the ALP signal:
* \(e^{+}e^{-}\rightarrow~{}e^{+}e^{-}a\) with \(a\rightarrow\gamma\gamma\) To obtain the production cross-sections of the ALP with mass from \(M_{a}=0.1\) GeV to 100 GeV, we apply the following initial cuts on the transverse momentum \(p_{T}^{e}\) and rapidity \(|\eta^{e}|\) of the electrons in the final state, as well as the transverse momentum \(p_{T}^{\gamma}\) and rapidity \(|\eta^{\gamma}|\) of the photons in final state.
* \(p_{T\,\rm min}^{e}=10\) GeV
* \(|\eta_{\rm max}^{e}|=1.83~{}~{}(|\cos\theta_{e}|<0.95)\)
* \(p_{T\,\rm min}^{\gamma}=10\) GeV
* \(|\eta_{\rm max}^{\gamma}|=2.5\)
* \(e^{+}e^{-}\rightarrow~{}\mu^{+}\mu^{-}a\) with \(a\rightarrow\gamma\gamma\) The final state consisting of muons (\(\mu\pm\)) and a pair of photons from the ALP decay
Figure 1: Typical Feynman diagrams for production of axion-like particles \(a\) via the process \(e^{+}e^{-}\to f\bar{f}a\) at \(e^{+}e^{-}\) collisions, where \(f=e,\mu,\nu\).
are selected using the same cuts as the electron case * \(p_{T\,{\rm min}}^{\mu}=10\) GeV * \(|\eta_{\rm max}^{\mu}|=1.83\quad(|\cos\theta_{\mu}|<0.95)\) * \(p_{T\,{\rm min}}^{\gamma}=10\) GeV * \(|\eta_{\rm max}^{\gamma}|=2.5\)
* \(e+e^{-}\to\nu\bar{\nu}a\) with \(a\to\gamma\gamma\) Here the ALP is produced along with neutrinos in the final states are selected using the following cuts on the rapidity and transverse momentum of the photon and missing transverse energy of \(E_{T}\).
* \(/E_{T}^{\rm min}=20\) GeV
* \(p_{T\,{\rm min}}^{\gamma}=10\) GeV
* \(|\eta_{\rm max}^{\gamma}|=2.5\)
The corresponding irreducible background is also subject to the same cuts as discussed above. We use \(C_{WW}=2\), \(C_{BB}=1\), and \(f_{a}=1\) TeV in calculating the ALP cross sections, so the corresponding coupling strengths \(g_{a\gamma\gamma}\), \(g_{aZ\gamma}\), \(g_{aZZ}\), and \(g_{aWW}\) are obtained using Eqs. (4) - (7) and listed in Table 2. Note that we have chosen different values for \(C_{WW}\) and \(C_{BB}\), otherwise \(g_{aZ\gamma}\) would vanish.
We generated \(10^{5}\) events using MadGraph5aMC@NLO. The scattering cross-section associated with the process \(e^{+}e^{-}\to f\bar{f}a\) is presented in Fig.2, where \(f=e,\mu,\nu\). We have
computed the cross-sections using the coupling strengths listed in Table 2. Among the three signal processes, \(e^{+}e^{-}\to\nu\bar{\nu}a\) has the largest cross-sections, as it consists of three flavors of neutrinos. On the other hand, \(e^{+}e^{-}\to\mu^{+}\mu^{-}a\) gives the smallest cross sections. For \(M_{a}\) ranging from 0.1 GeV to 10 GeV, the cross-section curves remain flat. As \(M_{a}\) increases from 10 GeV, the cross sections gradually decrease, because the final state phase space becomes limited with increasing ALP mass. This pattern is consistent across all three channels.
To suppress the irreducible background, we apply a cut on the transverse momentum of the photon pair. In Fig. 3, we compare the transverse momentum of the photon pair for \(M_{a}=0.1-100\) GeV with the corresponding background. A selection cut of \(p_{T_{\gamma\gamma}}>50\) GeV can suppress the SM background.
Figure 2: The ALP signal and SM cross-sections at the Higgs factory with \(\sqrt{s}=250\) GeV. Signal cross-sections are calculated with the coupling strengths listed in Table. 2: \(g_{a\gamma\gamma}=4.88\times 10^{-3}\) GeV\({}^{-1}\), \(g_{aZ\gamma}=1.38\times 10^{-3}\) GeV\({}^{-1}\), \(g_{aZZ}=7.11\times 10^{-3}\) GeV\({}^{-1}\), and \(g_{aWW}=8\times 10^{-3}\) GeV\({}^{-1}\)).
## IV Sensitivity on the ALP model
The number of signal events \(N_{T}\) at \(e^{+}e^{-}\) colliders with \(\sqrt{s}=250\) GeV is estimated as
\[N_{T}=\sigma(e^{+}e^{-}\to\ f\bar{f}\ a)\times B(a\to\ \gamma\gamma)\times \frac{N(p_{T_{\gamma\gamma}}>50\ \mbox{GeV})}{N_{\rm sim}}\times{\cal L}\;, \tag{8}\]
Figure 3: Transverse momentum \(p_{T_{\gamma\gamma}}\) distributions of the photon pair for the signal processes with \(M_{a}=0.1-100\) GeV and the corresponding SM background at \(e^{+}e^{-}\) colliders with \(\sqrt{s}=250\) GeV.
where \(\sigma(e^{+}e^{-}\to~{}f\bar{f}~{}a)\) is the ALP production cross-section, \(B(a\to~{}\gamma\gamma)\) is the branching ratio of the ALP to a pair of photons (see Appendix), \(N(p_{T_{\gamma\gamma}}>50\,\)GeV) is the number of events surviving the \(p_{T_{\gamma\gamma}}>50\) GeV cut, and \(N_{\rm sim}\) is the total number of events simulated. In this study, we generated \(N_{\rm sim}=10^{5}\) events using MadGraph5aMC@NLO and \(\cal L\) is the integrated luminosity, which we conservatively choose \({\cal L}=2~{}ab^{-1}\). Similarly, the number of background events \(N_{T}^{\rm SM}\) is estimated as
\[N_{T}^{\rm SM}=\sigma(e^{+}e^{-}\to~{}f\bar{f}~{}\gamma\gamma)\times\frac{N(p_{ T_{\gamma\gamma}}>50~{}{\rm GeV})}{N_{sim}}\times{\cal L}\;. \tag{9}\]
The number of signal events \(N_{T}\) is proportional to the square of the ALP coupling strength \(g\). In this study, we consider all possible ALP interactions encoded in Eq. (3), from all possible channels of ALP production listed in Fig. 1. The bound on the ALP coupling as a function of ALP mass can be obtained by requiring the significance \(Z>2\). The significance \(Z\) is defined as [46]:
\[Z=\sqrt{2\,.\Big{[}(s+b)\,\ln\Big{(}\frac{(s+b)(b+\sigma_{b}^{2})}{b^{2}+(s+b) \sigma_{b}^{2}}\Big{)}-\frac{b^{2}}{\sigma_{b}^{2}}\,ln\Big{(}1+\frac{\sigma_{ b}^{2}s}{b(b+\sigma_{b}^{2})}\Big{)}\Big{]}}\,, \tag{10}\]
where the numbers of signal and background events are represented by \(s\) and \(b\), respectively. The systematic uncertainty associated with the SM background \(b\) is denoted by \(\sigma_{b}\). A significance value of \(Z=2\) is considered, which corresponds to 95% confidence level (C.L.). In the following subsections, we discuss the sensitivity of the ALP couplings from ALP production with three different leptonic final states at the Higgs factory.
### \(e^{+}e^{-}\to e^{+}e^{-}a,\,a\to\gamma\gamma\)
The process of ALP production, in conjunction with a pair of electrons mediated by \(\gamma\) and \(Z\) bosons, is illustrated in Fig.1. In this process, the effective couplings of the ALP to \(ZZ\), \(\gamma\gamma\), and \(\gamma Z\) are associated with the dimensional couplings \(g_{aZZ}\), \(g_{a\gamma\gamma}\), and \(g_{aZ\gamma}\), respectively. The numbers of signal and background events are estimated using Eqs. (8) and (9), and are shown in the left panel of Fig. 4.
The combination of production via photon fusion followed by the ALP decay into diphoton yields the highest number of signal events for the specified value of \(g_{a\gamma\gamma}\) coupling listed in Table. 2. The number of ALP events from the ALP-\(ZZ\) vertex is intermediate, while the ALP-\(Z\gamma\) vertex gives the least number of events, even the SM event rate is higher than the
latter one. The kinks in the number of signal event curves arise from the branching ratio of the ALP into diphoton \(a\to\gamma\gamma\).
We then estimate the sensitivity in the ALP couplings versus the ALP mass, especially \(g_{a\gamma\gamma}\) and \(g_{aZZ}\) using Eq. (10). We account for the systematic uncertainty associated with the background estimation by including assuming an uncertainty of \(\sigma_{b}=10\%\). The bounds on the ALP couplings \(g_{a\gamma\gamma}\) (blue) and \(g_{aZZ}\) (orange) as a function of the ALP mass \(M_{a}\) are shown in the right panel of Fig. 4. It is easy to see that the sensitivity of the \(g_{a\gamma\gamma}\) coupling is a few times better than the \(g_{aZZ}\) coupling. At lighter ALP mass of \(M_{a}=0.1\) GeV, the sensitivity of \(g_{a\gamma\gamma}\) can reach down to \(\sim 1.5\times 10^{-4}\) GeV\({}^{-1}\), while \(g_{aZZ}\) reaches down to \(\sim 4.3\times 10^{-4}\) GeV\({}^{-1}\). The sensitivity curves stay more or less flat until \(M_{a}=30\) GeV with some irregularities due to the branching ratio into diphoton. As \(M_{a}\) increases beyond 30 GeV, the sensitivity is largely worsened due to smaller phase space for the production of heavier ALPs. At \(M_{a}=100\) GeV, the bounds on \(g_{a\gamma\gamma}\) and \(g_{aZZ}\) are reduced to approximately \(2.5\times 10^{-4}\) GeV\({}^{-1}\) and \(7.5\times 10^{-4}\) GeV\({}^{-1}\), respectively.
### \(e^{+}e^{-}\to\mu^{+}\mu^{-}a,~{}~{}a\to\gamma\gamma\)
Here we consider the associated production of the ALP with a \(\mu^{+}\mu^{-}\) pair. This process only arises from \(s\)-channel diagrams listed in Fig. 1. The left panel of Fig. 5 shows the number of ALP events arising from various ALP vertices. The highest number of ALP events comes from ALP production associated with the ALP-\(ZZ\) vertex. The numbers of ALP events produced via the ALP-\(Z\gamma\) and ALP-\(\gamma\gamma\) vertices are lower than that of the SM. The right panel of Fig.5 shows the sensitivty reach of the \(g_{aZZ}\) coupling as a function of ALP mass \(M_{a}\). The effect of the diphoton branching ratio also reflects in the sensitivity curves. At \(M_{a}=0.1\) GeV, \(g_{aZZ}\) can be probed down to \(\sim 3.4\times 10^{-4}\) GeV\({}^{-1}\). The sensitivity of \(g_{aZZ}\) weakens with the increment of the ALP mass, especially for \(M_{a}\) above 30 GeV.
Comparing the bounds of \(g_{aZZ}\) obtained in the channels \(e^{+}e^{-}\to\mu^{+}\mu^{-}a\) (Fig. 5) and \(e^{+}e^{-}\to e^{+}e^{-}a\) (Fig. 4), we can see that \(g_{aZZ}\) from the muon channel performs better than the electron channel over the entire ALP mass range. This is simply because the background in the muon channel is only a fraction of the electron channel.
### \(e^{+}e^{-}\to\nu\bar{\nu}a,~{}a\to\gamma\gamma\)
As already shown in Fig. 2, the channel \(e^{+}e^{-}\to\nu\bar{\nu}a\) with \(a\to\gamma\gamma\) has the largest cross-sections compared to the other two processes. This process also presents an opportunity to investigate the ALP-\(WW\) vertex. In addition to the ALP-\(WW\) vertex, the ALP-\(ZZ\) and ALP-\(Z\gamma\) vertices also make contributions, which are depicted in Fig. 1.
The number of ALP events from the ALP-\(ZZ\) vertex is higher than that from the other two vertices. The ALP production rate from the ALP-\(WW\) vertex is the lowest and is even lower than that of the SM.
The bounds on \(g_{aZZ}\) and \(g_{aZ\gamma}\) couplings are shown in the right panel of Fig.6. In this case, the \(g_{aZ\gamma}\) coupling has a better bound compared to the \(g_{aZZ}\) coupling. At \(M_{a}\)=0.1 GeV, the \(g_{aZ\gamma}\) coupling can reach down to \(\sim 10^{-4}\) GeV\({}^{-1}\), while the \(g_{aZZ}\) coupling reaches down to \(1.8\times 10^{-4}~{}\) GeV\({}^{-1}\). Similar to previous cases, the sensitivity of the couplings weakens as the ALP mass \(M_{a}\) increases.
When comparing the bounds of \(g_{aZZ}\) coupling obtained from all different channels the best sensitivity comes from \(e^{+}e^{-}\to\nu\bar{\nu}a\,,~{}~{}a\to\gamma\gamma\). At \(M_{a}\)=0.1 GeV, the \(g_{aZZ}\) coupling
reaches down to \(1.8\times 10^{-4}\)GeV\({}^{-1}\). The \(e^{+}e^{-}\to e^{+}e^{-}a,\quad a\to\gamma\gamma\) channel offers the least sensitivity (for \(M_{a}\)=0.1 GeV \(g_{aZZ}\) coupling only reaches down to \(4.3\times 10^{-4}\)GeV\({}^{-1}\)). The limit from the \(e^{+}e^{-}\to\mu^{+}\mu^{-}a,\quad\ a\to\gamma\gamma\) channel is intermediate (for \(M_{a}\)=0.1 GeV the \(g_{aZZ}\) coupling reaches down to \(3.4\times 10^{-4}\)GeV\({}^{-1}\)). This trend is visible across the entire ALP mass range from \(M_{a}=0.1\) GeV to 100 GeV.
## V Conclusions
In this study, we have explored the sensitivity potential of the future Higgs factories, including ILC, CEPC, and FCC-ee, on probing dimensionful coupling constants \(g_{a\gamma\gamma}\), \(g_{aZ\gamma}\), \(g_{aWW}\), and \(g_{aZZ}\) of the axion-like particle, via the processes \(e^{+}e^{-}\to f\bar{f}a\) (\(f=e,\mu,\nu\)) followed by \(a\to\gamma\gamma\). We used a center-of-mass energy \(\sqrt{s}=250\) GeV with an integrated
Figure 7: Summary plot of the sensitivity of \(g_{a\gamma\gamma}\) that we can achieve at the Higgs factory \(\sqrt{s}=250\) GeV with an integrated luminosity 2 ab\({}^{-1}\), and compared with other existing constraints. Existing constraints in the figure include PrimEx [47], BES III [48], Belle II[40], LEP [22], OPAL [49], CMS [50], ATLAS [51] and LHC [49] (extracted from the GitHub page [52]).
luminosity 2 ab\({}^{-1}\).
Our results have shown that the channel \(e^{+}e^{-}\to e^{+}e^{-}a,\ \ a\to\gamma\gamma\) provides the best bound for the \(g_{a\gamma\gamma}\) coupling, while the process \(e^{+}e^{-}\to\nu\bar{\nu}a,\ \ a\to\gamma\gamma\) offers the best bound for the \(g_{aZZ}\) and \(g_{aZ\gamma}\) couplings.
Without loss of generality, we have used \(C_{WW}=2\) and \(C_{BB}=1\) such that \(g_{a\gamma\gamma}\), \(g_{aZ\gamma}\), \(g_{aWW}\), and \(g_{aZZ}\) are related to one another shown in Eqs. (4) - (7), and they are all nonzero. We can easily extend the analysis to independent coupling strengths.
Finally, we show in Fig. 7 the summary plot of the sensitivity of \(g_{a\gamma\gamma}\) that we can achieve at the Higgs factories, and compared with other existing constraints. The sensitivity can improve down to about \(1.5\times 10^{-4}\ \mathrm{GeV}^{-1}\) over the mass range of \(M_{a}=0.1-6\ \mathrm{GeV}\), as well as a small corner at \(M_{a}\simeq 70-100\ \mathrm{GeV}\).
Our estimates of the bounds for the \(g_{a\gamma\gamma}\), \(g_{aZ\gamma}\), and \(g_{aZZ}\) couplings as a function of ALP mass (\(M_{a}\)) ranging from 0.1 GeV to 100 GeV provide valuable insights for future experiments aiming to detect ALPs.
## Appendix A Partial Decay Widths and Branching Ratios of the ALP
The two-body partial decay widths of the ALP to photons and fermions are given below. The branching ratios are evaluated with \(C_{BB}=1\), \(C_{WW}=2\), \(c_{a\phi}=1\) and \(f_{a}=1000\ GeV\). Here \(M_{l}\) and \(M_{q}\) are the masses of charged leptons and quarks.
\[\Gamma(a\to\gamma\gamma) = \frac{M_{a}^{6}(C_{BB}c_{w}^{2}+C_{WW}s_{w}^{2})^{2}}{4f_{a}^{2} \pi|M_{a}^{3}|}\, \tag{10}\] \[\Gamma(a\to l\bar{l}) = \frac{c_{a\phi}^{2}M_{a}^{2}\sqrt{M_{a}^{2}-4M_{a}^{2}M_{l}^{2}} \ vev^{2}\ y_{l}^{2}}{16\pi f_{a}^{2}|M_{a}^{3}|}\] (11) \[\Gamma(a\to q\bar{q}) = \frac{3\ c_{a\phi}^{2}M_{a}^{2}\sqrt{M_{a}^{2}-4M_{a}^{2}M_{q}^{2 }}\ vev^{2}\ y_{q}^{2}}{16\pi f_{a}^{2}|M_{a}^{3}|} \tag{12}\]
## Acknowledgement
Special thanks to Zeren Simon Wang and Nguyen Tran Quang Thong for an enlightening discussion. The work was supported in part by NSTC under the grant number MOST-110
2112-M-007-017-MY3.
|
2310.18401 | Alleviating the present tension between T2K and NO$ν$A with neutrino
New Physics at source | Since neutrino oscillation was observed, several experiments have been built
to measure its parameters. NO$\nu$A and T2K are two long-baseline experiments
dedicated to measuring mainly the mixing angle $\theta_{23}$, the charge-parity
conjugation phase $\delta_{\rm CP}$, and the mass ordering. However, there is a
tension in current data. The T2K allowed region is almost excluded by the
NO$\nu$A result at the $90\%$ confidence level. We propose a non-standard
interaction (NSI) in neutrino production to relieve this tension. The NSI is
computed through quantum field theory (QFT) formalism, where we derive
perturbative analytical formulae considering NSI in the pion decay. Within this
new approach, we can alleviate NO$\nu$A and T2K tension for a NSI complex
parameters of order $10^{-3}$. We show the new phase has a degeneracy to the
Dirac CP phase of the form $\delta_{\rm CP} \pm \phi= 1.5\pi$ being a possible
source of violation of charge-parity symmetry. | Adriano Cherchiglia, Pedro Pasquini, O. L. G. Peres, F. F. Rodrigues, R. R. Rossi, E. S. Souza | 2023-10-27T18:00:05Z | http://arxiv.org/abs/2310.18401v1 | # Alleviating the present tension between T2K and NO\(\nu\)A with neutrino New Physics at source
###### Abstract
Since neutrino oscillation was observed, several experiments have been built to measure its parameters. NO\(\nu\)A and T2K are two long-baseline experiments dedicated to measuring mainly the mixing angle \(\theta_{23}\), the charge-parity conjugation phase \(\delta_{\rm CP}\), and the mass ordering. However, there is a tension in current data. The T2K allowed region is almost excluded by the NO\(\nu\)A result at the \(90\%\) confidence level. We propose a non-standard interaction (NSI) in neutrino production to relieve this tension. The NSI is computed through quantum field theory (QFT) formalism, where we derive perturbative analytical formulae considering NSI in the pion decay. Within this new approach, we can alleviate NO\(\nu\)A and T2K tension for a NSI complex parameters of order \(10^{-3}\). We show the new phase has a degeneracy to the Dirac CP phase of the form \(\delta_{\rm CP}\pm\phi=1.5\pi\) being a possible source of violation of charge-parity symmetry.
pacs: 14.60.Pq,14.60.St,13.15.+g _Introduction.--_ The neutrino oscillation phenomenon provides evidence of physics beyond the Standard Model. Since its discovery, several experiments have measured neutrino oscillation parameters [1; 2; 3]. One not yet measured is the charge-parity (CP) conjugation phase \(\delta_{\rm CP}\) that quantifies the asymmetry between particle and anti-particle. The two long-baseline accelerator experiments, NO\(\nu\)A and T2K, were designed to measure this parameter.
NO\(\nu\)A and T2K have recently released new data, revealing a tension within the allowed parameter regions [4; 5]. In the standard three-neutrino oscillation scenario, the preferred region at a \(90\%\) confidence level for T2K data is nearly excluded by the NO\(\nu\)A data in the \(\sin^{2}\theta_{23}\) vs. \(\delta_{\rm CP}\) parameter space. These results could indicate physics beyond the Standard Model. Numerous studies have been dedicated to explaining this tension, exploring various new physics scenarios [6; 7; 8; 9; 10; 11; 12; 13].
We propose a novel approach that includes non-standard interactions in neutrino production specifically via pion decay. By adopting an effective field theory approach [14], we can straightforwardly modify the rate of pion decay to include these non-standard interactions during production. We have derived for the first time a perturbative analytical expression for neutrino oscillation in matter considering this new interaction at the source.
The new coupling constant may be complex, which introduces a new charge-parity violation phase. We investigate the interplay between the two phases: one originating from Pontecorvo-Maki-Nakagawa-Sakata (PMNS) neutrino mixing matrix [15; 16] and the other from the effects of the new interaction.
In this Letter, we demonstrate that the tension is alleviated even if only one new complex parameter can be non-zero. We have determined that the absolute value of the new interaction parameter is of order \(10^{-3}\).
_New physics in neutrino sector from a EFT perspective.--_ We consider the non-standard interactions on neutrino production essentially following the formalism introduced in [14]. The new physics is described in the Wilson coefficients of four-fermion effective interactions between neutrinos (\(\nu_{\beta}\)), charged leptons (\(l_{\alpha}\)) and quarks (\(q_{i}\)), \(\sim\overline{q}_{i}\Gamma^{ij}_{A}q_{j}\bar{\ell}_{\alpha}\Gamma^{\prime \alpha\beta}_{A}P_{L}\nu_{\beta}\), where \(i,j=u,d,c,s,\ldots\) and \(\alpha,\beta=e,\mu,\tau\). The index \(A\) corresponds to the Lorentz indices of the interaction. All possible combinations of vertex structure are encoded in \(\Gamma,\Gamma^{\prime}\). Typically, neutrinos are produced via pion decay, and only vector, axial, and pseudo-scalar couplings with \(q_{i}=u\), \(q_{j}=d\) contribute. The most interesting case corresponds to the latter, which is given by [14]
\[\mathcal{L}_{\rm P}\supset\sqrt{2}\,G_{F}V^{\rm CKM}_{ud}\epsilon_{\alpha \beta}\left(\bar{u}\gamma^{5}d\right)\left(\bar{\ell}_{\alpha}P_{L}\nu_{\beta }\right)+{\rm h.c.}\, \tag{1}\]
where \(V^{\rm CKM}\) is the Cabibbo-Kobayashi-Maskawa (CKM) matrix [18]\(G_{F}\) is the Fermi constant. Furthermore, \(\epsilon_{\alpha\beta}\) are complex Wilson coefficients describing the new interaction's magnitude relative to the Weak interaction. The new interaction in Eq. (1) creates another vertex for neutrino production beyond the traditional one [19]. With the new vertex, the total matrix element is a combination of the standard model ampli
Figure 1: Quantum field theory computation of neutrino oscillation probability [14; 17].
tude (\(\mathcal{A}_{L}^{\rm S}\)) and the new physics amplitude (\(\mathcal{A}_{P}^{\rm S}\)),
\[\mathcal{M}_{\alpha k}^{S}=U_{\alpha k}^{*}\mathcal{A}_{L}^{S}+\left[\epsilon\,U \right]_{\alpha k}^{*}\mathcal{A}_{P}^{\rm S}. \tag{2}\]
The upper index, \(S\), for source, indicates that the process occurs only in production, since typically there is no pseudo-scalar detection process [14]. Therefore, there are no relevant effects on the detection for the new interactions. It should be emphasized that the neutrino mass eigenstates are exclusively encoded in the PMNS mixing matrices [15; 16], so that the amplitudes \(A_{L/P}^{\rm S}\) depend solely on the neutrino flavor. Notice that off-diagonal terms of \(\epsilon_{\alpha\beta}\) violate lepton flavor number.
_Neutrino event rate in the QFT formalism.--_ The event rate is the physical observable in neutrino oscillation experiments. In the formalism of Quantum Field Theory (QFT), neutrino production, propagation, and detection are considered single processes. Therefore, the neutrino oscillation is quantified by a single tree diagram, as illustrated in Figure (1) by the decay \(\pi^{+}\to\mu^{+}+\nu\) (production) followed by the detection \(\nu+n\to p+e^{-}\). The time direction is from bottom to top. In the production and detection processes, the initial states are the pion and the neutron. The detected particles (e.g., charged leptons and protons) are regarded as final states [19]. The neutrino participates in the process as an intermediate state, where the uncertainties of the initial state result in the superposition of neutrino massive states [20].
In this formalism, the neutrino event rate, including NSI in production is [14]
\[R_{\alpha\beta}^{\rm NSI}= \kappa\!\sum_{kj}\!e^{-i\Delta_{kj}}\mathcal{U}_{\beta k}\mathcal{ U}_{\beta j}^{*}\] \[\times\int d\Pi_{S}\mathcal{M}_{\alpha k}^{S}\overline{\mathcal{M }}_{\alpha j}^{S}\times\int d\Pi_{D}|\mathcal{A}_{L}^{D}|^{2}\, \tag{3}\]
where \(\alpha\) and \(\beta\) denote produced and detected flavor states, respectively, \(\kappa\) is a constant that includes the kinematical factors and target size, \(\Delta_{kj}\equiv\frac{\Delta m_{kj}^{2}L}{2E_{\nu}}\), with \(E_{\nu}\) being the neutrino energy, \(L\) the source-detector distance, and \(\Delta m_{kj}^{2}\equiv m_{k}^{2}-m_{j}^{2}\) the neutrino mass squared difference and the amplitude \(\mathcal{M}_{\alpha k}^{S}\) is given in Eq. (2). The integrals are over the phase space elements for source (\(S\)) and detection (\(D\)). We denote by \(\mathcal{U}\) the PMNS mixing matrix [15; 16] in constant matter [21; 22].
The events rate Eq.(3) is associated to the oscillation probability by the definition: \(P_{\alpha\beta}^{\rm NSI}\equiv R_{\alpha\beta}^{\rm NSI}/\phi_{\alpha}^{\rm SM }\sigma_{\beta}^{\rm SM}\), corresponding to the transition \(\nu_{\alpha}\to\nu_{\beta}\). It is conveniently written as
\[P_{\alpha\beta}^{\rm NSI} =\!\sum_{kj}e^{-i\Delta_{kj}}\!\left[(\mathbb{1}-p_{\alpha} \epsilon)\mathcal{U}\right]_{\alpha k}^{*}\!\left[(\mathbb{1}-p_{\alpha} \epsilon)\mathcal{U}\right]_{\alpha j}\!\mathcal{U}_{\beta k}\mathcal{U}_{ \beta j}^{*}\, \tag{4}\]
where \(p_{\alpha}=m_{\pi}/(m_{\alpha}(m_{u}+m_{d}))\), e.g. \(p_{\mu}\sim 27\) and \(p_{e}\sim 5500\) represents a chiral enhancement compared with the standard model rate [23].
In the end, the effect of NSI consists of substituting the matrix \(\mathcal{U}_{\alpha i}\) by \([(\mathbb{1}-p_{\alpha}\epsilon)\mathcal{U}]_{\alpha i}\). Although we have named Eq. (4) as the probability for the sake of resemblance to the traditional form, the presence of NSI makes the expression effectively unitarity-violating.
In order to analyze the impact of individual NSI parameters on the oscillation probability, we consider two scenarios corresponding to a new source for muon or electron neutrinos. In the EFT formalism, they are implemented by allowing only one non-zero Wilson coefficient at a time, \(\epsilon_{\mu e}\) or \(\epsilon_{e\mu}\) respectively. For the experimental analyses of interest, the parameter \(\epsilon_{\mu e}\) will modify the signal, and \(\epsilon_{e\mu}\) will affect the background. In the following, we will discuss the \(\epsilon_{\mu e}\) scenario to exemplify the perturbative formalism. Because the initial state in pion decay is muonic neutrino, we need to calculate the probability \(P_{\mu\beta}\). We write for the _first time_ an analytical formula for Eq. (4) in terms of the evolution operator \(S^{\rm OSC}\equiv e^{-iHt}\) for the neutrino Hamiltonian, defined in a standard oscillation scenario. Therefore
\[P_{\mu\beta}^{\rm NSI}=\left|S_{\beta\mu}^{\rm OSC}-p_{\mu} \epsilon_{\mu e}^{*}S_{\beta e}^{\rm OSC}\right|^{2}, \tag{5}\]
where the complex coefficient is explicitly \(\epsilon_{\mu e}\equiv|\epsilon_{\mu e}|e^{i\phi_{\mu e}}\). The advantage of writing the probability above is that there is in the literature the analytical expression for \(S^{\rm OSC}\) with matter effects [24]. It can also be straightforwardly generalized to other NSI scenarios, and other conversion/survival rates.
The most important equation of this paper is the \(\nu_{\mu}\to\nu_{e}\) probability with NSI, using an analytical expression _in matter_. We have derived it by employing a perturbative approach [24], where the leading terms are given by
\[P_{\mu e}^{\rm NSI} =4\frac{s_{13}^{2}s_{23}^{2}}{(1-r_{a})^{2}}\sin^{2}\frac{(1-r_{ a})\Delta L}{2}+\frac{8J_{r}r_{\Delta}}{r_{a}(1-r_{a})}\cos\left(\delta_{\rm CP}+ \frac{\Delta L}{2}\right)\sin\frac{r_{a}\Delta L}{2}\sin\frac{(1-r_{a})\Delta L}{2}\] \[+p_{\mu}^{2}|\epsilon_{\mu e}|^{2}+4p_{\mu}|\epsilon_{\mu e}| \frac{s_{13}s_{23}}{1-r_{a}}\sin\left(\frac{(1-r_{a})\Delta L}{2}\right)\sin \left(\delta_{\rm CP}-\phi_{\mu e}+\frac{(1-r_{a})\Delta L}{2}\right)+\mathcal{ O}(r_{\Delta},s_{13}^{2}). \tag{6}\]
From the phenomenological nature of the parameters \(r_{\Delta}\equiv\Delta m_{21}^{2}/\Delta m_{31}^{2}\simeq\zeta\) and \(\sin\theta_{13}\simeq\sqrt{\zeta}\), with \(\zeta\sim\mathcal{O}(10^{-2})\). We also define \(\Delta=\Delta m_{31}^{2}/2E_{\nu}\), \(L\) is the distance between the source and detector, \(r_{a}=a/\Delta\) with \(a=\sqrt{2}G_{F}N_{e}\) being the matter potential and the Jarskolg factor [25]\(J_{r}=c_{12}s_{12}c_{23}s_{23}s_{13}\) in shorthand notation \(s_{ij}=\sin\theta_{ij}\) and \(c_{ij}=\cos\theta_{ij}\). The probability for the antineutrino retains the form of Eq. (6) with the replacements
\(\delta_{\rm CP}\rightarrow-\delta_{\rm CP}\), \(\phi_{\mu e}\rightarrow-\phi_{\mu e}\) and \(a\rightarrow-a\).
The analytical formulae are very useful to identify sources of CP violation. In the standard oscillation scenario, we recall that the survival probability (\(P_{\alpha\alpha}=|S_{\alpha\alpha}^{\rm OSC}|^{2}\) for neutrinos of flavor \(\alpha\)) is a CP-conserving quantity. Thus, CP-violating effects can only come from processes involving the conversion of flavor between neutrinos (given by \(P_{\beta\alpha}=|S_{\alpha\beta}^{\rm OSC}|^{2}\) with \(\beta\neq\alpha\)). In the presence of NSI, this reasoning does not hold, as can be easily checked by considering \(\beta=\mu\) in Eq. (5). The case with \(\beta=e\) is even more instructive. First, it follows directly from Eq. (5) that terms quadratic dependent on \(|\epsilon_{\mu e}|\) will not depend on \(\delta_{\rm CP}\). Secondly, the leading order terms given in Eq. (6) show that the presence of NSI induces a CP-violation in terms of the difference of phases \((\delta_{\rm CP}-\phi_{\mu e})\). Since the ratio between the standard term in the first line of Eq. (6) to the last term is of order \(\zeta\), for \(p_{\mu}|\epsilon_{\mu e}|\sim 27|\epsilon_{\mu e}|>\zeta\sim{\cal O}(10^{-2})\), the NSI term may dominate, implying that the experiment may be more sensible to the difference \((\delta_{\rm CP}-\phi_{\mu e})\) rather than the standard CP phase itself. We will show this tendency when presenting our numerical results.
Finally, the perturbative formula is in good agreement with the exact one. Indeed, most of the energy range of the experiments discussed here exhibits an error of less than one percent [19], including the region of interest for the NO\(\nu\)A and T2K experiments.
_Experimental and simulation details.--_ We analyze the effects of NSI in neutrino production by pion decay through two long-baseline experiments: NO\(\nu\)A (NuMI Off-axis \(\nu_{e}\) Appearance) and T2K (Tokai-to-Kamioka).
The NO\(\nu\)A experiment [26; 27; 28] measures muonic neutrino disappearance and electronic neutrino appearance. Its beam is located in the Fermilab laboratory in United States and it travels 810 Km to the detector in Minnesota. Neutrinos go through a matter density of \(\rho_{\rm NO\nu A}=2.84\) g/cm\({}^{3}\). We adopt the configuration of \(13.6\times 10^{20}\) protons on target (POT) for neutrinos and \(12.5\times 10^{20}\) POT for antineutrinos. The mass of the target detector is 14 kt and the neutrino energy range is from 1 up to 5 GeV, with energy spectra peaked at 2.1 GeV.
The T2K experiment [29; 30; 31] also measures muonic neutrino disappearance and electronic neutrino appearance. The beam is produced at J-PARC lab in Japan and travels 295 Km to the Super-Kamiokande detector. The matter density in this experiment is \(\rho_{\rm T2K}=2.6\) g/cm\({}^{3}\). The T2K flux has \(14.7\times 10^{20}\) POT for neutrino mode and \(16.4\times 10^{20}\) POT for antineutrino mode. The detector has a target mass of 22.5 kt, and the neutrino energy range is from 0.1 up to 1.25 GeV, with energy spectra peaked at 0.6 GeV.
We use GLoBES [32; 33] to simulate the number of detected events, according to the Eq. (3) and to perform the statistical analysis. We fix the solar parameters to their best-fit values [34]\(\Delta m^{2}_{21}=7.53\times 10^{-5}\)eV\({}^{2}\), and \(\sin^{2}\theta_{12}=0.307\), minimizing the \(\chi^{2}\) function over all the other relevant parameters. We put a Gaussian prior on the reactor angle \(\sin^{2}2\theta_{13}=0.083\pm 0.0031\) because it is well measured by other experiments [35; 36; 37]. We then present in the following sections, a quantitative analysis of our model, and the allowed region for oscillation and NSI parameters, for NO\(\nu\)A and T2K individually as well as combined.
_Alleviating the T2K and NO\(\nu\)A tension.--_ The NSI changes the neutrino oscillation probability, as seen in Eq. (6). In particular, it modifies the dependence on CP-violation parameters. In the standard oscillation scenario, a common way to illustrate the impact of the still unknown \(\delta_{\rm CP}\) parameter is to consider the bi-probability idea [38], the plane antineutrino - neutrino probability. We adopt the same idea here, but for the NSI scenario.
In Figure 2 we illustrate the influence of the complex NSI, by showing the bi-probability plot for the conversion \(\nu_{\mu}\rightarrow\nu_{e}\) for neutrinos by the one for antineutrinos. The ellipses are generating varying the value of the CP phase, with the remaining parameters being the combined best-fit values for NOvA and T2K. We consider the two possible mass ordering, the so-called normal ordering (NO) and inverted ordering (IO) [39]. In the left (right) panel we use the \(L\) and \(E_{\nu}\) typical parameters for the NO\(\nu\)A (T2K) experiment. We also show as dots the best fit value for \(\delta_{\rm CP}\). In the standard oscillation scenario, the best-fit parameters are \(\sin^{2}\theta_{23}=0.56\), \(\Delta m^{2}_{31}=2.49(-2.38)\times\ 10^{-3}\) eV\({}^{2}\), and the CP phase \(\delta_{\rm CP}/\pi=1.22(1.50)\), for NO (IO). In the presence of NSI these best-fit parameters become \(\sin^{2}\theta_{23}=0.47,\Delta m^{2}_{31}=2.50(-2.38)\times 10^{-3}\) eV\({}^{2}\) and the CP phase \(\delta_{\rm CP}/\pi=1.23(1.54),\epsilon_{\mu e}/10^{-3}=2.13(1.22)\) and the CP phase of NSI parameter \(\phi_{\mu e}/\pi=1.58(1.54)\), for NO (IO). The estimated values of the probabilities, with uncertainties, is represented by the black cross.
For the best-fit values of the NSI parameters, we notice the ellipses change appreciably even though \(\epsilon_{\mu e}\) is of order \(10^{-3}\). The noticeable changes are due to the chiral enhancement term presented in the pion decay \(p_{\mu}\sim 27\), which is always multiplying by \(|\epsilon_{\mu e}|\), see Eq. (5). In addition, the phase \(\phi_{\mu e}\) introduces a new source of CP violation. The main message from Figure 2 is: for both NO\(\nu\)A and T2K, the presence of NSI allows the best-fit values (solid circles) to be closer to the
Figure 2: Bi-probability plot for NO\(\nu\)A in the left panel and for T2K in the right panel, while varying \(\delta_{\rm CP}\) with (solid lines) and without (dashed lines) NSI, in the NO (blue lines) and IO (pink lines). The other parameters are fixed as explained in the text. Filled (hollow) dots denote the best-fit values of solid (dashed) lines, while crosses represent estimated values of the probabilities with statistical uncertainties sourced from [9].
experimental result, in particular for NO. As we now discuss, this will be essential to alleviate the tension between these two experiments.
Data from NO\(\nu\)A and T2K, neutrino and antineutrino appearance, disagree when considering the standard neutrino oscillation model. Each experiment individually prefers NO, but when combined, the preference is for IO. As our results will show, by combining both experiments, T2K dominates over NO\(\nu\)A. In Figure 3, we show the allowed region with NSI in \(\delta_{\rm CP}\) and \(\sin^{2}\theta_{23}\) parameter space for NO\(\nu\)A (T2K) in blue (pink) with \(90\%\) of C.L., for NO and also the combined analysis in black lines. On the left-hand side we show the standard oscillation scenario. In the middle panel, we have the effects of NSI considering only the parameter \(\epsilon_{e\mu}\) and in the right-hand side only \(\epsilon_{\mu e}\). In both cases, the regions overlap completely for NO with 90% of C.L., alleviating the tension between the experiments. Indeed, our analyses were quantified using the GLoBES software [32, 33], whose results are summarized in Table 1.
A fair estimate for the compatibility of a given model for different data sets is given by the parameter goodness of fit [40, 41]. The parameter goodness of fit (PG) is defined as \(\chi^{2}_{\rm PG}\equiv\chi^{2}_{\rm min}-\sum_{k}(\chi^{2}_{k})_{\rm min}\), where \(\chi^{2}_{\rm min}\) and \((\chi^{2}_{k})_{\rm min}\) are the global minimum and the local minimum. It is illustrative to notice the p-values of the different scenarios in Table 1. If the NSI contribution is absent, the p-value for NO is only \(4\%\), which allows to exclude at \(95\%\) C.L. the standard hypothesis for NO. On the other hand, IO is strongly favoured in this case. It clearly shows the nature of the present tension among the T2K and NO\(\nu\)A experiments, since each of them, individually, prefer NO. By including the NSI parameter the p-value for both NO and IO is close to each other with a slight preference for IO. Although it is not possible to define a preference for the neutrino mass ordering based on the combination of both experiments, the tension is lifted since NO is not disfavoured anymore.
As seen in Table 2, the best-fit for the combined analysis has \(\delta_{\rm CP}\) different than zero and the NSI phase. It is then natural to ask how sensitive are the experiments to claim that CP is violated in the leptonic sector. In Figure 4, we show the allowed regions with 68 and 90 % C.L. in the parameter space of phases for normal ordering. The left (right) panel corresponds to the parameter space \(\delta_{\rm CP}\) vs. \(\phi_{e\mu}\) (\(\delta_{\rm CP}\) vs. \(\phi_{\mu e}\)). As anticipated from Eq. (6), for \(|\epsilon_{\mu e}|\sim{\cal O}(10^{-3})\) the conversion probability has a dependence on the phase difference \(\delta_{\rm CP}-\phi_{\mu e}\), which explains the tendency seen on right panel
\begin{table}
\begin{tabular}{c|c c}
**NO** (\(\epsilon_{e\mu}\neq 0\)) & NO\(\nu\)A & T2K & NO\(\nu\)A + T2K \\ \hline \(\sin^{2}\theta_{23}\) & 0.57 & 0.52 & 0.56 \\ \(\delta_{\rm CP}/\pi\) & 0.03 & 1.49 & 1.59 \\ \(|\epsilon_{e\mu}|/10^{-3}\) & 1.31 & 1.60 & 1.07 \\ \(\phi_{e\mu}/\pi\) & -0.80 & -0.17 & -0.20 \\ ( \(\delta_{\rm CP}+\phi_{e\mu}\))/\(\pi\) & 1.23 & 1.32 & 1.39 \\ \hline \end{tabular}
\begin{tabular}{c|c c}
**NO** (\(\epsilon_{\mu e}\neq 0\)) & NO\(\nu\)A & T2K & NO\(\nu\)A + T2K \\ \hline \(\sin^{2}\theta_{23}\) & 0.57 & 0.51 & 0.47 \\ \(\delta_{\rm CP}/\pi\) & 0.01 & 1.44 & 1.23 \\ \(|\epsilon_{\mu e}|/10^{-3}\) & 1.49 & 1.21 & 2.13 \\ \(\phi_{\mu e}/\pi\) & 0.56 & -0.73 & -0.42 \\ ( \(\delta_{\rm CP}-\phi_{\mu e}\))/\(\pi\) & -0.55 & 0.17 & -0.35 \\ \hline \end{tabular}
\end{table}
Table 1: We present the results of the standard oscillation model and for the production NSI of the values of \(\chi^{2}\) minimum for the individual datasets of NO\(\nu\)A and T2K and the combined analysis. The values of PG test are listed for the four free parameters in the standard oscillation scenario and six with NSI.
Figure 3: Allowed region for T2K (pink), NO\(\nu\)A (blue) and for combined analysis (black line), for NO in the \(\sin^{2}\theta_{23}\) vs. \(\delta_{\rm CP}\) space, for 90\(\%\) confidence level. In the left panel we show the standard oscillation scenario, in the middle panel we show the case with \(\epsilon_{e\mu}\neq 0\) and in the right panel the case with \(\epsilon_{\mu e}\neq 0\). The dots are the respective best-fit values, see Table 2.
of Figure 4. For \(\epsilon_{e\mu}\), the left panel of Figure 4, there is a dependence on the sum of phases, which is now much more evident. We should emphasize again that \(\epsilon_{e\mu}\) modifies the survival probability, which, in the standard case, does not induce any CP violation. Once the NSI in production is taken into account, it is possible to observe CP violation in the scenario \(\epsilon_{e\mu}\) at 90 % C.L. for the standard phase \(\delta_{\rm CP}\) but also for the sum \(\delta_{\rm CP}+\phi_{e\mu}\). For the other scenario, it is not possible to claim CP violation on the leptonic sector at 90 % C.L., only at 68 % C.L.
Finally, we contrast the parameter region allowed by NO\(\nu\)A and T2K data against constraints from other experiments. The same Lagrangian shown in Eq. (1) can induce changes in the pion leptonic decay rate, which is one the best-measured values [34]. We show in Figure 5 the allowed region in the real vs. imaginary part of the NSI parameter space, for \(\epsilon_{e\mu}\) in the left and \(\epsilon_{\mu e}\) in the right panel, in blue. We also show in pink the region allowed by the constraints on pion decay, which is the process with the most stringent bounds to our scenario with NSI [23; 42]. The allowed region from neutrino experiments alone is dramatically reduced for the case \(\epsilon_{e\mu}\). Nevertheless, we should emphasize that the standard oscillation scenario (\(\epsilon_{e\mu}=0\)) is excluded at 90\(\%\) C.L. For the case \(\epsilon_{\mu e}\), the main effect is to constrain the real part of the NSI parameter. Including data from the neutrino experiments reduces the allowed region in the imaginary axis from pion decay experiments alone. Previous constraints were obtained from neutrino oscillation experiments to be \(\epsilon_{\mu e}<4\times 10^{-3}\)[43; 44] and \(\epsilon_{\mu e}<2.6\times 10^{-3}\)[14] and our bounds are more stringent.
_Discussion & Conclusion.--_ Neutrino oscillation is a unique probe to BSM interactions. Long-baseline neutrino oscillation experiments are particularly sensitive to non-standard neutrino interaction (NSI). We showed that a new pseudo-scalar four-fermion interaction between quarks and leptons modifies the neutrino production. In this scenario there is new source of CP violation from the complex NSI parameter, \(\epsilon_{\mu e}\) or \(\epsilon_{e\mu}\). It impacts the T2K and NO\(\nu\)A analyses, being compatible with constraints from other experiments.
The new interaction at the source (NSI) contains an extra source of CP violation through a CP violation phase. We have found for the _first time_ an analytical formula for neutrino propagation in matter that is completely in agreement with the numerical solution.
For the scenario with \(\epsilon_{e\mu}\), we show that it is possible not only to alleviate the T2K-NO\(\nu\)A tension as shown in Figure 3 by \(2.5\sigma\) C.L. but also to claim CP violation in the leptonic sector at 90 % C.L. We also predicted the correlation for the sum of the phases \(\delta_{\rm CP}+\phi_{e\mu}=1.5\pi\). For the scenario with \(\epsilon_{\mu e}\), there is also a correlation between the new phase \(\phi_{\mu e}\) with the standard Dirac CP phase roughly as \(\delta_{\rm CP}-\phi_{\mu e}=1.5\pi\), which is predicted by the analytical formula in Eq. (6).
Our allowed region for the NSI parameters as shown in Figure 5 is compatible with bounds of very precise measurement of the pion decay rate and indicate a non-zero parameter for NSI. We have found that the NSI parameter is non-null at \(3.0\sigma(1.5\sigma)\) for \(\epsilon_{e\mu}\) (\(\epsilon_{\mu e}\)).
The non-zero value of the NSI parameter opens a new window to understand the source of CP violation and it can be tested in future neutrino oscillation experiments.
A.C. acknowledges support from National Council for Scientific and Technological Development - CNPq through projects 166523/2020-8 and 201013/2022-3. P.S.P. acknowledges support by the National Natural Science Foundation of China (12375101, 12090060 and 12090064) and the SJTU Double First Class start-up fund (WF220442604). P.S.P. also acknowledges support by the Grant-in-Aid for Innovative Areas No. 19H05810. O.L.G.P. acknowledges support for FAPESP funding Grant 2014/19164-6 and 2022/08954-2 and National Council for Scientific and Technological Development - CNPq grant 306565/2019-6 and 306405/2022-9. This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001. E. S. S. acknowledges support from National Council for Scientific and Technological
Figure 5: Bounds for \(\epsilon_{e\mu}\) (\(\epsilon_{\mu e}\)) in the left (right) panel, coming from our analysis and 90 % C.L. and the constraints from \(\pi\)-decay rate is showed by the pink curve at 90% C.L.. The legends are the same for both panels. The best-fit values are represented by dots.
Development - CNPq through project 140484/2023-0.
|
2308.02613 | Interoperable synthetic health data with SyntHIR to enable the
development of CDSS tools | There is a great opportunity to use high-quality patient journals and health
registers to develop machine learning-based Clinical Decision Support Systems
(CDSS). To implement a CDSS tool in a clinical workflow, there is a need to
integrate, validate and test this tool on the Electronic Health Record (EHR)
systems used to store and manage patient data. However, it is often not
possible to get the necessary access to an EHR system due to legal compliance.
We propose an architecture for generating and using synthetic EHR data for CDSS
tool development. The architecture is implemented in a system called SyntHIR.
The SyntHIR system uses the Fast Healthcare Interoperability Resources (FHIR)
standards for data interoperability, the Gretel framework for generating
synthetic data, the Microsoft Azure FHIR server as the FHIR-based EHR system
and SMART on FHIR framework for tool transportability. We demonstrate the
usefulness of SyntHIR by developing a machine learning-based CDSS tool using
data from the Norwegian Patient Register (NPR) and Norwegian Patient
Prescriptions (NorPD). We demonstrate the development of the tool on the
SyntHIR system and then lift it to the Open DIPS environment. In conclusion,
SyntHIR provides a generic architecture for CDSS tool development using
synthetic FHIR data and a testing environment before implementing it in a
clinical setting. However, there is scope for improvement in terms of the
quality of the synthetic data generated. The code is open source and available
at https://github.com/potter-coder89/SyntHIR.git. | Pavitra Chauhan, Mohsen Gamal Saad Askar, Bjørn Fjukstad, Lars Ailo Bongo, Edvard Pedersen | 2023-08-04T14:02:15Z | http://arxiv.org/abs/2308.02613v1 | # Interoperable synthetic health data with SyntHIR to enable the development of CDSS tools
###### Abstract
There is a great opportunity to use high-quality patient journals and health registers to develop machine learning-based Clinical Decision Support Systems (CDSS). To implement a CDSS tool in a clinical workflow, there is a need to integrate, validate and test this tool on the Electronic Health Record (EHR) systems used to store and manage patient data. However, it is often not possible to get the necessary access to an EHR system due to legal compliance. We propose an architecture for generating and using synthetic EHR data for CDSS tool development. The architecture is implemented in a system called SyntHIR. The SyntHIR system uses the Fast Healthcare Interoperability Resources (FHR) standards for data interoperability, the Gretel framework for generating synthetic data, the Microsoft Azure FHIR server as the FHIR-based EHR system and SMART on FHIR framework for tool transformability. We demonstrate the usefulness of SyntHIR by developing a machine learning-based CDSS tool using data from the Norwegian Patient Register (NPR) and Norwegian Patient Prescriptions (NorPD). We demonstrate the development of the tool on the SyntHIR system and then lift it to the Open DIPS environment. In conclusion, SyntHIR provides a generic architecture for CDSS tool development using synthetic FHIR data and a testing environment before implementing it in a clinical setting. However, there is scope for improvement in terms of the quality of the synthetic data generated. The code is open source and available at [https://github.com/potter-coder89/SyntHIR.git](https://github.com/potter-coder89/SyntHIR.git).
1UiT The Arctic University of Norway
2DIPS AS
[email protected]
## Introduction
With the increasing adoption of EHR systems, there is an increased volume of digitized health records. The influx of digitized data has led to significant interest in developing machine learning-based CDSS tools to assist physicians with decision-making for disease diagnosis, prognosis, treatment, and follow-up. Some machine learning-based CDSS tools demonstrated usefulness for clinicians. Medicals (by Siemens Healthineers 2017) clinical decision support provides an advanced use of radiology images using an evidence-based clinical practice for improving the quality of patient care. Infera (CDS 1997) CDS provides clinicians with peer-reviewed, evidence-based clinical recommendations in real-time by analyzing the patient's EHR data. Medi-span assists clinicians and pharmacists in prescribing and dispensing decisions with information about avoidable medication errors, inappropriate dosing, and adverse events (by Wolters Kluwer 2017). HERA-MI is used for early-stage breast cancer diagnosis (Hera-MI 2017), and COHESIC provides personalized care with decision intelligence for cardiovascular care (Cohesic 2016). There are still underutilized data despite the development and implementation of numerous clinical decision support systems, and using artificial intelligence (AI) has considerable potential to enable the development of the next generation of CDSS tools. Therefore several current research utilizes machine learning for CDSS tools. However, this research typically focuses on the development of novel methods and models. Thus, the challenge of implementing a useful tool, including testing and evaluating these solutions in the clinical environment, is typically not addressed (Sutton et al. 2020; William J. Gordon and Bates 2020; Mathews et al. 2020). Hence, there is a need to simplify CDSS tool development, testing, validation, and clinical deployment.
There are three primary challenges to the development and implementation of CDSS tools. (1) Clinical data sharing across organisations and institutions is very restricted due to regulatory compliances such as General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPPA) in the United States. This makes validation of CDSS tools with new unseen data difficult. (2) The structure of health data sources and formats of the EHR systems are not readily available to the healthcare scientific community to translate the machine learning model into a CDSS tool. So, there is a need to develop and test the CDSS tool implementing the model with the interfaces and infrastructure of an EHR system. (3) The EHR systems are not easily accessible for researchers and developers, thereby making it difficult to test the CDSS tool before deploying it in the clinical context.
The lack of accessibility to health data has increased the interest of both healthcare researchers and application developers in generating and using synthetic data. There are commercial platforms for generating synthetic data, such as YData(YData 2020) and Syntega(Mendelevitch and Lesh 2021), and also open-source solutions like Gretel (Gretel 2019), Synthea (Synthea 2019) and ChatGPT (OpenAI 2023). The data exchange formats with the majority of these tools are incompatible with FHIR-based EHR systems since they need to adhere to the standards of the FHIR framework.
Therefore, it requires a large amount of manual work of data wrangling to use the generated synthetic data in an EHR system. There is a need to incorporate the generated synthetic data as a part of big frameworks needed to aid in developing, testing, and validating CDSS tools.
CDSS tools are increasingly adopted in EHR systems globally. Sutton et al. [14] reviewed various CDSS tools and listed their pitfalls. An important drawback is the lack of transportability and interoperability due to the diversity of clinical tool development frameworks and clinical data formats. Historically, different hospitals used various EHR Systems leading to interoperability issues since they employed different data structures and data formats for storing, managing and accessing patient records. In 2014, Health Level Seven (HL7) International released the FHIR framework, which defines a standard for healthcare information exchange (HL7-FHIR 2011). The FHIR framework has seen widespread adoption in the last decade for the storage and retrieval of medical records and for managing administrative and financial segments of the hospital. It has been used by various EHR companies like Epic [15], Athena Health [16], Cerner [17] and DIPS [18]. The FHIR framework defines FHIR resources [10] associated with entities involved in healthcare information exchange, such as patients, practitioners, hospitalizations, diagnoses, medications, etc. Moreover, the framework also defines the links between all the entities. The FHIR resource formats specify the content and structure of the information set of these entities. These standards enable building interoperable systems without worrying about underlying data formats, thereby making the interoperability of CDSS tools seamless. However, using the FHIR framework alone does not provide transportability of CDSS tools from one EHR system to another.
For integrating CDSS tools into an EHR system, it needs access to the underlying interfaces. Substitutable Medical Applications and Reusable Technologies (SMART) Health IT [1] enable this by providing an open, free and standards-based API for user interface(UI) and security integration required for accessing health data. It adopted FHIR specifications for data models and data formats for health applications called SMART on FHIR [10]. Similar to this, SMART Health IT has CDS hooks which provide hook-based patterns for invoking decision support from within a clinical workflow. The CDS hook API provides CDS calls with information and suggestions and a user-facing SMART app when the CDS needs additional interaction [16]. These CDS hooks are based on the HL7 FHIR standard. The FHIR framework provides standardised data formats and SMART standardised interface for interacting with these systems. Thus, transportability through either SMART on FHIR or CDS hooks and interoperability through FHIR standards enables seamless integration of CDSS tools.
The synthetic data generators, FHIR framework, and SMART Health IT address the three primary challenges disjointly; thus, there is a need to unify these in a single architecture. In this paper, we propose an architecture named SyntHIR that integrates synthetic data generation and modern FHIR-based EHR platforms. SyntHIR provides a realistic development environment for machine learning-based CDSS tools. Moreover, the architecture includes relevant and usable data for testing and verification of CDSS tools. The system bridges the gap between an FHIR server and synthetic data generation tools, such that synthetic data can be generated based on existing FHIR-based EHR data. Combined, these elements allow us to simulate a clinical EHR system outside the clinical setting while providing realistic health data for training and testing CDSS tools. Our architecture integrates the following services to enable seamless development, testing and integration of CDSS tools:
1. The synthetic FHIR data generator provides synthetic data addressing the issue of data accessibility to CDSS researchers and developers.
2. A FHIR adapter integrates with a cloud-based FHIR server mitigating interoperability issues of data models and data formats.
3. The data wrangling component facilitates the translation between health data and FHIR formats, enabling integration with synthetic data generators.
## 2 SyntHIR System Design & Architecture
We showcase the motivation for SyntHIR architecture, considering a typical use case. The outline of the use case follows the steps for the development, testing, and seamless integration of a CDSS tool into a clinical EHR system. Based on this use case, we provide a common design pattern for CDSS tools.
### Usage Scenario
A team is developing a machine learning-based CDSS tool using a sensitive health dataset they have collected and curated. They have many samples in the dataset, but the attributes of the data are restricted to those relevant to the model, and they do not have access to additional variables due to privacy concerns. Such high-quality domain-specific datasets are well-suited and widely used for machine learning. For example, Norway has national registers for prescriptions [12] and hospitalizations [15] that can be linked using national ID numbers. These registers have millions of curated records, but for each record, there are only a few attributes. Further, to get access to the data, it is typically necessary to limit the attributes to those that are relevant to a particular study. In particular, patient information is typically anonymized and often removed entirely.
The team uses their data to train and test the machine-learning model. The team may also use additional data with similar variables to validate the final tuned model. For example, the Norwegian prescription data linked to hospitalizations can be used to implement machine-learning algorithms for the risk of hospitalization for different drug combinations and then validated using similar prescription and hospitalization data from another country.
However, to translate the model into a CDSS tool that can be deployed in an EHR system, there is a need to test, verify, and validate the tool in the clinical setting. The CDSS tool needs to read the relevant data from the clinical system, use the model for inference, and display the results to a user. Finally, the team need to ensure that the tool provides the run-time performance, scalability, monitoring of model performance, and other operational requirements for production use of machine learning models [1, 13, 14].
We believe the above development steps include the development and testing of machine learning models. However, they can be further extended to generalise a design pattern for the development, testing, and deployment of machine learning-based CDSS tools. These include the following steps:
1. Develop and test the machine learning model using the sensitive dataset.
2. Define data structures for the fields required to use the machine learning model and other entities used by the EHR systems.
3. Transform the development dataset into these defined data structures, fill in missing values, and store the transformed data in an EHR server.
4. Create a dataset for tool development and testing that contains the variables necessary for the model, and the data is stored on an EHR server.
5. Implement a CDSS tool using a machine learning model; this will read relevant data from the EHR server, inputs the data to the model, and graphically displays the model results.
6. Deploy the tool on an EHR system. This enables the demonstration of the tool without accessing sensitive data.
7. Finally, the tool can be lifted to other EHR systems with real patient data for further validation and eventually used for clinical assistance.
### SyntHIR Components
Figure 1 depicts the components of the system, data flow across the components and various services within the components of the system. The three components of the system are Data Wrangling, FHIR Adapter that interacts with a sensitive and synthetic FHIR Server, and Synthetic FHIR Data Generator. Each of them is deployed on separate servers, either in the cloud or on-premises. The sensitive data is stored and managed on the user-provided cloud instance details. The components are modular such that each one of them can be utilised independently of others for respective use cases and provides access to the functionalities through APIs. The arrows connecting the components are implemented in the SyntHIR pipeline. For CDSS tool development, it interfaces with the synthetic FHIR server.
#### Data Wrangling
There are many health data sets which need data wrangling before they can be used inside any FHIR-based EHR systems. The Data Wrangling component is used to translate between the data structures in the machine learning models and synthetic data generators and FHIR resources defined by the FHIR-based EHR systems.
The Data Wrangling component is deployed as a web service application on a server, and each of the functionalities is implemented as REST API. There is a main index file in the resources folder of the component, which holds the mapping of the fields of the dataset to the FHIR resources and their corresponding attributes. This index file is created for each dataset, which needs to be converted to FHIR resources. To represent the structure of these FHIR resources, templates are defined for the resources such as Practitioner, Patient, Location, Encounter, Condition, MedicationRequest and MedicationDispense. This component implements two functionalities, first, to convert a CSV file to FHIR resources (refer Github: csv-to-fhir), and second to convert FHIR resources to CSV files (refer Github: fhir-to-csv). For the first service, in order to convert the CSV file to FHIR resources, the mapping of the CSV file attributes to the FHIR resources is identified using the main index file and the templates are populated with the data of respective attributes, which finally generates FHIR resources. In the second service, the list of FHIR resources is taken as input to the API, which is flattened and written to a CSV file.
The input to the REST API of the component is a CSV health data file, and the output is a list of FHIR resources in JSON format and vice-versa. This component is used when there is new data to be pushed into the sensitive FHIR server or when synthetic data is to be generated.
#### FHIR Adapter
To upload and download FHIR resources from any FHIR-based EHR system that are needed to develop and test a CDSS tool, there is a need to maintain a relationship between these FHIR resources. This component is an adapter which provides integration with the FHIR server(s), and these servers provide an implementation of an FHIR-based EHR system. The FHIR resources stored on
Figure 1: SyntHIR Design and Architecture. The arrows connecting the components depict the flow of data across the system (left to right).
these servers can be accessed through FHIR APIs implemented by the FHIR servers.
This component is deployed as a web application and provides the services for uploading and downloading FHIR resources from the FHIR server implemented as a REST API. There is a configuration file in the resources folder of the component that needs to be populated with credentials (refer Github: Component configuration) before running and deploying the component. These credentials are used to authenticate with the FHIR server and generate an access token for accessing the FHIR resources. The resources are uploaded (refer Github: upload) to or downloaded (refer Github: download) from the FHIR server using the generated access token. The FHIR resources structure has attributes which hold links to other related resources. So while uploading the resources, the component updates these attributes of the related resources. Similarly, while downloading resources, the related resources are fetched using these attributes of the resources.
The input to this component is the URL of the FHIR server and a list of FHIR resources to be uploaded or the URL of the FHIR server from where the FHIR resources are to be downloaded. This component is active all the time to upload or download FHIR resources. Next, we provide a brief discussion of FHIR server(s).
FHIR ServerThere are two types of FHIR servers: _sensitive_, which is private and confidential and a _synthetic_ which is public and open to everyone. With this, we segregate real from synthetic health data since there are always compliances associated with the sharing of Protected Health Information (PHI). The private FHIR server instance is maintained with respect to each dataset to be uploaded and will have restricted access limited only to the original owners of the dataset. The public server contains synthetic data generated from real datasets, and it will have open access. There can be multiple synthetic FHIR servers depending on the use case of generating synthetic data for different datasets or if the user wants to maintain its own version of synthetic data. These FHIR servers implement interoperable FHIR standards defined by HL7 for creating and managing health data using FHIR resources. There are various vendors providing FHIR servers based on the same foundation of the FHIR framework. Examples of such FHIR servers are HAPI FHIR Server (FHIR 2015), Google Cloud Healthcare API (Google 2019) and Microsoft Azure API for FHIR (Azure 2019). The component can be integrated with any vendor.
Synthetic FHIR Data GeneratorThere are various tools and platforms for generating synthetic data, which need to be configured, and most of them do not provide support for FHIR-based data. This component provides integration with an open-source synthetic data generator platform to generate synthetic data using the statistical properties of the real dataset, which has restricted access. The synthetic data generator is configured on-premises within the protected environment since it deals with sensitive datasets. Synthetic data is available in both FHIR format stored on a synthetic FHIR server and CSV files stored on the cloud storage to develop and test CDSS tools. It takes care of the complexity of data exchange formats required by the available synthetic data generators, thereby making it compatible to operate with the FHIR servers. This makes it easier to build and test models on synthetic FHIR data that can be seamlessly integrated with any FHIR-based EHR system.
The component is deployed as a web application, and the service is implemented as a REST API. It provides a service (refer Github: ) for generating synthetic data, which takes input as the CSV file and the number of synthetic records to be generated. The CSV file is uploaded to the cloud server, where the synthetic data generator is configured and returns the generated synthetic records in a CSV file. In order to convert the CSV file to FHIR resources, API (refer Github: csv-to-fhir) of the _Data Wrangling_ is used.
This component will get activated by the user every time there is new data on which synthetic data is to be generated and further can be uploaded to the synthetic FHIR server.
### App Development using SyntHIR
For building CDSS tools which can be integrated with any FHIR-based EHR system, we use the FHIR framework for defining clinical data schema to ensure interoperability and use standard-based specifications such as SMART on FHIR API or CDS hooks for transportability.
SMART on FHIR is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards, including OAuth2 for authorization and OpenID Connect for authentication, to FHIR interfaces to enable integration with EHR systems.
There are various steps in the SMART app launch. First, the SMART on the FHIR client app is registered with the sensitive FHIR server, and it receives a registration client ID. The second is launching the app, which can be done standalone or from the EHR, depending on the scope of the launch. Third, the app defines configuration, which holds details about the authorization endpoint of the FHIR server. This information helps clients direct authorization requests to the right endpoint. Fourth, the authorization code is obtained using the endpoint from the configuration. Fifth, the access token is generated using the authorization code,
Figure 2: Flow of CDSS tool development using SyntHIR. The arrows represent the components of the system used, and the text connecting the arrows is the input and output of the components (written in italics). The FHIR API is the API provided by the FHIR server, and it interfaces with the CDSS tool development environment. This would provide realistic data access to the CDSS tools.
which is used to access the FHIR server API. A more detailed discussion is given in Section 4.
### SyntHIR Workflow
To implement the design pattern defined in section 2.1, the SyntHIR system can be used for the development, testing and deployment of machine learning-driven CDSS tools in a real-world clinical setting. It is achieved using the components of the SyntHIR system. Figure 2 explains the flow of CDSS tool development using SyntHIR.
1. The user defines the mapping between the raw machine learning data format and FHIR resources using the main index file of _Data Wrangling_ component to represent the data in an FHIR-based EHR system.
2. Based on the defined mapping, the data is transformed into corresponding FHIR resources and their properties using _Data Wrangling_ API. The FHIR resources are then uploaded to a sensitive FHIR-based EHR server using _FHIR Adapter_ component.
3. A _synthetic data generator_ API is used to generate datasets for tool development and testing. The data is managed in FHIR-based data formats and further uploaded to a synthetic FHIR-based EHR server using _FHIR Server_ component.
4. A CDSS tool for decision support is implemented using a machine learning model to be used in a clinical workflow.
5. The tool is deployed on an FHIR-based EHR system which is a synthetic FHIR server that uses the synthetic FHIR data.
6. Finally, the tool can be integrated into any EHR system built on the FHIR standard and equipped with real patient data.
## 4 SyntHIR Implementation
Figure 3 shows the data flow and different services employed by each of the SyntHIR components. Data wrangling is implemented using the Handlebarjs templating engine to create reusable templates. The FHIR adapter is connected with Microsoft Azure API for creating an FHIR-based EHR system, and lastly, the synthetic data generator is integrated with Gretel. The use case of CDSS tool development using the SyntHIR system is illustrated as the fourth component, which reads data from the SyntHIR synthetic FHIR server. This component is discussed in detail in Section 4.
In this section, we explain the implementation details of all the components of SyntHIR, their interfaces and compatible data formats. The system is based on micro-services architecture implemented in Java using Spring Framework (version 3.0.5) for developing the APIs, Handlebarjs (version 4.3) as a templating engine, Azure API for FHIR as the FHIR server, Azure blob storage for data storage and management and FHIR framework version 4.3.0. Each of the services of the components is exposed as an API.
### Data Wrangling
Data Wrangling is implemented using Handlebarjs [1] templating engine. It comprises two services, the first is for converting data from CSV files to FHIR resources, and the second is for converting from FHIR resources to CSV files. To convert from CSV files to FHIR resources such as Patient, Practitioner, Encounter, MedicationRequest, Medication, Claims, and so on, a user-defined mapping will be used. First, the user identifies and defines the mapping of the CSV to the FHIR resources using the main index file, which is a dictionary of mapping from input attributes to FHIR attributes. Handlebarjs is used to define reusable structures called templates for each FHIR resource which can create outputs in JSON, HTML and other text-based formats. For this component, we have implemented FHIR resources in JSON format. Each template is a.hbs file, and it is populated using an input object. Each record of the CSV file is read and converted to a set of FHIR resources using the template files and returns a list of FHIR resources as nested JSON objects.
The synthetic data generator works with either CSV files or flattened JSON objects. Before sending data to the Synthetic data engine, the FHIR resources, which are nested JSON objects, are flattened and further converted into CSV files using the main index file.
### FHIR Adapter
FHIR Adapter uses the Microsoft Azure API for FHIR cloud service. It provides an implementation of the FHIR framework, with data storage and retrieval using FHIR APIs pro
Figure 3: SyntHIR Implementation. This figure depicts the implementation of various components discussed in section 2.
vided by the Azure cloud platform. For data access management, the Azure Active Directory (AAD) is used to provide authentication tokens. For uploading the data to the server, the credentials of the FHIR server and Azure Active Directory are uploaded using the configuration file. Using the configuration credentials, the component connects with the corresponding FHIR server. The data can be uploaded and downloaded as JSON-formatted FHIR resources using SyntHIR APIs.
### Synthetic FHIR Data Generator
This component is integrated with Gretel synthetics [1] service to generate synthetic data. It can be configured using Gretel Command Line Interface (CLI) or Python SDK. For this, we have integrated with Gretel CLI and a Gretel worker is installed on a local server for on-premises data generation since the health datasets are confidential. It provides an implementation of various types of machine-learning models using configurations. For structured datasets, we have used a Long-Short Term Memory (LSTM) artificial neural network to learn and create new synthetic datasets. First, a model is created and trained using the input dataset, and then the synthetic data is generated using the trained model. Gretel does not provide support for nested JSON objects, so input to this component is a CSV file and generates synthetic data generated in a CSV file.
## App development using SyntHIR
To demonstrate how SyntHIR can be used for the development, testing and deployment of a CDSS tool, we implemented a prediction tool for the risk of hospitalization using SMART on FHIR framework and demonstrate how it can be lifted from the SyntHIR environment to the Open DIPS platform.
### Application: Risk of hospitalization prediction tool
The two datasets used by this demonstrator app are the Norwegian Patient Registry (NPR) which has hospitalization details, and the Norwegian Prescription Database (NorPD), which has prescription details. These datasets are linked using a unique patient identifier. The two datasets contain the following details about entities: Patient, Prescriber, Hospitalization, Hospital location, Diagnosis, Drugs prescribed, Drugs dispatched, Prescription category and Reimbursements. The dataset is anonymized due to legal compliances. This data is representative of the datasets which are used to develop machine learning models for CDSS tools.
The dataset is in CSV files which contain the hospitalization and prescription details of a patient. It has 60,000 samples and 35 attributes: patient ID, patient ID type, patient gender, patient date of birth, patient age group, patient death year, patient death month, patient county name, patient county number, prescriber ID type, prescriber date of birth, prescriber gender, hospitalization arrival mode, hospitalization status, discharge location, hospitalization institute name, diagnosis code, hospitalization start and end date, prescription unique ID, prescription category, prescription category code, prescription reimbursement category, prescription reimbursement category code, reimbursement cICD or ICPC code, drug name, ATC code of the drug, a unique ID of the drug, daily defined dosage of the drug, day and year on which drug was dispensed, number of packages dispensed for a drug and number of daily defined dosages dispensed for each prescription.
The first step of the SyntHIR workflow is to prepare the dataset in the data structures that can be interfaced with any FHIR-based EHR systems. The main index file in the data wrangling component, which is defined by the user, is used to map each of the attributes of the input dataset to respective FHIR resources and their corresponding properties. Figure 4 represents the schema of FHIR resources created for the tool. This index file and templates are used by the data wrangling component. The data wrangling component reads each record from the input CSV file and creates JSON-formatted FHIR resource objects: Patient, Practitioner, Location, Encounter, Condition, Medication, MedicationRequest and MedicationDispense. A list of FHIR resources in JSON format is returned by 'the convert to hir' API of the data wrangling component, and they are uploaded using the 'upload' API of the FHIR adapter component to a sensitive FHIR server.
The next step is to generate synthetic data using this
Figure 4: FHIR Resources Schema. It represents the FHIR resources and corresponding properties to which the input dataset was mapped. The arrows connecting the resources indicate how the relationship is maintained across the resources with respect to a patient’s hospitalization. The \(\times\) attributes are mandatory for the FHIR resources as per the standard of the FHIR framework, and # attributes are the ones used by the CDSS tool for predicting the risk of hospitalization.
dataset. A list of FHIR resources is downloaded using the 'download' API of the FHIR adapter component from the sensitive FHIR Server. Before sending the data to the Synthetic Data Generator, this list of downloaded FHIR resources is converted to a CSV file using the 'convert to csv' API of the Data Wrangling component. This converted CSV is sent to the synthetic data generator, and the output is a CSV file with synthetic records with 120,000 records.
The synthetic data is uploaded to the synthetic FHIR server. For this, the CSV file with synthetic records is converted to a list of FHIR resources using the 'convert to fhir' API of the Data Wrangling component. This synthetic list of FHIR resources is then uploaded to the synthetic FHIR server using the 'upload' API of the FHIR adapter component. This synthetic FHIR server interfaces with the CDSS tool development environment.
A machine learning model is built to predict the risk of hospitalization using the synthetic dataset downloaded from the synthetic FHIR server. To refine the predictive model, several pre-processing steps are carried out on the dataset. The features with a high percentage of missing values were eliminated. To avoid redundancy, only one feature representing similar information was retained in the final model. Additionally, features with a high level of cardinality for a single category that did not provide valuable information for the model's output were removed. The hospitalization starts and end date feature, which is used to generate the outcome, was also eliminated after generating the outcome. Missing values were minimal, with only ten occurring in the entire dataset, and were imputed using a mode imputation method. Each row of the dataset represented a separate patient admission, and the ATC codes were truncated to the first three letters (therapeutic groups level) to reduce model dimensions. All input features were categorical and were encoded using the one-hot encoding method. The outcome variable is binary and was defined using the time of admission and discharge. If the patient was hospitalized for more than a day, the outcome value is one else; the value is 0. The final prediction model was constructed using eight input variables and an outcome variable. To overcome class imbalance, a stratified train-test split method was used, with an 80-20% ratio and four algorithms were implemented, namely logistic regression, random forest, support vector machine, and XGBoost classifier. To assess the model's performance, accuracy, area under the curve (AUC), and F-1 score were calculated at a 95% confidence interval. The confidence interval was determined using 1000 bootstrap samples. The performances of all algorithms were all close. We chose to save the XGB classifier, a boosting tree-based algorithm, to predict the probability of a patient staying in the hospital for more than zero days.
Using this machine learning model, the CDSS tool was developed as a SMART on FHIR app. This app is first registered with the SyntHIR synthetic FHIR server using the Azure portal. After registration, the app receives the client ID and client secret, which is used to authorize and authenticate the app with the synthetic FHIR server for accessing the FHIR data. Using the client ID and client secret, it fetches the authorization code and uses it to create an access token for accessing the FHIR resources. The machine learning model is integrated into the app as an API written in Python. On starting the app for a patient ID, it fetches details such as patient gender, patient age, patient county number, hospitalization arrival mode, discharge location, diagnosis code, ATC code of the drug prescribed and type of prescription. These variables are sent to the prediction API, which returns a response as 0 or 1, predicting the risk of hospitalization.
### Deployment to Open DIPS
We examined the interoperability and portability of the CDSS tool developed on the Open DIPS platform (AS 2021). The CDSS tool is based on a machine learning model for predicting the risk of hospitalization. In order to make a prediction, the model needs the following variables: patient gender, patient age, patient county, hospitalization arrival mode, discharge location, diagnosis code, ATC code of the drug prescribed and prescription category. These variables are stored in a collection of FHIR resources, namely Patient, Encounter, Condition, Medication and Medication Request.
However, the Open DIPS platform does not currently provide an implementation for FHIR resources such as Medication and Medication Requests. In order to integrate the CDSS tool into the Open DIPS platform, the data related to missing resources, such as the ATC Code of the drug and prescription category, was fetched from SyntHIR synthetic FHIR server and the other data from Open DIPS as shown in figure 5.
The CDSS tool is connected to two FHIR servers, Open DIPS and SyntHIR synthetic FHIR server. First, the CDSS tool is registered as a SMART on FHIR app on both servers, and it gets a client ID and client secret for the tool to fetch FHIR resources from the servers. Using the respective credentials of the servers, the app is authorized against both servers. Once the authorization is completed, model variables such as patient gender, patient age, patient county, hospitalization arrival mode, discharge location and diagnosis code are fetched from Open DIPS for a patient ID and ATC code of the drug prescribed and prescription category from SyntHIR synthetic FHIR server. Although the data from SyntHIR synthetic FHIR server does not have a correlation to the patient and its hospitalization information from Open DIPS, SyntHIR provides realistic data. These vari
Figure 5: It represents different resources fetched from two FHIR servers: Open DIPS and SyntHIR Synthetic FHIR server.
ables are sent to the machine learning model for prediction, which is displayed to the user on the app. The tool was successfully deployed on the Open DIPS platform by fetching the missing data from SyntHIR synthetic FHIR server.
## Discussion
In this study, our primary objective was to explore the potential of existing platforms for generating synthetic data and frameworks for health data interoperability to facilitate the validation and deployment of various machine learning-based CDSS tools. We investigated how these platforms and frameworks be leveraged to provide an integrated test environment for CDSS tools validation and deployment. To achieve our research objectives, we designed and implemented a modular system that employs various services enabling the development, validation and deployment of a CDSS tool in a clinical setting. First, the data wrangling component provides services for converting between CSV health data files and interoperable FHIR formats. Then, the FHIR server component provides an environment of an FHIR-based EHR system. Lastly, synthetic FHIR data generate synthetic data in interoperable health data formats, which are uploaded to the SyntHIR FHIR server.
### Related Work
To understand the current state of synthetic health data generators, data interoperability in EHR systems and development and deployment of CDSS tools, it is crucial to explore the existing literature and analyze the key works and advancements in these fields. This section provides a review of related works, highlighting significant contributions, methodologies, and existing gaps. Clinical decision support systems (CDSS) have become increasingly prevalent in healthcare settings, facilitating the timely and accurate diagnosis and treatment of patients. Moreover, numerous CDSS tools have been developed and evaluated in the literature, aiming to improve clinical decision-making and patient outcomes, but they are rarely used and need to be clinically validated. W.J. Gordon et al. (Gordon et al., 2020) stated that various issues need to be addressed for deploying these tools into clinical practice, such as regulatory restrictions, clinical validation, integration into EHR and clinical workflow, education and cost. Shaikh et al. listed the benefits of using AI/ML methodologies for building CDSS that incorporate advanced imaging to enhance understanding of disease process and management (Shaikh et al., 2021). In (Suraj et al., 2022), a web application called SMART COVID Navigator is developed that aggregates patient health information from various FHIR-based EHR systems and determines the latest findings about COVID-19 outcomes. Other than these, there are studies for improving on development of clinical decision support systems based on FHIR health data (Semenov et al., 2018, 2020; Dullabh et al., 2022). One of the advancements in enabling interoperability of CDSS tools, Jungsang et al. (Yoo et al., 2022) introduced a system known as CANE that facilitates the implementation and deployment of a CDSS in a practical healthcare environment. CANE architecture provides a pipeline for the development, evaluation, and deployment of the CDSS machine learning model with the EHR system. However, to ensure the effectiveness of CDSS tools, it is crucial to leverage high-quality health data that accurately represents patient populations and medical conditions. But, the collection and use of real-world data in healthcare research are often hindered by challenges such as privacy concerns and data heterogeneity. In order to address these issues, there are synthetic data generators developed using deep learning models that help in generating realistic health data (Gretel, 2019; Synthea, 2019; YData, 2020; Mendelevitch and Lesh, 2021). These datasets can be used in the development and validation of health apps and CDSS tools. But most of these tools work on specific data structures such as CSV or JSON formats and do not cater to the interoperable FHIR standards used by EHR systems. The previous studies have investigated synthetic data generation, FHIR standards and the development of CDSS tools; these demonstrations are independent of each other and have not explored the feasibility of validating and deploying these tools based on interoperable synthetic data.
### Limitations of SyntHIR
With the successful validation and deployment of the machine learning-driven CDSS tool with SyntHIR, there are certain limitations of the SyntHIR system. The system is demonstrated for the feasibility of validating and deploying the machine learning-based CDSS tools, but we have not deployed the system into a hospital EHR system. There are different data formats used by other EHR systems that have not adopted the FHIR framework. However, the data wrangling component only supports the FHIR standard. There are no checks on the quality of the synthetic data generated or if the generated data is fully anonymised.
### Future Work
The system needs to provide an implementation of templates for all the FHIR resources of the framework to enable tool development for other datasets as well. The quality of synthetic data generated is not taken into consideration, so there is a potential to investigate the integration with other synthetic data generator platforms, as well as methods to generate synthetic data for unstructured health data formats such as images and text.
### Conclusion
In summary, integrating synthetic data generators with the FHIR framework and FHIR server creates a unified system that facilitates validating and deploying machine learning-driven CDSS tools. This integrated system simplifies the process of validating and deploying a CDSS tool before integrating it within a real clinical environment by handling the complexities associated with the implementation. The user can validate and test the deployment of these CDSS tools through open-source APIs. The integrated approach tackles the key challenge of CDSS tools from development to deployment in a clinical setting. First, legal compliances restrict access to real-world patient health data, making it challenging for the tool developers to test and validate their
model against unseen data instances. The developers can access realistic yet privacy-preserving data by leveraging synthetic data generators. Secondly, the complexity and diversity of health data structures and formats of the EHR systems create a hurdle in developing interoperable CDSS tools. However, utilizing the FHIR framework, which provides a standardized framework for storing and managing patient health data, the CDSS tools developed on this integrated system can be transported to any system based on the same framework. Lastly, the inaccessibility to the interfaces of the EHR systems impedes the development and deployment of the CDSS tools. The integrated system overcomes this challenge by leveraging synthetic data generated on FHIR formats and thereby uploading to the FHIR server. This FHIR server is an FHIR-based EHR system holding synthetic patient data, which provides a standardized interface to the CDSS tool developers. In conclusion, the integrated system (SyntHIR) provides privacy-preserving yet realistic data, handles diverse data formats, and enables tool development without direct EHR access. This approach facilitates the development, validation and deployment of CDSS tools, ultimately aiding clinicians with decision-making.
|
2302.07730 | Transformer models: an introduction and catalog | In the past few years we have seen the meteoric appearance of dozens of
foundation models of the Transformer family, all of which have memorable and
sometimes funny, but not self-explanatory, names. The goal of this paper is to
offer a somewhat comprehensive but simple catalog and classification of the
most popular Transformer models. The paper also includes an introduction to the
most important aspects and innovations in Transformer models. Our catalog will
include models that are trained using self-supervised learning (e.g., BERT or
GPT3) as well as those that are further trained using a human-in-the-loop (e.g.
the InstructGPT model used by ChatGPT). | Xavier Amatriain, Ananth Sankar, Jie Bing, Praveen Kumar Bodigutla, Timothy J. Hazen, Michaeel Kazi | 2023-02-12T01:26:49Z | http://arxiv.org/abs/2302.07730v4 | # Transformer models: an introduction and catalog
###### Abstract
In the past few years we have seen the meteoric appearance of dozens of models of the Transformer family, all of which have funny, but not self-explanatory, names. The goal of this paper is to offer a somewhat comprehensive but simple catalog and classification of the most popular Transformer models. The paper also includes an introduction to the most important aspects and innovation in Transformer models.
###### Contents
* 1 Introduction: What are Transformers
* 1.1 Encoder/Decoder architecture
* 1.2 Attention
* 1.3 What are Transformers used for and why are they so popular
* 1.4 RLHF
* 1.5 Diffusion
* 2 The Transformers catalog
* 2.1 Features of a Transformer
* 2.1.1 Pretraining Architecture
* 2.1.2 Pretraining Task
* 2.1.3 Application
* 2.2 Catalog table
* 2.3 Family Tree
* 2.4 Chronological timeline
* 2.5 Catalog List
* 2.5.1 ALBERT
* 2.5.2 AlphaFold
* 2.5.3 Anthropic Assistant
* 2.5.4 BART
* 2.5.5 BERT
* 2.5.6 Big Bird
* 3
###### Abstract
We consider the \(\mathbb{R}^{3}\)-algebra \(\mathbb{
* 2.5.46 Pegasus
* 2.5.47 RoBERTa
* 2.5.48 SeeKer
* 2.5.49 Sparrow
* 2.5.50 StableDiffusion
* 2.5.51 Swin Transformer
* 2.5.52 Switch
* 2.5.53 T5
* 2.5.54 Trajectory Transformers
* 2.5.55 Transformer XL
* 2.5.56 Turing-NLG
* 2.5.57 ViT
* 2.5.58 Wu Dao 2.0
* 2.5.59 XLM-RoBERTa
* 2.5.60 XLNet
## 1 Introduction: What are Transformers
Transformers are a class of deep learning models that are defined by some architectural traits. They were first introduced in the now famous "Attention is All you Need" paper by Google researchers in 2017 [1] (the paper has accumulated a whooping 38k citations in only 5 years) and associated blog post1.
Footnote 1: [https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html](https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html)
The Transformer architecture is a specific instance of the encoder-decoder models[2]2 that had become popular just over the 2-3 years prior. Up until that point however, attention was just one of the mechanisms used by these models, which were mostly based on LSTM (Long Short Term Memory)[3] and other RNN (Recurrent Neural Networks)[4] variations. The key insight of the Transformers paper was that, as the title implies, attention could be used as the only mechanism to derive dependencies between input and output.
Footnote 2: [https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
It is beyond the scope of this blog to go into all the details of the Transformer architecture. For that, I will refer you to the original paper above or to the wonderful The Illustrated Transformer3 post. That being said, we will briefly describe the most important aspects since we will be referring to them in the catalog below. Let's start with the basic architectural diagram from the original paper, and describe some of the components.
Footnote 3: [https://jalammar.github.io/illustrated-transformer/](https://jalammar.github.io/illustrated-transformer/)
### Encoder/Decoder architecture
A generic encoder/decoder architecture (see Figure 1) is made up of two models. The encoder takes the input and encodes it into a fixed-length vector. The decoder takes that vector and decodes it into the output sequence. The encoder and decoder are jointly trained to minimize the conditional log-likelihood. Once trained the encoder/decoder can generate an output given an input sequence or can score a pair of input/output sequences.
In the case of the original Transformer architecture, both encoder and decoder had 6 identical layers. In each of those 6 layers the Encoder has two sub layers: a multi-head attention layer, and a simple feed forward network. Each sublayer has a residual connection and a layer normalization. The output size of the Encoder is 512. The Decoder adds a third sublayer, which is another multi-head attention layer over the output of the Encoder. Besides, the other multi-head layer in the decoder is masked to prevent attention to subsequent positions.
Figure 1: Transformer Architecture from [1]
### Attention
It is clear from the description above that the only "exotic" elements of the model architecture are the multi-headed attention, but, as described above, that is where the whole power of the model lies! So, what is attention anyway? An attention function is a mapping between a query and a set of key-value pairs to an output. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. Transformers use multi-headed attention, which is a parallel computation of a specific attention function called scaled dot-product attention. I will refer you again to the The Illustrated Transformer4 post for many more details on how the attention mechanism works, but will reproduce the diagram from the original paper in Figure 2 so you get the main idea
Footnote 4: [https://jalammar.github.io/illustrated-transformer/](https://jalammar.github.io/illustrated-transformer/)
There are several advantages of attention layers over recurrent and convolutional networks, the two most important being their lower computational complexity and their higher connectivity, especially useful for learning long-term dependencies in sequences.
### What are Transformers used for and why are they so popular
The original transformer was designed for language translation, particularly from English to German. But, already the original paper showed that the architecture generalized well to other language tasks. This particular trend became quickly noticed by the research community. Over the next few months most of the leaderboards for any language-related ML task became completely dominated by some version of the transformer architecture (see for example the well known SQUAD leaderboard5 for question answer where all models at the top are ensembles of Transformers).
Footnote 5: [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
One of the key reasons Transformers were able to so quickly take over most NLP leaderboards is their ability to quickly adapt to other tasks, a.k.a. Transfer learning. Pretrained Transformer models can adapt extremely easily and quickly to tasks they have not been trained on, and that has huge advantages. As an ML practitioner, you no longer need to train a large model on a huge dataset. All you need to do is re-use the pretrained model on your task, maybe just slightly
Figure 2: The Attention Mechanism
adapting it with a much smaller data set. A specific technique used to adapt pretrained models to a different task is the so-called fine tuning6.
Footnote 6: [https://huggingface.co/docs/transformers/training](https://huggingface.co/docs/transformers/training)
It turns out that the capability of Transformers to adapt to other tasks is so great, that, while they were initially developed for language related tasks, they quickly became useful for other tasks ranging from vision[5] or audio and music7 applications all the way to playing chess[6] or doing math[7].
Footnote 7: [https://magenta.tensorflow.org/music-transformer](https://magenta.tensorflow.org/music-transformer)
Of course all these applications would have not been possible if it wasn't because of the myriad of tools that made them readily available to anyone that could write a few lines of code. Not only were Transformers quickly integrated into the main AI frameworks (namely Pytorch8 and TF9), but they even enabled the creation of an entire company around them. Huggingface10, a startup that has raised over $ 60M to this day, is almost entirely built around the idea of commercializing their open source Transformers library11.
Footnote 8: [https://pytorch.org/tutorials/beginner/transformer_tutorial.html](https://pytorch.org/tutorials/beginner/transformer_tutorial.html)
Footnote 9: [https://www.tensorflow.org/text/tutorials/transformer](https://www.tensorflow.org/text/tutorials/transformer)
Last but not least, I would be remiss if I did not mention the impact of GPT-3[8] on the popularization of Transformers. GPT-3 is a Transformer model introduced by OpenAI in May 2020 as a follow up to their earlier GPT and GPT-2. The company made a big splash by introducing the model in a preprint[8] in which they claimed that the model was so powerful that they were not in a position to release it to the world. Since then, the model has not only been released, but also commercialized through a very large partnership12 between OpenAI and Microsoft. GPT-3 powers over 300 different applications13, and is the foundation for OpenAI's commercial strategy (which is a lot to say for a company that has received over $ 1B in funding).
Footnote 10: [https://huggingface.co/docs](https://huggingface.co/docs)
Footnote 11: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)
Footnote 12: [https://openai.com/blog/openai-licenses-gpt-3-technology-to-microsoft/](https://openai.com/blog/openai-licenses-gpt-3-technology-to-microsoft/)
Footnote 13: [https://openai.com/blog/gpt-3-apps/](https://openai.com/blog/gpt-3-apps/)
### Rlhf
Reinforcement Learning from Human Feedback (or Preferences) aka RLHF (or RLHP) has become a huge addition to the AI toolkit as of lately. The concept was introduced already in 2017 in the paper ["Deep reinforcement learning from human preferences"]([https://arxiv.org/abs/1706.03741](https://arxiv.org/abs/1706.03741)). More recently though, it has been applied to ChatGPT and similar dialog agents like BlenderBot3 or Sparrow. The idea is pretty simple though: Once a language model is pretrained, we can generate different responses to a dialog and have Humans rank the results. We can use those ranking (aka preferences or feedback) to train a reward, in the reinforcement learning context (see Figure 3). You can read much more in these two wonderful posts by Huggingface)14 or Weights and Bias15.
Footnote 14: [https://huggingface.co/blog/rlhf](https://huggingface.co/blog/rlhf)
Footnote 15: [https://wandb.ai/ayush-thakur/RLHF/reports/Understanding-Reinforcement-Learning-from-Human-Feedback-RLHF-Part-1](https://wandb.ai/ayush-thakur/RLHF/reports/Understanding-Reinforcement-Learning-from-Human-Feedback-RLHF-Part-1)
Footnote 16: [https://en.wikipedia.org/wiki/Generative_adversarial_network](https://en.wikipedia.org/wiki/Generative_adversarial_network)
Footnote 17: [https://benanne.github.io/2022/01/31/diffusion.html](https://benanne.github.io/2022/01/31/diffusion.html)
### Diffusion
Diffusion models have become the new SOTA in image generation, clearly pushing aside the previous approaches such as GANs (Generative Adversarial Networks). What are diffusion models? They are a class of latent variable models trained variational inference. What this means in practice is that we train a deep neural network to denoise images blurred with some sort of noise function. Networks that are trained this way are in fact learning the latent space of what those images represent (see figure 4.
Diffusion models have relation with other generative models like the famous [Generative Adversarial Networks (GAN)]16, which they have mostly replaced in many applications and, particularly with (denoising) Autoencoders. Some authors17 will go as far as saying that Diffusion models are just a specific instance of autoencoders. However, they also admit that the small differences do transform their application, from the latent representation of autoconders to the pure generative nature of Diffusion models.
Figure 4: Probabilistic diffusion model architecture from “Diffusion Models: A Comprehensive Survey of Methods and Applications” [9]
Figure 3: Reinforcement Learning with Human Feedback. From HuggingFace’s RLHF blog post at [https://huggingface.co/blog/rhlf](https://huggingface.co/blog/rhlf).
Figure 2: Diffusion models smoothly perturb data by adding noise, then reverse this process to generate new data from noise. Each denoising step in the reverse process typically requires estimating the score function (see the illustrative figure on the right), which is a gradient pointing to the directions of data with higher likelihood and less noise.
## 2 The Transformers catalog
**Note:** For all the models available in Huggingface, I decided to directly link to the page in the documentation since they do a fantastic job of offering a consistent format and links to everything else you might need, including the original papers. Only a few of the models are not included in Huggingface. For those, I try to include a link to their github if available or blog post if not. For all, I also include bibliographic reference.
### Features of a Transformer
So hopefully by now you understand what Transformer models are, and why they are so popular and impactful. In this section I will introduce a catalog of the most important Transformer models that have been developed to this day. I will categorize each model according to the following properties: Pretraining Architecture, Pretraining Task, Compression, Application, Year, and Number of Parameters. Let's briefly define each of those:
#### 2.1.1 Pretraining Architecture
We described the Transformer architecture as being made up of an Encoder and a Decoder, and that is true for the original Transformer. However, since then, different advances have been made that have revealed that in some cases it is beneficial to use only the encoder, only the decoder, or both.
Encoder PretrainingThese models, which are also called bi-directional or auto-encoding, only use the encoder during pretraining, which is usually accomplished by masking words in the input sentence and training the model to reconstruct. At each stage during pretraining, attention layers can access all the input words. This family of models are most useful for tasks that require understanding complete sentences such as sentence classification or extractive question answering.
Decoder PretrainingDecoder models, often called auto-regressive, use only the decoder during a pretraining that is usually designed so the model is forced to predict the next word. The attention layers can only access the words positioned before a given word in the sentence. They are best suited for tasks involving text generation.
Transformer (Encoder-Decoder) PretrainingEncoder-decoder models, also called sequence-to-sequence, use both parts of the Transformer architecture. Attention layers of the encoder can access all the words in the input, while those of the decoder can only access the words positioned before a given word in the input. The pretraining can be done using the objectives of encoder or decoder models, but usually involves something a bit more complex. These models are best suited for tasks revolving around generating new sentences depending on a given input, such as summarization, translation, or generative question answering.
#### 2.1.2 Pretraining Task
When training a model we need to define a task for the model to learn on. Some of the typical tasks, such as predicting the next word or learning to reconstruct masked words were already mentioned above. "Pre-trained Models for Natural Language Processing: A Survey"[10] includes a pretty comprehensive taxonomy of pretraining tasks, all of which can be considered self-supervised:
1. **Language Modeling (LM):** Predict next token (in the case of unidirectional LM) or previous and next token (in the case of bidirectional LM)
2. **Masked Language Modeling (MLM):** mask out some tokens from the input sentences and then trains the model to predict the masked tokens by the rest of the tokens
3. **Permuted Language Modeling (PLM):** same as LM but on a random permutation of input sequences. A permutation is randomly sampled from all possible permutations. Then some of the tokens are chosen as the target, and the model is trained to predict these targets.
4. **Denoising Autoencoder (DAE):** take a partially corrupted input (e.g. Randomly sampling tokens from the input and replacing them with "[MASK]" elements. randomly deleting tokens from the input, or shuffling sentences in random order) and aim to recover the original undistorted input.
5. **Contrastive Learning (CTL):** A score function for text pairs is learned by assuming some observed pairs of text that are more semantically similar than randomly sampled text. It includes: * **Deep InfoMax (DIM):** maximize mutual information between an image representation and local regions of the image;
* **Replaced Token Detection (RTD):** predict whether a token is replaced given its surroundings;
* **Next Sentence Prediction (NSP):** train the model to distinguish whether two input sentences are continuous segments from the training corpus; and
* **Sentence Order Prediction (SOP):** Similar to NSP, but uses two consecutive segments as positive examples, and the same segments but with their order swapped as negative examples
#### 2.1.3 Application
Here we will note what are the main practical applications of the Transformer model. Most of these applications will be in the language domain (e.g. question answering, sentiment analysis, or entity recognition). However, as mentioned before, some Transformer models have also found applications well beyond NLP and are also included in the catalog.
### Catalog table
Figure 5 is a screenshot of a large table where I have tabulated all the models. If you are interested in the table, access it directly in 18
Footnote 18: [https://docs.google.com/spreadsheets/d/lltyrAB6BL29cOv2fSpNQnnq2vbX8UrH147d7FkIf6t4](https://docs.google.com/spreadsheets/d/lltyrAB6BL29cOv2fSpNQnnq2vbX8UrH147d7FkIf6t4)
### Family Tree
The diagram in figure 6 is a simple view that highlights the different families of transformers and how they relate to each other.
### Chronological timeline
Another interesting perspective of the catalog is to see it as a chronological timeline. In Figure 7 you will find all the transformers in the catalog sorted by their date of publication. In this first visualization, the Y-axis is only used to cluster transformers of related heritage/family.
In Figure 8, the Y-axis represents model size in millions of parameters. You won't be able to see all the models in the catalog since many fall right on the same time and size, so please refer to the previous image for that.
### Catalog List
Finally, here is the full list view that might be easier to follow along in some cases:
#### 2.5.1 Albert
* **Reference:19[11]** Footnote 19: [https://huggingface.co/docs/transformers/model_doc/albert](https://huggingface.co/docs/transformers/model_doc/albert)
* **Family:** BERT
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** MLM/NSP
* **Extension:** Compressed version of BERT using parameter sharing, which is much more efficient given the same number of parameters
* **Application:** Same as BERT
* **Date (of first known publication):** 09/2019
* **Num. Params:**b12M, Large = 18M, XLarge = 60M*
* **Corpus:** Same as BERT
* **Lab:** Google
#### 2.5.2 AlphaFold
* **Reference:20[12]**
Figure 5: You can access the original table at [http://bit.ly/3YFqRn9](http://bit.ly/3YFqRn9) for easier browsing across the different model features.
Figure 6: Transformers Family Tree
Figure 7: Transformer timeline. Colors describe Transformer family.
* **Family:** SE(3) Transformer21 Footnote 21: [https://arxiv.org/abs/2006.10503](https://arxiv.org/abs/2006.10503)
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** Protein folding prediction*ion of BERT using parameter sharing, which is much more efficient given the same number of parameters
* **Extension:**The original Alphafold used a BERT-style transformer. The details of Alphafold's Transformer are not known, but it is believed it is an extension of the SE(3)-Tranformer, a 3-D equivariant Transformer (see this blog post22) Footnote 22: [https://arxiv.org/abs/2204.05862](https://arxiv.org/abs/2204.05862)
* **Application:** Same as BERT
* **Date (of first known publication):** 09/2019
* **Num. Params:**b12M, Large = 18M, XLarge = 60M*
* **Corpus:**Same as BERT
* **Lab:**Google
#### 2.5.3 Anthropic Assistant
* **Reference:23** see also24[13, 14]
Footnote 23: [https://arxiv.org/abs/2112.00861](https://arxiv.org/abs/2112.00861)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** Protein folding prediction*ion of BERT using parameter sharing, which is much more efficient given the same number of parameters
* **Extension:These models do not introduce novelties at the architecture/pretraining level and they are based on GPT-3 but rather focuses on how to improve alignment through fine-tuning and prompting. Note that the Anthropic Assistant includes several models optimized for different tasks. Latest versions of this work focus on the benefits of RLHF
Figure 8: Transformer timeline. On the vertical axis, number of parameters. Colors describe Transformer family.
* **Application:** Different models with different applications from general dialog to code assistant.
* **Date (of first known publication):** 12/2021
* **Num. Params:**10M to 52B
* **Corpus:**400B tokens from filtered Common Crawl and Books. They also create several Dialogue Preference datasets for the RLHF training.
* **Lab:**Anthropic
#### 2.5.4 Bart
* **Reference:25[15]**
Footnote 25: [https://huggingface.co/docs/transformers/model_doc/bart](https://huggingface.co/docs/transformers/model_doc/bart)
* **Family:** BERT for encoder, GPT for Decoder
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** DAE
* **Extension:It can be seen as a generalization of BERT and GPT in that it combines ideas from both in the encoder and decoder**
* **Application:** Mostly text generation but also some text understanding tasks*
* **Date (of first known publication):** 10/2019*
* **Num. Params:**10 % more than BERT
* **Corpus:**Same as RoBERTa (160Gb of news, books, stories
* **Lab:**Facebook
#### 2.5.5 Bert
* **Reference:26[16]**
Footnote 26: [https://huggingface.co/docs/transformers/model_doc/bert](https://huggingface.co/docs/transformers/model_doc/bert)
Family: BERT
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** MLM/NSP
* **Extension:It can be seen as a generalization of BERT and GPT in that it combines ideas from both in the encoder and decoder**
* **Application:**General Language Understanding and Question Answering. Many other language applications followed
* **Date (of first known publication):** 10/2018
* **Num. Params:**Base = 110M, Large = 340MT
* **Corpus:**Toronto Book Corpus and Wikipedia (3.3B Tokens)
* **Lab:**Google
#### 2.5.6 Big Bird
* **Reference:27[17]**
Footnote 27: [https://huggingface.co/docs/transformers/model_doc/big_bird](https://huggingface.co/docs/transformers/model_doc/big_bird)
* **Family:** BERT
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** MLM
* **Extension:** Big Bird can extend other architectures such as BERT, Pegasus, or RoBERTa by using a sparse attention mechanism that elminates the quadratic dependency thus making it more suitable for longer sequences
* **Application:**Particularly well suited for longer sequences, not only in text but also e.g. in genomics
* **Date (of first known publication):** 07/2020
* **Num. Params:**Depends on the overall architecture
* **Corpus:**Books, CC-News, Stories and Wikipedia)
* **Lab:**Google
#### 2.5.7 BlenderBot3
* **Reference:28[18]**
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** BlenderBot 3 is based on a pre-trained OPT. It adds features needed for a dialog agent such as long-term memory or the ability to search the internet. It is also fine-tuned for some specific tasks given human feedback on them.
* **Application:** Same as GPT-3
* **Date (of first known publication):** 08/2022
* **Num. Params:**175B
* **Corpus:** 180B tokens = RoBERTa + the Pile + PushShift.io Reddit
* **Lab:**Facebook
#### 2.5.8 Bloom
* **Reference:29 F Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Main difference to GPT-3 is that it uses full attention instead of sparse attention
* **Application:** Same as GPT-3
* **Date (of first known publication):** 07/2022
* **Num. Params:**176B
* **Corpus:** 366B tokens (1.5 TB of text data) multilingual dataset
* **Lab:** Big Science/Huggingface
#### 2.5.9 ChatGPT
* **Reference:30 F Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** ChatGPT takes a GPT3.5 (aka GPT3 Davinci-003) pretrained model and uses RLHF to finetune the model mostly like described in InstructGPT but with slight differences in the data collection. ChatGPT is also more than a model since it includes extensions for Memory Store and retrieval similar to BlenderBot3
* **Application:** Dialog agents
* **Date (of first known publication):** 10/2022
* **Num. Params:**Same as GPT3
* **Corpus:** Same as GPT3 + datasets generated for RLHF
* **Lab:** OpenAI
#### 2.5.10 Chinchilla
* **Reference:31[19]**
Footnote 31: [https://arxiv.org/abs/2203.15556](https://arxiv.org/abs/2203.15556)
Family: GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Same as Gopher but with optimizations to reduce model size and therefore training/inference time with equal or superior performance
* **Application:** Same as Gopher/GPT3
* **Date (of first known publication):** 03/2022
* **Num. Params:**70B
* **Corpus:** Massive Text
* **Lab:** Deepmind
#### 2.5.11 Clip
* **Reference:32[20]**
Footnote 32: [https://huggingface.co/docs/transformers/model_doc/clip](https://huggingface.co/docs/transformers/model_doc/clip)
Family: CLIP (Also using Resnet, ViT, and vanilla transformer for text)
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** predict which of the N x N possible (image, text) pairings across a batch actually occurred
* **Extension:** Combines Resnet and ViT for the visual encoding with Transformer for the Textual encoder
* **Application:** Image/object classification
* **Date (of first known publication):** 02/2021
* **Num. Params:?**
* 400 million text,image pairs
* **Lab:** OpenAI
#### 2.5.12 Cm3
* **Reference:33[21]**
Footnote 33: [https://arxiv.org/abs/2201.07520](https://arxiv.org/abs/2201.07520)
Family: HTML
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** Causality-masked LM
* **Extension:** This is somewhat similar to HTML in its use of structured training data. However, it is a different architecture and uses causal masking
* **Application:** Multimodal language model with the ability to do structured prompting
* **Date (of first known publication):** 01/2022
* **Num. Params:**13B (largest)
* **Corpus:** CC-News, English Wikipedia
* **Lab:** Facebook
#### 2.5.13 Ctrl
* **Reference:34[22]**
Footnote 34: [https://huggingface.co/docs/transformers/model_doc/ctrl](https://huggingface.co/docs/transformers/model_doc/ctrl)
**Family:**
* **Pretraining Architecture:** Decoder
* **Pretraining Task:**
* **Extension:** model can generate text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior
* **Application:** Controllable text generation
* **Date (of first known publication):** xx
* **Num. Params:1.63B
* **Corpus:** 140 GB of text including: Wikipedia (En, De, Es, Fr), Project Gutenberg, 45 subreddits, OpenWebText2, Amazon Reviews, Europarl and UN data from WMT, question-answer pairs from ELI5, and the MRQA shared task3, which includes the Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions
* **Lab:** Salesforce
#### 2.5.14 Dall-E
* **Reference:35[23]**
Footnote 35: [https://openai.com/blog/dall-e/](https://openai.com/blog/dall-e/)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** Caption prediction
* **Extension:** A differential variational auto-encoder is used to learn the visual codebook. The transformer is a variation of GPT-3
* **Application:** Text to image
* **Date (of first known publication):** 01/2021
* **Num. Params:12B**
* **Corpus:** 250 million text-images pairs from the internet
* **Lab:** OpenAI
#### 2.5.15 Dall-E 2
* **Reference:36[24]**
* **Family:** CLIP, GLIDE
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** Caption prediction
* **Extension:** Combines CLIP encoder and Diffusion decoder similar to GLIDE
* **Application:** Text to image
* **Date (of first known publication):** 04/2022
* **Num. Params:3.5B**
* **Corpus:** Combination of the DALL-E and CLIP datasets
* **Lab:** OpenAI
#### 2.5.16 Decision Transformers
* **Reference:**37[25]
Footnote 37: [https://arxiv.org/abs/2106.01345](https://arxiv.org/abs/2106.01345)
* **Family:** GPT, Control Transformers" (not per se a family, but grouping here those transformers that try to model more general control, RL-like, tasks)
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** Next action prediction
* **Extension:** Decision transformers use a GPT architecture and extend it by encoding trajectories in a way that they can be learned by an auto-regressive task
* **Application:** General RL (reinforcement learning tasks)
* **Date (of first known publication):** 06/2021
* **Num. Params:**Same as GPT
* **Corpus:** Different corpus for different experiments
* **Lab:** Google/UC Berkeley/Facebook
#### 2.5.17 DialoGPT
* **Reference:**38[26]
Footnote 38: [https://huggingface.co/docs/transformers/model_doc/dialogpt](https://huggingface.co/docs/transformers/model_doc/dialogpt)
* **Reference:** SPT
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** GPT-2 architecture trained on dialog data
* **Application:** Text generation in dialog settings
* **Date (of first known publication):** 10/2019
* **Num. Params:1.5B
* **Corpus:** 140M Reddit conversations
* **Lab:** Microsoft
#### 2.5.18 DistilBERT
* **Reference:**39[27]
Footnote 39: [https://huggingface.co/docs/transformers/model_doc/dialogpt](https://huggingface.co/docs/transformers/model_doc/dialogpt)
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** MLM/NSP
* **Extension:** Compressed version of BERT using distillation, which is much more efficient given the same number of parameters
* **Application:** Same as BERT
* **Date (of first known publication):** 10/2019
* **Num. Params:66M
* **Corpus:** Same as BERT
* **Lab:** Huggingface
#### 2.5.19 DQ-BART
* **Reference:**40[28]
Footnote 40: [https://arxiv.org/abs/2203.11239](https://arxiv.org/abs/2203.11239)
* **Family:** BART
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** DAE
* **Extension:** Adds quantization and distillation to a BART model to improve performance and model size
* **Application:** Text generation and understanding
* **Date (of first known publication):** 03/2022
* **Num. Params:**Up to 30x reduction in parameters compared to standard BART
* **Corpus:** CNN/DM, XSUM, EL15, WMT16 En-Ro ( 1M tokens)
* **Lab:** Amazon
#### 2.5.20 ELECTRA
* **Reference:**41[29]
Footnote 41: [https://huggingface.co/docs/transformers/model_doc/electric](https://huggingface.co/docs/transformers/model_doc/electric)
* **Family:**
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** RTD
* **Extension:** Same as BERT
* **Application:** 03/2020
* **Date (of first known publication):** xx
* **Num. Params:**Base = 110M, Large = 330M
* **Corpus:** Same as BERT except for Large with is same as XLNet
* **Lab:** Stanford/Google
#### 2.5.21 ERNIE
* **Reference:42[30]**
Footnote 42: [https://arxiv.org/abs/1905.07129](https://arxiv.org/abs/1905.07129)
* **Family:** BERT
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** MLM
* **Extension:** Uses BERT for Encoder architecture, but stacks and aggregates two of them for text and entities. This architecture could be understood as BERT for text + knowledge graphs
* **Application:** Knowledge intensive related tasks that might benefit from knowledge graphs or entities such as entity recognition
* **Date (of first known publication):** 05/2019
* **Num. Params:**114M
* **Corpus:** English Wikipedia + Wikidata for entitites (note that they initialize model to original BERT parameter values
* **Lab:** Various Chinese institutions
#### 2.5.22 Flamingo
* **Reference:43[31]** Familly: Chinchilla
Footnote 43: [https://arxiv.org/abs/2204.14198](https://arxiv.org/abs/2204.14198)
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** Log likelihood of text given some visual input
* **Extension:** It uses a frozen textual language model (like Chinchilla) conditioned on the visual representation, which is encoded from a Normalizer-Free ResNet
* **Application:** Text to image
* **Date (of first known publication):** 04/2022
* **Num. Params:80B (largest)
* **Corpus:** MultiModal MassiveWeb (M3W): 185 million images and 182 GB text + a number of text paired with image datasets: ALIGN + LTIP (Long Text & Image Pairs) = 312 million images, and VTP (Video & Text Pairs) = 27 million short videos (approximately 22 seconds on average)
* **Lab:** Deepmind
#### 2.5.23 Gato
* **Reference:44[32]** Familly:** "Control Transformers" (not per se a family, but grouping here those transformers that try to model more general control, RL-like, tasks) Footnote 44: [https://www.deepmind.com/publications/a-generalist-agent](https://www.deepmind.com/publications/a-generalist-agent)
* **Pretraining Task:** MLM (where tokens are either text or agent actions)
* **Extension:** The standard decoder-only transformer architecture is preceded by an embedding layer that can embed text and images, plus add position encodings to add spatial information when applicable.
* **Application:** Gato presents a generalizable agent that can be used beyond text to tasks such as playing Atari or controlling a robot arm.
* **Date (of first known publication):** 05/2022
* **Num. Params:1.2B**
* **Corpus:** 1.5T tokens including standard text (e.g. MassiveText), vision (e.g. ALIGN), and simulation environments (e.g. ALE Atari, or RGB Stacking Real Robot)
* **Lab:** Deepmind
#### 2.5.24 GlaM
* **Reference:45[33]** Familly:** Transformer
Footnote 45: [https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html)
* **Framiing Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** GLaM introduces a Mixture of 64 Experts to increase parameter count and generalization properties in a somewhat standard decoder-only. Transformer architecture. Only two experts get activated at a time per token, which makes the model also more efficient in training and inference.
* **Application:** General language modeling
* **Date (of first known publication):** 12/2021
* **Num. Params:1.2T across 64 experts, but only 96B get activated for inference Corpus: 1.6T tokens including web pages filtered by Wikipedia and books for quality Lab: Google
#### 2.5.25 Glide
* **Reference:46[34]**
Footnote 46: [https://arxiv.org/abs/2112.10741](https://arxiv.org/abs/2112.10741)
Family: Diffusion models
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** Caption prediction
* **Extension:** GLIDE can be seen as an extension of the ADM (Ablated Diffusion Model) by the same authors. However, ADM is not per se a transformer architecture although it does resemble one in some of the configurations the authors use. Given that ADM is by the same authors and was quickly followed up by GLIDE, I think it is fair to consider GLIDE as the first of its kind.
* **Application:** Text to image
* **Date (of first known publication):** 12/2021
* **Num. Params:3.5B diffusion model (2.3B for visual encoding, 1.2B for textual) + 1.5B for model for upsampling
* **Corpus:** Same as DALL-E
* **Lab:** OpenAI
#### 2.5.26 Global Context ViT
Reference:47[35]
Footnote 47: [https://arxiv.org/abs/2112.10741](https://arxiv.org/abs/2112.10741)
Family: ViT
**Pretraining Architecture:** Encoder
* **Pretraining Task:** Image classification
* **Extension:** hierarchical ViT architecture consisting of local and global self-attention modules
* **Application:** Image generation
* **Date (of first known publication):** 06/2022
* **Num. Params:90M
* **Corpus:** Imagenet-1K and other task dependent dataaets
* **Lab:** NVidia
#### 2.5.27 Gopher
Reference:48[36]
Footnote 48: [https://www.deepmind.com/blog/language-modelling-at-scale-gopher-ethical-considerations-and-retrieval](https://www.deepmind.com/blog/language-modelling-at-scale-gopher-ethical-considerations-and-retrieval)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Same as GPT-2 but use RSNorm instead of LayerNorm and relative positional encoding rather than absolute
* **Application:** Mostly Language Modeling and NLU, but also extensible like GPT
* **Date (of first known publication):** 12/2021
* **Num. Params:280B
* **Corpus:** Massive Text (2.35 billion documents, or about 10.5 TB of text including Massive Web, Books, Github, News, C4, and Wikipedia.
#### 2.5.28 GopherCite
* **Reference:49[37]**
Footnote 49: [https://arxiv.org/abs/2203.11147](https://arxiv.org/abs/2203.11147)
Family: GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** GopherCite is based on Gopher but adds a step using RLHP (Reinforcement Learning from Human Preferences) to learn whether not only a response is plausible but also supported
* **Application:** Dialog systems, Q&A, general language generation tasks
* **Date (of first known publication):** 03/2022
* **Num. Params:**280B
* **Corpus:** Same as Gopher plus specific dataset generated in the RLHP process
* **Lab:** Deepmind
#### 2.5.29 Gpt
Reference:50[38]
Footnote 50: [https://huggingface.co/docs/transformers/model_doc/openai-gpt](https://huggingface.co/docs/transformers/model_doc/openai-gpt)
Family: GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:**
* **Application:** Text generation, but adaptable to many other NLP tasks when fine tuned.
Date (of first known publication):** 06/2018
* **Num. Params:**117M
* **Corpus:** Unsupervised Pretraining on BookCorpus dataset. Supervised Finetuning on several task-specific datasets including SNLI, RACE, Quora....
* **Lab:** OpenAI
#### 2.5.30 Gpt-2
* **Reference:51[39]**
Footnote 51: [https://huggingface.co/docs/transformers/model_doc/gpt2](https://huggingface.co/docs/transformers/model_doc/gpt2)
Family: GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Minor extensions to the GPT architecture (e.g. layer normalization moved to the input of each sub-layer, or increased context size from 512 to 1024)
* **Application:** Text generation, but adaptable to many other NLP tasks when fine tuned.
* **Date (of first known publication):** 02/2019
* **Num. Params:**1.5B
* **Corpus:** 8 million web pages (40 GB). 10X GPT. WebText dataset is created by crawling all links at Reddit with at least 3 Karma points.
* **Lab:** OpenAI
#### 2.5.31 Gpt-3
* **Reference:52[8]**
Footnote 52: [https://github.com/openai/gpt-3](https://github.com/openai/gpt-3)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Same as GPT-2 with the only addition of alternating dense and locally banded sparse attention patterns, inspired by the Sparse Transformer
* **Application:** Initially text generation, but has over time been used for a large range of applications in areas such as code generation, but also image and audio generation
* **Date (of first known publication):** 05/2020
* **Num. Params:**175 B
* **Corpus:** 500B tokens including CommonCrawl (410B), WebText2 (19B), Books1 (12B), Books2 (55B), and Wikipedia (3B)
* **Lab:** OpenAI
#### 2.5.32 Gpt-3.5
* **Reference:53**
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** The GPT3.5 series includes a number of models like Davinci-003. They are basically versions of the InstructGPT model. See [here]([https://scale.com/blog/gpt-3-davinci-003-comparison](https://scale.com/blog/gpt-3-davinci-003-comparison)) for details on the comparison of the performance to older GPT3 models.
* **Application:** Dialog and general language, but there is a code specific model too
* **Date (of first known publication):** 10/2022
* **Num. Params:**175B
* **Corpus:** Same as InstructGPT
* **Lab:** OpenAI
#### 2.5.33 InstructGPT
* **Reference:54[40]**
Footnote 54: [https://openai.com/blog/instruction-following/](https://openai.com/blog/instruction-following/)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** GPTInstruct starts off with a pretrained GPT3 model and adds reward modeling through reinforcement learning after a supervised finetuning
* **Application:** Knowledge-intensive dialog or language tasks
* **Date (of first known publication):** 01/2022
* **Num. Params:**Same as GPT3
* **Corpus:** Same as GPT3 for pretraining, but finetuned and optimized using labeler data and prompts
* **Lab:** OpenAI
#### 2.5.34 GPT-Neo
* **Reference:**55 Footnote 55: [https://huggingface.co/docs/transformers/model_doc/gpt_neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Similar to GPT-2 but uses local attention in every other layer with a window size of 256 tokens
* **Application:** Text generation, but adaptable to many other NLP tasks when fine tuned
* **Date (of first known publication):** 03/2021
* **Num. Params:** 5B, 2.7B (XL)
* **Corpus:** File -- 840 GB open source text dataset that combines 22 pre existing datasets
* **Lab:** EleutherAI
#### 2.5.35 GPT-NeoX-20B
* **Reference:**56 Footnote 56: [https://arxiv.org/abs/2204.06745](https://arxiv.org/abs/2204.06745)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Similar to GPT-3 with rotary encoders instead of positional, parallel attention and feed forward layers, different initialization, and all dense layers instead of alternate dense/sparse
* **Application:** same as GPT-3
* **Date (of first known publication):** 04/2022
* **Num. Params:**20B
* **Corpus:** File -- 840 GB open source text dataset that combines 22 pre existing datasets
* **Lab:** EleutherAI
#### 2.5.36 HTML
* **Reference:**57 Footnote 57: [https://arxiv.org/abs/2107.06955](https://arxiv.org/abs/2107.06955)
* **Family:** BART
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** DAE
* **Extension:** As opposed to BART, they don't do sentence shuffling
* **Application:** General purpose language model that allows structured HTML prompting
* **Date (of first known publication):** 07/2021
* **Num. Params:**400M
* **Corpus:** 23TB of simplified HTML extracted from CommonCrawl
* **Lab:** Facebook
#### 2.5.37 Imagen
* **Reference:**58 Footnote 58: [https://imagen.research.google/](https://imagen.research.google/)
* **Family:** T5, CLIP, Diffusion models
* **Pretraining Architecture:** T5 (or CLIP or BERT) for frozen text encoder + U-net architecture for cascaded diffusion models for text to image
* **Pretraining Task:** image/text pair prediction
* **Extension:** Imagen adds a few extensions to the U-net diffusion architecture (pooled embedding vector, cross attention over text embeddings, and Layer Normalizations)
* **Application:** Text to image
* **Date (of first known publication):** 06/2022
* **Num. Params:2B
* **Corpus:** a combination of internal datasets, with 460M image-text pairs, and the publicly available Laino dataset, with 400M image-text pairs
* **Lab:** Google
#### 2.5.38 Jurassic-1
* **Reference:59[44]**
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Very similar to GPT-3, but far more parameters and improved training efficiency mostly because of the improved tokenizer. Also, different ratio of depth to breadth
* **Application:** Similar to GPT-3
* **Date (of first known publication):** 09/2021
* **Num. Params:178B (Jumbo), 7.5B (Large)**
* **Corpus:** 300B tokens (same as GPT-3)
* **Lab:** AI21
#### 2.5.39 LAMDA
* **Reference:60[45]**
Footnote 60: [https://ui.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html](https://ui.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html)
* **Fear:** Transformer
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** LAMDA focuses on how to improve safety, quality, and groundeness using different fine-tuning strategies
* **Application:** General language modeling
* **Date (of first known publication):** 01/2022
* **Num. Params:137B
* **Corpus:** 1.56T words from public dialog data and other public web documents
* **Lab:** Google
#### 2.5.40 mBART
* **Reference:61[46]**
Footnote 61: [https://huggingface.co/docs/transformers/model_doc/mbart](https://huggingface.co/docs/transformers/model_doc/mbart)
* **Family:** BART
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** DAE
* **Extension:**
* **Application:** Translation
* **Date (of first known publication):** 01/2020
* **Num. Params:** Same as BART
* **Corpus:**
* **Lab:** CC25 Corpus includes 25 monolingual corpuses in different languages. Largest corpuses are English (300 GB) and Russian (280GB)
#### 2.5.41 Megatron
* **Reference:**62[47]
Footnote 62: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Family:** GPT/BERT/T5
* **Pretraining Architecture:** Encoder or Decorder, depending on the base model
* **Pretraining Task:** Same as base model
* **Extension:** Megatron is a family of models that extend previously known architectures (namely GPT-2 and BERT originally, but also T5 more recently) by introducing model parallelism primitives. In the case of BERT, the authors also replace the next sentence prediction head with sentence order prediction and use whole word n-gram masking.
* **Application:** Same as base model
* **Date (of first known publication):** 03/2020
* **Num. Params:** 8.3B (GPT-like), 3.9B (BERT-like)
* **Corpus:** Original paper uses an aggregate dataset consisting of Wikipedia), CC-Stories), RealNews, and OpenWebtext
* **Lab:** NVidia
#### 2.5.42 Minerva
* **Reference:**63[48]
Footnote 63: [https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html)
* **Family:** PaLM
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Extends PaLM by fine-tuning on the mathematical dataset
* **Application:** Mathematical reasoning
* **Date (of first known publication):** 06/2022
* **Num. Params:**540B
* **Corpus:** Same as PaLM + 118GB dataset of scientific papers from the arXiv preprint server and web pages that contain mathematical expressions using LaTeX, MathJax, or other mathematical typesetting formats
* **Lab:** Google
#### 2.5.43 MT-NLG (Megatron TouringNLG)
* **Reference:**64[49]
Footnote 64: [https://developer.nvidia.com/blog/using-deepsped-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-lax-2022](https://developer.nvidia.com/blog/using-deepsped-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-lax-2022)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Uses parallelization similar to Megatron to train a LM double the size of GPT-3
* **Application:** Language generation and others (similar to GPT-3)
* **Date (of first known publication):** 10/2021
* **Num. Params:**530B
* **Corpus:** The Pile65 (800GB dataset) + 2 Common Crawl snapshots
* **Lab:** NVidia
Footnote 65: [https://arxiv.org/abs/2101.00027](https://arxiv.org/abs/2101.00027)
#### 2.5.44 Opt
* **Reference:66[50]**
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Basically same architecture as GPT-3 but with some training improvements introduced in Megatron-LM
* **Application:** Same as GPT-3
* **Date (of first known publication):** 05/2022
* **Num. Params:** 175B (and other smaller versions)
* **Corpus:** 180B tokens = RoBERTa + the Pile + PushShift.io Reddit
* **Lab:** Facebook
#### 2.5.45 PalM
* **Reference:67[51]**
Footnote 67: [https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html)
* **Family:** Transformer
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Palm uses a typical decoder-only transformer architecture, but adds quite a few extensions: SwiGLU activations, parallel layers, multi-query attention, RoPE embeddings, Shared Input-Output Embeddings, no biases, and a 256k SentencePiece vocabulary generated from the training data.
* **Application:** PalM is designed as a general purpose language model with applicability to hundreds of different language tasks
* **Date (of first known publication):** 04/2022
* **Num. Params:**540B
* **Corpus:** 780B tokens from filtered webpages, books, Wikipedia, news articles, source code, and social media conversations. Code includes 24 programming languages.
* **Lab:** Google
#### 2.5.46 Pegasus
* **Reference:68[52]**
Footnote 68: [https://huggingface.co/docs/transformers/model_doc/pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)
* **Family:** Transformer
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** DAE (more concretely GSG) and MLM
* **Extension:** Extends vanilla Transformer by using a different pretraining task (GSG: Gap Sentence Generation) that is better suited for summarization
* **Application:** Summarization
* **Date (of first known publication):** 12/2019
* **Num. Params:** Base = 223M, Large = 568M
* **Corpus:** C4 (750GB) + HugeNews (3.8 TB)
* **Lab:** UCL/Google
#### 2.5.47 RoBERTa
* **Reference:**69[53]
Footnote 69: [https://huggingface.co/docs/transformers/model_doc/roberta](https://huggingface.co/docs/transformers/model_doc/roberta)
Family: BERT
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** MLM (Dynamic)
* **Extension:** Extension of BERT with optimized training procedure and more data
* **Application:** Same as BERT
* **Date (of first known publication):** 07/2019
* **Num. Params:** 356M
* **Corpus:** Same as BERT + CC News + OpenWebText + Stories ( 33B Tokens)
Lab: UW/Google
#### 2.5.48 SeeKer
* **Reference:**70[54]
Footnote 70: [https://arxiv.org/abs/2209.14375](https://arxiv.org/abs/2209.14375)
* **Family:** GPT (but can extend any family)
* **Pretraining Architecture:** Encoder/decoder or decoder only, depending on the base model it's extending
* **Pretraining Task:** Depends on the base model
* **Extension:** SeeKer is an extension that can be applied to any Transformer architecture by introducing "search", "knowledge", and "response" modules that are introduced during pretraining
* **Application:** Same as base models
* **Date (of first known publication):** 03/2022
* **Num. Params:**Depends on the base model
* **Corpus:** Same as base model
* **Lab:** Facebook
#### 2.5.49 Sparrow
* **Reference:**71[55]
Footnote 71: [https://arxiv.org/abs/2209.14375](https://arxiv.org/abs/2209.14375)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Starts from the Chinchilla 70B model but adds RLHF (Reinforcement Learning with Human Feedback). It also adds inline evidence a la GopherCite
* **Application:** Dialog agents and general language generation applications like Q&A
* **Date (of first known publication):** 09/2022
* **Num. Params:** 70B
* **Corpus:** Same as Chinchilla + interactive data gathering with human annotators during the RLHF process
* **Lab:** Deepmind
#### 2.5.50 StableDiffusion
* **Reference:72[56]**
Footnote 72: [https://huggingface.co/CompVis/stable-diffusion](https://huggingface.co/CompVis/stable-diffusion)
* **Family:** Diffusion
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** Caption prediction
* **Extension:** Stable diffusion is basically the Latent Diffusion model developed by LMU Munich researchers + some learnings on conditional diffusion from DALL-e and Imagen
* **Application:** Text to image
* **Date (of first known publication):** 12/2021
* **Num. Params:** 890M (although there are different, smaller, variants)
* **Corpus:** LAION-5B, a publicly available dataset derived from Common Crawl
* **Lab:** LMU Munich + Stability.ai + Eleuther.ai
#### 2.5.51 Swin Transformer
* **Reference:73[57]**
Footnote 73: [https://github.com/microsoft/Swin-Transformer](https://github.com/microsoft/Swin-Transformer)
* **Family:** ViT
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** Same as ViT
* **Extension:** Extends ViT by replacing the standard multi-head self attention (MSA) module by a module based on shifted windows (Swin) allowing ViT-like architectures to generalize to higher resolution images
* **Application:** Image (object detection, image classification..)
* **Date (of first known publication):** 03/2021
* **Num. Params:** 29M-197M
* **Corpus:** Imagenet and Imagenet-22k
* **Lab:** Facebook
#### 2.5.52 Switch
* **Reference:74[58]**
Footnote 74: [https://arxiv.org/abs/2101.03961](https://arxiv.org/abs/2101.03961)
* **Family:** T5
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** DAE
* **Extension:** Goal to increase parameter count while keeping FLOP operations constant by using efficient routing of MoE (Mixture of Experts)
* **Application:** General language tasks (e.g. question answering)
* **Date (of first known publication):** 01/2021
* **Num. Params:** 1T
* **Corpus:** Colossal Clean Crawled Corpus
* **Lab:** Google
#### 2.5.53 T5
* **Reference:75[59]**
Footnote 75: [https://huggingface.co/docs/transformers/model_doc/t5](https://huggingface.co/docs/transformers/model_doc/t5)
* **Family:**
* **Pretraining Architecture:** Encoder/Decoder
* **Pretraining Task:** DAE
* **Extension:** Same as original Transformer with some additions such as relative positional embeddings like Transformer XL
* **Application:** General language tasks including machine translation, question answering, abstractive summarization, and text classification
* **Date (of first known publication):** 10/2019
* **Num. Params:** 11 B (up to)
* **Corpus:** Colossal Clean Crawled Corpus (C4) -- Cleaned up version of the Common Crawl dataset -- 750 GB
* **Lab:** Google
#### 2.5.54 Trajectory Transformers
* **Reference:76[60]**
Footnote 76: [https://arxiv.org/abs/2106.02039](https://arxiv.org/abs/2106.02039)
* **Family:** GPT, Control Transformers" (not per se a family, but grouping here those transformers that try to model more general control, RL-like, tasks)
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** predict most likely sequence
* **Extension:** Similarly to the Decision transformers, the main extension introduced by Trajectory Transformers is a way to encode a trajectory (state, actions, rewards)
* **Application:** General RL (reinforcement learning tasks)
* **Date (of first known publication):** 06/2021
* **Num. Params:** Smaller architecture than GPT
* **Corpus:** D4RL dataset and other RL datasets depending on the task at hand
* **Lab:** UC Berkeley
#### 2.5.55 Transformer XL
* **Reference:77[61]**
Footnote 77: [https://huggingface.co/docs/transformers/model_doc/transfo-xl](https://huggingface.co/docs/transformers/model_doc/transfo-xl)
* **Family:**
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Relative positioned embeddings enable longer-context attention when compared to vanilla Transformer model
* **Application:** General language tasks
* **Date (of first known publication):** 01/2019
* **Num. Params:** 151M
* **Corpus:** Different training datasets depending on experiments, but baseline is Wikitext-103
* **Lab:** CMU/Google
#### 2.5.56 Turing-NLG
* **Reference:78[62]**
Footnote 78: [https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/)
* **Family:** GPT
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** LM
* **Extension:** Optimized version of GPT2 with optimal hyperparameters and software/hardware platform to improve training
* **Application:** Same as GPT-2/3
* **Date (of first known publication):** 02/2020
* **Num. Params:** 17B originally, up to 530B more recently
* **Corpus:** Highest quality subset from The Pile + 2 CC snapshots (339B tokens)
* **Lab:** Microsoft
#### 2.5.57 ViT
* **Reference:79[63]**
* **Family:** BERT
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** Image classification
* **Extension:** Extension of BERT architecture to train on patches of images
* **Application:** Image classification
* **Date (of first known publication):** 10/2020
* **Num. Params:** 86M(Base) to 632M (Huge)
* **Corpus:** From standard Imagenet to JFT-300M (large inhouse dataset)
* **Lab:** Google
#### 2.5.58 Wu Dao 2.0
* **Reference:80**
Footnote 80: [https://en.wikipedia.org/wiki/Wu_Dao](https://en.wikipedia.org/wiki/Wu_Dao)
* **Family:** GLM (General Language Model)
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** Autoregressive blank infilling
* **Extension:** Similar to GPT in that it uses a Decoder/autoregressive architecture but applies a different pretraining task proposed in the GLM family of models. Besides, Wu Dao uses a Fast Mixture of Experts [https://github.com/laekov/fastmoe](https://github.com/laekov/fastmoe)) approach to scale training to trillions of parameters
* **Application:** Language and multimodal (particularly image)
* **Date (of first known publication):** 06/2021
* **Num. Params:** 1.75T
* **Corpus:**?
* **Lab:** Beijing Academy of Artificial Intelligence
#### 2.5.59 Xlm-RoBERTa
* **Reference:**81[64] Footnote 81: [https://huggingface.co/docs/transformers/model_doc/xlm-roberta](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)
* **Family:** RoBERTa
* **Pretraining Architecture:** Encoder
* **Pretraining Task:** MLM (Dynamic)
* **Extension:** An extension of RoBERTa that introduces small parameter tuning insights in the context of multilingual applications
* **Application:** Translation and other cross-lingual language tasks
* **Date (of first known publication):** 10/2019
* **Num. Params:** Base = 270M, Large = 550M
* **Corpus:** Cleaned Common Crawl in 100 languages
* **Lab:** Facebook
#### 2.5.60 XlNet
* **Reference:**82[65]
* **Family:** Transformer XL
* **Pretraining Architecture:** Decoder
* **Pretraining Task:** PLM
* **Extension:** This model basically adapts Transformer XL architecture to permutation-based LM
* **Application:** General language tasks
* **Date (of first known publication):** 05/2019
* **Num. Params:** Base=117M, Large=360M
* **Corpus:** Same as BERT + Giga5 (16GB text) + and aggressively filtered ClueWeb 2012-B (19GB), Common Crawl (110 GB)
* **Lab:** CMU/Google
## 3 Further reading
Most of the following references have already been mentioned in the post. However, it is worth listing them here in case you need more details:
* The Huggingface Transformers documentation83 and course is extremely good and comprehensive. I have used myself in this post, and I can't recommend enough as a natural follow up to what you will find here. Footnote 83: [https://huggingface.co/course/chapter1/17fw=pt](https://huggingface.co/course/chapter1/17fw=pt)
* A survey of transformers84[66] includes a 40 page long survey wit over 170 references and a full blown taxonomy. Footnote 84: [https://arxiv.org/abs/2106.04554](https://arxiv.org/abs/2106.04554)
* Pre-trained Models for Natural Language Processing: A Survey[10] is also a very comprehensive survey that includes many of the pretrained models with a particular focus on NLP
|
2307.02096 | Adaptive multi-stage integration schemes for Hamiltonian Monte Carlo | Hamiltonian Monte Carlo (HMC) is a powerful tool for Bayesian statistical
inference due to its potential to rapidly explore high dimensional state space,
avoiding the random walk behavior typical of many Markov Chain Monte Carlo
samplers. The proper choice of the integrator of the Hamiltonian dynamics is
key to the efficiency of HMC. It is becoming increasingly clear that
multi-stage splitting integrators are a good alternative to the Verlet method,
traditionally used in HMC. Here we propose a principled way of finding optimal,
problem-specific integration schemes (in terms of the best conservation of
energy for harmonic forces/Gaussian targets) within the families of 2- and
3-stage splitting integrators. The method, which we call Adaptive Integration
Approach for statistics, or s-AIA, uses a multivariate Gaussian model and
simulation data obtained at the HMC burn-in stage to identify a system-specific
dimensional stability interval and assigns the most appropriate 2-/3-stage
integrator for any user-chosen simulation step size within that interval. s-AIA
has been implemented in the in-house software package HaiCS without introducing
computational overheads in the simulations. The efficiency of the s-AIA
integrators and their impact on the HMC accuracy, sampling performance and
convergence are discussed in comparison with known fixed-parameter multi-stage
splitting integrators (including Verlet). Numerical experiments on well-known
statistical models show that the adaptive schemes reach the best possible
performance within the family of 2-, 3-stage splitting schemes. | Lorenzo Nagar, Mario Fernández-Pendás, Jesús María Sanz-Serna, Elena Akhmatskaya | 2023-07-05T08:16:36Z | http://arxiv.org/abs/2307.02096v3 | # Adaptive multi-stage integration schemes for Hamiltonian Monte Carlo
###### Abstract
Hamiltonian Monte Carlo (HMC) is a powerful tool for Bayesian statistical inference due to its potential to rapidly explore high dimensional state space, avoiding the random walk behavior typical of many Markov Chain Monte Carlo samplers. The proper choice of the integrator of the Hamiltonian dynamics is key to the efficiency of HMC. It is becoming increasingly clear that multi-stage splitting integrators are a good alternative to the Verlet method, traditionally used in HMC. Here we propose a principled way of finding optimal, problem-specific integration schemes (in terms of the best conservation of energy for harmonic forces/Gaussian targets) within the families of 2- and 3-stage splitting integrators. The method, which we call Adaptive Integration Approach for statistics, or s-AIA, uses a multivariate Gaussian model and simulation data obtained at the HMC burn-in stage to identify a system-specific dimensional stability interval and assigns the most appropriate 2-/3-stage integrator for any user-chosen simulation step size within that interval. s-AIA has been implemented in the in-house software package HaiCS without introducing computational overheads in the simulations. The efficiency of the s-AIA integrators and their impact on the HMC accuracy, sampling performance and convergence are discussed in compari
son with known fixed-parameter multi-stage splitting integrators (including Verlet). Numerical experiments on well-known statistical models show that the adaptive schemes reach the best possible performance within the family of 2-, 3-stage splitting schemes.
keywords: Hamiltonian Monte Carlo, Multi-stage integrators, Adaptive integration, Bayesian inference, Stability limit, Velocity Verlet +
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
First introduced for lattice field theory simulations [1], Hamiltonian Monte Carlo (HMC) is nowadays recognized as a popular and efficient tool for applications in Bayesian statistical inference [2].
Using gradient information on the posterior distribution, HMC reduces random walk behavior typical of many conventional Markov Chain Monte Carlo (MCMC) samplers and makes it possible to sample high dimensional and complex distributions more efficiently than simpler MCMC algorithms. The use of Hamiltonian dynamics makes HMC able to perform large moves while keeping high acceptance rates, thus lowering the correlation between samples, provided that an accurate symplectic integrator is in use [3; 4]. On the other hand, known drawbacks of HMC are the computational cost deriving from the evaluation of gradients and the strong dependence of the performance on the choice of the parameters in the algorithm. Many variants of HMC have been proposed in the literature during the last decades (see [5] for an advanced list of HMC methods in computational statistics and physical sciences).
Numerical integration of the Hamiltonian equations of motion is crucial for HMC, since its accuracy and efficiency strongly affect the overall performance of the method. Velocity Verlet [6; 7] is currently the method of choice owing to its simplicity, optimal stability properties and computational efficiency. Recently proposed multi-stage splitting integrators have shown promising performance in HMC for statistical and molecular simulation applications [8; 9; 10]. Such integrators are as easy to implement as Verlet schemes due to their kick-drift structure. However, they possess shorter stability intervals than corresponding multi-stage Verlet algorithms [8].
The Adaptive Integration Approach (AIA) [11] for HMC and its extensions MAIA and e-MAIA for Modified HMC (MHMC) methods [12] offer an intelligent (system- and step size-specific) choice of the most appropriate
2-stage integrator in terms of the best conservation of energy for harmonic forces. They have been formulated and implemented for molecular simulation applications and demonstrated an improvement in accuracy, stability and sampling efficiency compared with the fixed-parameter 1-, 2-stage numerical integrators (including the standard Verlet) when used in simulations of complex physical systems [11; 12; 13; 14; 15; 16].
In this paper, we propose an Adaptive Integration Approach for statistics, that we call s-AIA, which extends the ideas of the original AIA to Bayesian statistical inference applications. The method employs a theoretical analysis of the multivariate Gaussian model and simulation data obtained at the HMC burn-in stage to identify a system-specific dimensional stability interval and assigns the most appropriate 2-, 3-stage integrator at any user-chosen simulation step size within that interval. To construct s-AIA, we address the difficulties encountered by the extension to the computational statistics scenario of the assumptions typical of molecular simulation applications made in AIA -- such as dominating harmonic forces, known angular frequencies and resonance conditions, nonrandomized integration step size. The proposed algorithm does not add computational overheads during a simulation.
We have implemented s-AIA in the in-house software HaiCS (Hamiltonians in Computational Statistics) [5; 17] and tested its efficiency and impact on the HMC accuracy, sampling performance and convergence in comparison with known fixed-parameter multi-stage splitting integrators for HMC-based methods (including Velocity Verlet). The numerical experiments have been performed on representative benchmarks and datasets of popular statistical models.
The paper is structured as follows. We briefly review HMC in Section 2 and multi-stage integrators in Section 3. The s-AIA algorithm and its implementation are presented in Section 4. Validation and testing of the new algorithm are described and discussed in Section 5. Our conclusions are summarized in Section 6.
## 2 Hamiltonian Monte Carlo
Hamiltonian Monte Carlo (HMC) is a Markov Chain Monte Carlo (MCMC) method for obtaining correlated samples \(\mathbf{\theta}_{i}\sim\pi(\mathbf{\theta})\) from a target probability distribution \(\pi(\mathbf{\theta})\) in \(\mathbb{R}^{D}\) by generating a Markov chain in the joint phase space \(\mathbb{R}^{D}\times\mathbb{R}^{D}\) with invariant distribution
\[\pi(\mathbf{\theta},\mathbf{p})=\pi(\mathbf{\theta})p(\mathbf{p})\propto\exp(-H(\mathbf{\theta},\mathbf{p} )). \tag{1}\]
Here
\[H(\mathbf{\theta},\mathbf{p})=K(\mathbf{p})+U(\mathbf{\theta})=\frac{1}{2}\mathbf{p}^{T}M^{-1}\mathbf{p} +U(\mathbf{\theta}) \tag{2}\]
is the Hamiltonian function, where the potential energy \(U(\mathbf{\theta})\) is related to the target \(\pi(\mathbf{\theta})\) by means of
\[U(\mathbf{\theta})=-\log\pi(\mathbf{\theta})+\text{const}\,,\]
and the kinetic energy \(K(\mathbf{p})\) is explicited through an auxiliary momentum variable \(\mathbf{p}\) drawn from the normal distribution \(\mathcal{N}(0,M)\), with \(M\) being a symmetric positive definite matrix (the mass matrix).
HMC alternates momentum update steps, where a sample of \(\mathbf{p}\) is drawn from the distribution \(\mathcal{N}(0,M)\), with steps where both position \(\mathbf{\theta}\) and momenta \(\mathbf{p}\) are updated through the numerical integration of the Hamiltonian dynamics
\[\frac{d\mathbf{\theta}}{dt}=M^{-1}\mathbf{p},\qquad\frac{d\mathbf{p}}{dt}=-\nabla_{\theta }U(\mathbf{\theta}). \tag{3}\]
The latter is performed using a symplectic and reversible integrator. If \(\Psi_{h}\) is the map in phase space that advances the numerical solution over a step size of length \(h\), symplecticness means [3]
\[\Psi_{h}^{\prime}(\mathbf{z})^{T}J^{-1}\Psi_{h}^{\prime}(\mathbf{z})=J^{-1},\quad \forall\mathbf{z}\in\Omega,\,\forall h>0,\]
where \(\Psi_{h}^{\prime}\) is the Jacobian matrix of \(\Psi_{h}\) and
\[J=\begin{pmatrix}0&I\\ -I&0\end{pmatrix},\]
with \(I\) the \(D\times D\) unit matrix. Reversibility demands \(\Psi_{h}\circ\mathcal{F}=\left(\Psi_{h}\circ\mathcal{F}\right)^{-1},\) where \(\mathcal{F}(\mathbf{\theta},\mathbf{p})=(\mathbf{\theta},-\mathbf{p})\) is the _momentum flip_ map. Given the state of the Markov chain \((\mathbf{\theta}_{i},\mathbf{p}_{i})\) at the beginning of the \(i\)-th iteration, a proposal \((\mathbf{\theta}^{\prime},\mathbf{p}^{\prime})\) is obtained by integrating the Hamiltonian equations of motion for \(L\) steps using \(\Psi_{h}\), i.e.
\[(\mathbf{\theta}^{\prime},\mathbf{p}^{\prime})=\underbrace{\Psi_{h}\circ...\circ\Psi _{h}}_{L\text{ times}}(\mathbf{\theta}_{i},\mathbf{p}_{i}). \tag{4}\]
Due to numerical integration errors, the Hamiltonian energy and thus the target density (1) are not exactly preserved. The invariance of the target density is ensured through a Metropolis test with acceptance probability
\[\alpha=\min\{1,\exp(-\Delta H)\},\]
where \(\Delta H=H(\mathbf{\theta}^{\prime},\mathbf{p}^{\prime})-H(\mathbf{\theta}_{i},\mathbf{p}_{i})\) is the energy error resulting from the numerical integration. In case of acceptance, \(\mathbf{\theta}^{\prime}\) is the starting point for the following iteration, i.e. \(\mathbf{\theta}_{i+1}=\mathbf{\theta}^{\prime}\), whereas in case of rejection, the initial proposal \(\mathbf{\theta}_{i}\) is kept for the following iteration, i.e. \(\mathbf{\theta}_{i+1}=\mathbf{\theta}_{i}\). In both cases, the momentum is discarded and a new momentum \(\mathbf{p}_{i+1}\) is drawn from its Gaussian distribution.
### Splitting
The integration of the Hamiltonian dynamics in HMC is always performed by resorting to the idea of splitting. The split systems
\[\text{(A)}\quad\frac{d\mathbf{\theta}}{dt} =\nabla_{p}K(\mathbf{p})=M^{-1}\mathbf{p}, \frac{d\mathbf{p}}{dt} =-\nabla_{\theta}K(\mathbf{p})=0,\] \[\text{(B)}\quad\frac{d\mathbf{\theta}}{dt} =\nabla_{p}U(\mathbf{\theta})=0, \frac{d\mathbf{p}}{dt} =-\nabla_{\theta}U(\mathbf{\theta}),\]
have solution flows \(\varphi_{t}^{A}\) and \(\varphi_{t}^{B}\) explicitly given by
\[\varphi_{t}^{A}(\mathbf{\theta},\mathbf{p})=(\mathbf{\theta}+tM^{-1}\mathbf{p},\mathbf{p}),\qquad \varphi_{t}^{B}(\mathbf{\theta},\mathbf{p})=(\mathbf{\theta},\mathbf{p}-t\nabla_{\theta}U( \mathbf{\theta})); \tag{5}\]
these flows are often called a position _drift_ and a momentum _kick_ respectively. The integration of the target dynamics (3) is carried out by combining drifts and kicks. The best known algorithm is the Velocity Verlet integrator [6; 7]
\[\mathbf{p} \leftarrow\mathbf{p}-\frac{h}{2}\nabla_{\theta}U(\mathbf{\theta}),\] \[\mathbf{\theta} \leftarrow\mathbf{\theta}+hM^{-1}\mathbf{p},\] \[\mathbf{p} \leftarrow\mathbf{p}-\frac{h}{2}\nabla_{\theta}U(\mathbf{\theta}). \tag{6}\]
With the notation in (5), the algorithm may be written as
\[\Psi_{h}^{\text{VV}}=\varphi_{\frac{h}{2}}^{B}\circ\varphi_{h}^{A}\circ\varphi _{\frac{h}{2}}^{B}. \tag{7}\]
As before, \(h\) is the length of an integration step, i.e. step size. By switching the roles of \(A\) and \(B\) in (7) one obtains the Position Verlet algorithm [18], whose performance is often worse than that of the velocity scheme [4].
More general splitting integration schemes [4; 19] that alternate position drifts and momentum kicks will be reviewed in Section 3.
### Advantages and limitations of HMC
By suitably choosing the time span \(Lh\) of the numerical integration (cf. (4)), HMC offers the possibility of generating proposals that are sufficiently far from the current state of the Markov chain. At the same time, for fixed \(Lh\), one may always reduce \(h\) and increase \(L\) to achieve a more accurate numerical integration and therefore an arbitrarily high acceptance rate. Thus HMC is in principle able to generate samples with low correlation and to explore rapidly the state space, even if the dimensionality is high, avoiding in this way the random walk behaviour of simpler MCMC algorithms. Unfortunately, it is well known that in practice the performance of HMC very much depends on the choice of the parameters \(h\) and \(L\).
Since most of the computational effort in HMC goes in the (often extremely costly) evaluations of the gradient \(\nabla U(\mathbf{\theta})\) required by the integrator, and the acceptance rate depends on the numerical integration error, the choice of the integration method is key to the efficiency of the HMC algorithm.
## 3 Multi-stage integrators and adaptive approach
In this Section, we review multi-stage palindromic splitting integrators, which have demonstrated promising performance in HMC for both statistical and molecular simulation applications [8; 9; 10; 11; 12; 13; 14; 15].
### k-stage palindromic splitting integrators
The family of palindromic \(k\)-stage splitting integrators with \(k-1\) free parameters is defined as [4]
\[\Psi_{h}=\varphi^{B}_{b_{1}h}\circ\varphi^{A}_{a_{1}h}\circ\cdots\circ\varphi^ {A}_{a_{k^{\prime}}h}\circ\varphi^{B}_{b_{k^{\prime}+1}h}\circ\varphi^{A}_{a_{ k^{\prime}}h}\circ\cdots\circ\varphi^{A}_{a_{1}h}\circ\varphi^{B}_{b_{1}h},\quad b _{i},a_{j}\in\mathbb{R}^{+}, \tag{8}\]
if \(k=2k^{\prime}\) is even, and
\[\Psi_{h}=\varphi^{B}_{b_{1}h}\circ\varphi^{A}_{a_{1}h}\circ\cdots\circ\varphi^ {B}_{b_{k^{\prime}}h}\circ\varphi^{A}_{a_{k^{\prime}}h}\circ\varphi^{B}_{b_{k ^{\prime}}h}\circ\cdots\varphi^{A}_{a_{1}h}\circ\varphi^{B}_{b_{1}h},\quad b _{i},a_{j}\in\mathbb{R}^{+}, \tag{9}\]
if \(k=2k^{\prime}-1\) is odd. The coefficients \(b_{i}\), \(a_{j}\) in (8)-(9) have to satisfy the conditions \(2\sum_{i=1}^{k^{\prime}}b_{i}+b_{k^{\prime}+1}=2\sum_{j=1}^{k^{\prime}}a_{j}=1\), and \(2\sum_{i=1}^{k^{\prime}}b_{i}=2\sum_{j=1}^{k^{\prime}-1}a_{j}+a_{k^{\prime}}=1\), respectively. The integrators (8) and (9) are symplectic as compositions of flows of Hamiltonian systems, and reversible, due to their palindromic structure. The number of stages \(k\) is the number of times the algorithm performs
an evaluation of gradients \(\nabla_{\theta}U(\mathbf{\theta})\) per step size. Though \(\varphi^{B}\) appears \(k+1\) times in (8) and (9), the number of gradient evaluations performed is still \(k\) since the (last) one in the leftmost \(\varphi^{B}_{b_{1}h}\) at the current step is reused in the rightmost \(\varphi^{B}_{b_{1}h}\) at the following step. Multi-stage splitting integrators alternate position drifts and momentum kicks of different lengths, which makes all of them, including the most common and popular 1-stage Verlet (7), easy to implement.
As pointed out above, most of the computational effort in HMC is due to evaluations of gradients. Splitting integrators with different numbers of stages do not perform the same number of gradient evaluations per integration step and therefore using those integrators with a common value of \(L\) and \(h\) does not result in fair comparisons (in terms of computational cost). If \(\hat{L}\) is a number of gradient evaluations/time steps suitable for the 1-stage Verlet algorithm with step size \(h\), \(k\)-stage integrators will here be used by taking \(L=\hat{L}/k\) steps of length \(kh\). In this way all algorithms integrate the Hamiltonian dynamics over a time interval of the same length \(\hat{L}h\) and use the same number of gradient evaluations.
### Examples of 2- and 3-stage integrators
We plan to derive adaptive 2- and 3-stage integrators and we first review the examples in the literature of 2- and 3-stage integrators.
The one-parameter family of 2-stage integrators is described as (see (8)):
\[\Psi^{2\text{stage}}_{h}=\varphi^{B}_{bh}\circ\varphi^{A}_{ah}\circ\varphi^{B }_{b_{1}h}\circ\varphi^{A}_{ah}\circ\varphi^{B}_{bh},\]
with \(a=1/2\) and \(b_{1}=1-2b\). Thus the integrators can be written as
\[\Psi^{2\text{stage}}_{h}=\varphi^{B}_{bh}\circ\varphi^{A}_{\frac{1}{2}}\circ \varphi^{B}_{(1-2b)h}\circ\varphi^{A}_{\frac{h}{2}}\circ\varphi^{B}_{bh}, \tag{10}\]
with \(b\in(0,0.5)\) if we wish \(b>0\) and \(b_{1}>0\).
Similarly, (9) with \(k^{\prime}=2\), \(2a+a_{1}=1\) and \(2b+2b_{1}=1\) yields the two-parameter family of 3-stage integrators
\[\Psi^{3\text{stage}}_{h}=\varphi^{B}_{bh}\circ\varphi^{A}_{ah}\circ\varphi^{ B}_{\left(\frac{1}{2}-b\right)h}\circ\varphi^{A}_{(1-2a)h}\circ\varphi^{B}_{ \left(\frac{1}{2}-b\right)h}\circ\varphi^{A}_{ah}\circ\varphi^{B}_{bh}, \tag{11}\]
with \(a,b\in(0,0.5)\).
Several 2- and 3-stage integrators with suitably chosen parameters for achieving high performance in HMC have been proposed in the literature [8; 9; 20; 21]. Some of them are presented below and summarized in Table 1. In
the cited literature, two alternative types of analysis have been carried out in order to choose the integration parameters \(a\) and/or \(b\) in the context of HMC. In [20; 21] or [22], the integration coefficients are determined by minimizing the coefficients in the Taylor expansion of the _Hamiltonian truncation error_[22]
\[\Delta H=H(\boldsymbol{\theta},\boldsymbol{p})-H(\Psi_{h}(\boldsymbol{\theta},\boldsymbol{p})). \tag{12}\]
On the other hand, the paper [8] does not look at the behaviour of the Hamiltonian truncation error as \(h\to 0\), as typically integrators are not operated with small values of \(h\). Their analysis is rather based on a (tight) bound
\[\mathbb{E}[\Delta H]\leq\rho(h,\boldsymbol{z}),\]
for the expected energy error for given \(h\), that may be rigorously proved for Gaussian targets (and has been experimentally shown to be useful for all targets). Here \(\rho\) is a function associated with the integrator and \(\boldsymbol{z}\) represents the coefficients that identify the integrator within a family. For 2-stage palindromic splitting schemes [8]
\[\rho_{2}(h,b)=\frac{h^{4}\left(2b^{2}\left(\tfrac{1}{2}-b\right)h^{2}+4b^{2}-6 b+1\right)^{2}}{8\left(2-bh^{2}\right)\left(2-\left(\tfrac{1}{2}-b\right)h^{2} \right)\left(1-b\left(\tfrac{1}{2}-b\right)h^{2}\right)}. \tag{13}\]
For 3-stage integrators the attention may be restricted to pairs \((b,a)\) that satisfy [9; 23]
\[6ab-2a-b+\frac{1}{2}=0; \tag{14}\]
when this condition is not fulfilled the integrator has poor stability properties. Under this restriction (see A)
\[\rho_{3}(h,b)=\frac{h^{4}\left(-3b^{4}+8b^{3}-19/4b^{2}+b+b^{2}h^{2}\left(b^{3 }-5/4b^{2}+b/2-1/16\right)-1/16\right)^{2}}{2(3b-bh^{2}(b-1/4)-1)\left(1-3b-bh ^{2}(b-1/2)^{2}\right)(-9b^{2}+6b-h^{2}(b^{3}-5/4b^{2}+b/2-1/16)-1)}. \tag{15}\]
The following schemes have been considered in the literature.
* **2-stage Velocity Verlet (VV2).** This is the integrator with the longest stability interval \((0,4)\) among 2-stage splitting schemes and corresponds to \(b=1/4\) in (10). To perform one step of length \(h\) with this algorithm, one just performs two steps of length \(h/2\) of standard Velocity Verlet. It is important to emphasize that it follows that when below we compare experimentally VV2 with alternative integrators, _one
_is really comparing the standard Verlet algorithm with such alternative integrators,_ simply adjusting the step lengths and number of steps per integration leg so as to have a fair comparison.
* **2-stage BCSS (BCSS2).** This scheme was derived in [8] to minimize the maximum of \(\rho_{2}(h,b)\) in (13) as \(h\) ranges over the interval \(0<h<2\) (VV2 is often operated with \(h\) close to 2), i.e. \[b=\operatorname*{arg\,min}_{b\in(0,0.5)}\max_{0<h<2}\rho_{2}(h,b)=0.211781.\] It achieves its best performance when \(h\) is near the center of the stability interval [8, 11, 12, 23].
* **2-stage Minimum Error (ME2).** The coefficient of this integrator (\(b=0.193183\)) was obtained by McLachlan in [20] through the minimization of the Hamiltonian truncation error (12). For quadratic problems, see also [23].
* **3-stage Velocity Verlet (VV3).** Similarly to VV2, the 3-stage Velocity Verlet is a 3-stage integrator with the longest stability interval \((0,6)\) among 3-stage splitting integrators. One step of this algorithm of length \(h\) is just the concatenation of three steps of length \(h/3\) of the standard Velocity Verlet integrator. As we did for VV2, we emphasize that when comparing below VV3 and alternative integrators, one is really comparing the standard VV algorithms.
* **3-stage BCSS (BCSS3).** The parameter values are found by imposing the relation (14) and \[b=\operatorname*{arg\,min}_{b\in(0,0.5)}\max_{0<h<3}\rho_{3}(h,b),\] with \(\rho_{3}\) in (15).
* **3-stage Minimum Error (ME3).** ME3 was derived in [24] by requiring (14) and a Hamiltonian truncation error of size \(\mathcal{O}(h^{6})\).
The performance of the different integrators within HMC very much depends on the simulation parameters, in particular on the choice of step size. Minimum Error schemes achieve their best performance for small step size,
since they are obtained by studying the limit of vanishing step size. However, they have small stability limits and may perform badly for bigger integration step sizes. Velocity Verlet schemes preserve stability for values of the step size larger than those that may be used in other integrators, but may not be competitive in situations where the step size is not chosen on grounds of stability (for instance in problems of large dimensionality where accuracy demands that the step size be small to ensure non-negligible acceptance rates). BCSS integrators were designed for optimizing performance for values of the step size not close to 0 and not close to the maximum stability allowed for Verlet.
### Adaptive Integration Approach (AIA)
Adaptive 2-stage integration schemes were proposed by Fernandez-Pendas et al. in [11] for molecular simulation applications. Its extensions, called MAIA and e-MAIA, for Modified HMC (MHMC) methods, such as Generalized Shadow HMC (GSHMC) methods [25; 26; 27; 28], were introduced by Akhmatskaya et al. in [12].
Given a simulation problem, in AIA, the user chooses, according to their computational budget, the value of \(h\) to be used (i.e. \(h\) is chosen to be smaller if more time and resources are available for the simulation). After that, the AIA algorithm itself finds the most appropriate integration scheme within the family of 2-stage integrators (10). If the time-step is very small for the problem at hand, AIA will automatically pick up a parameter value close to Minimum Error; if the time-step is very large, AIA will automatically choose an integrator close to the 2-stage Velocity Verlet. For intermediate values of \(h\), AIA will choose an intermediate parameter value (near the BCSS integrator). We emphasize that in AIA, the parameter value used changes
\begin{table}
\begin{tabular}{c c c c c} Integrator & N. of stages & Coefficients & Stability interval & References \\ \hline Velocity Verlet & 1 & - & \((0,2)\) & [6; 7] \\
2-stage Velocity Verlet & 2 & \(b=1/4\) & \((0,4)\) & [8] \\
2-stage BCSS & 2 & \(b=0.211781\) & \((0,2.634)\) & [8] \\
2-stage Minimum Error & 2 & \(b=0.193183\) & \((0,2.533)\) & [20; 23] \\
3-stage Velocity Verlet & 3 & \(b=1/6\), \(a=1/3\) & \((0,6)\) & [8; 9] \\
3-stage BCSS & 3 & \(b=0.118880\) & \((0,4.662)\) & [8; 9] \\ & & \(a=0.296195\) & \((0,4.662)\) & [8; 9] \\
3-stage Minimum Error & 3 & \(b=0.108991\) & \((0,4.584)\) & [9; 24] \\ \hline \end{tabular}
\end{table}
Table 1: Multi-stage splitting integrators presented in Section 3.2.
with \(h\) and with the problem being tackled. Given a simulation problem, the Adaptive Integration Approach (AIA) offers, for any integration step size chosen within an appropriate stability interval, an intelligent choice of the most appropriate integration scheme (in terms of the best conservation of energy for harmonic forces) within a family of 2-stage integrators. The original AIA algorithm is summarized in Algorithm 1.
Our objective in this paper is to employ the ideas behind the 2-stage AIA approach for deriving multi-stage adaptive integration schemes specifically addressed to Bayesian inference applications. Taking into account the recent indications of the superiority of 3-stage integrators over 2-stage schemes in statistical applications [23], we plan to develop not only 2-stage adaptive approaches as in AIA but also 3-stage adaptive algorithms. Extending AIA to computational statistics is not straightforward. The potential challenges are discussed in the next Section.
```
1:highest angular frequency \(\tilde{\omega}\) in the problem, dimensional time-step \(\overline{\Delta t}\), safety factor \(S_{f_{\text{AIA}}}=\sqrt{2}\)
2:Calculate dimensionless time-step: \(\overline{h}\gets S_{f_{\text{AIA}}}\tilde{\omega}\overline{\Delta t}\)
3:if\(\overline{h}\geq 4\)then
4: abort - there does not exist an integration coefficient \(b\) for which a 2-stage integrator \(\Psi_{h}^{\text{2stage}}\) in (10) is stable
5:else
6: Find optimal integrator coefficient: \(b_{\text{opt}}\leftarrow\underset{0<b<0.5}{\arg\min}\;\underset{0<h<\overline {h}}{\max}\rho_{2}(h,b)\)
7:endif
8: An integration coefficient \(b_{\text{opt}}\) which determines an optimal 2-stage integrator \(\Psi_{h}^{\text{2stage}}\) in (10) to be used in an HMC simulation of the given physical system with integration time-step \(\overline{\Delta t}\)
```
**Algorithm 1**Adaptive Integration Approach (AIA). Given a physical system and a time-step \(\overline{\Delta t}\), AIA offers the most appropriate choice of an integration parameter \(b\) for a 2-stage splitting integrator (10).
## 4 s-AIA
### Extension of AIA to computational statistics
AIA makes use of specific properties and assumptions that hold for molecular simulation problems, e.g. the strongest forces in the target distribution are approximately harmonic (Gaussian) with known angular frequencies, there are well determined safety factors to avoid resonances, and the step size does not vary from one integration leg to the next. Unfortunately, those conditions are not usually met in Bayesian inference applications and therefore, when formulating s-AIA, the statistics version of AIA, the following issues have to be dealt with.
* **Harmonic forces.** In contrast to molecular systems, they are not typically dominating in the Bayesian scenario.
* **Computation of frequencies.** Even if the integrator could be chosen by examining only harmonic forces, the corresponding angular frequencies would not be known a priori in a Bayesian simulation.
* **Resonance conditions.** Restrictions on the integration step size imposed by nonlinear stability are not known in the Bayesian case.
* **Choice of a step size.** In statistics, the step size is usually randomized at the beginning of each integration leg and this would involve having to adjust at each step of the Markov chain the parameter values within the chosen family of integrators (see Step 5 in Algorithm 1).
We address these issues separately.
_Pre-tabulation of the map \(\overline{h}\to b_{opt}\)_
For each family of methods (2- or 3-stage), we tabulate _once and for all_ the optimal integration coefficients \(b_{\text{opt}}^{k}\), \(k=2,3\), at small increments of \(\overline{h}\). In this way, the extra computational effort due to Step 5 in Algorithm 1 can be avoided.
We produced tables for \(k\)-stage s-AIA, \(k=2,3\), using grids \(\{\overline{h}_{i}\}_{k}\), \(i=1,...,N_{\text{grid}}\) of the dimensionless stability interval \((0,2k)\) (\(N_{\text{grid}}\) controls the accuracy of the estimated \(b_{\text{opt}}^{k}\) for a given \(\overline{h}\)). Similarly to Algorithm 1,
\(\{b^{k}_{\mathrm{opt}_{i}}\}\), \(i=1,...,N_{\mathrm{grid}}\), \(k=2,3\), are found as
\[b^{k}_{\mathrm{opt}_{i}}=\operatorname*{arg\,min}_{b\in(b_{\mathrm{ ME}k},\,b_{\mathrm{VV}k})}\max_{0<h<\overline{h_{i}}}\rho_{k}(h,b), \tag{16}\] \[\overline{h_{i}}\in\{\overline{h}_{i}\}_{k},\quad i=1,...,N_{ \mathrm{grid}},\quad k=2,3,\]
where \(b_{\mathrm{ME}k}\) (the optimal parameter for the \(k\)-stage integrator as \(h\to 0\)) and \(b_{\mathrm{VV}k}\) (the longest stability limit for the \(k\)-stage family) are the boundaries for \(b\), and \(\rho_{2}(h,b)\), \(\rho_{3}(h,b)\) are given by (13) and (15) respectively. For 3-stage s-AIA, the second parameter \(a\) in (11) is calculated according to (14).
Similarly to what happens in AIA, in s-AIA, one expects \(b^{k}_{\mathrm{opt}}\) to be close to the ME\(k\) integrator coefficients for smaller values of \(h\); to be close to \(b_{\mathrm{BCSS}k}\) near \(\overline{h}=k\), and to increase up to \(b_{\mathrm{VV}k}\) as \(\overline{h}\) approaches \(2k\). Figure 1 shows the \(\rho_{2}(h,b)\) and \(\rho_{3}(h,b)\) functions for the range of adaptive and fixed-parameter multi-stage integrators discussed in this work, whereas Figure 2 depicts \(b^{2}_{\mathrm{opt}}\) and \(b^{3}_{\mathrm{opt}}\) as functions of dimensionless step size.
### Computation of frequencies
The frequencies \(\omega_{j}\), \(j=1,...,D\), of the system are calculated during the burn-in stage (a mandatory initial stage of an HMC simulation to reach its stationary regime) as
\[\omega_{j}=\sqrt{\lambda_{j}},\qquad j=1,...,D, \tag{17}\]
where \(\lambda_{j}\) are the eigenvalues of the Hessian matrix of the potential function
\[H_{i,j}=\frac{\partial^{2}U(\mathbf{\theta})}{\partial\theta_{i}\partial\theta_{j }},\qquad i,j=1,...,D.\]
### Calculation of fitting factors
Explicit integrators, such as the ones discussed in this study, may become unstable, and thus suffer from serious step size limitations when applied to nonlinear Hamiltonian systems [29]. To quantify the step size limitations imposed by nonlinear stability in the Verlet integrator, Schlick et al. [29] introduced so-called safety factors [11] for up to the 6th order resonances. This seemed to cover the worst scenarios in molecular simulations. We have already mentioned that AIA [11] makes use of a safety factor \(\sqrt{2}\) (cf. Algorithm 1), which avoids resonances up to 4-th order, while the MAIA algorithm for Modified HMC [12] utilizes \(\sqrt{3}\), that covers resonances up to 5-th order.
In Bayesian inference applications, the number of multiple time scales and the level of non-linearity are in general hardly predictable, and should be treated for each problem separately. For our purposes, instead of a safety factor, we introduce what we call a fitting factor \(S_{f}\), which not only plays the role of the safety factor but also results from fitting the proposed multivariate Gaussian model to the data generated during the burn-in stage. As in the case of a safety factor in [11], we use a fitting factor for nondimensionalization of the step size. Thus, for a chosen step size \(\overline{\Delta t}\), its nondimensional counterpart is found as
\[\overline{h}=S_{f}\,\tilde{\omega}\,\overline{\Delta t}. \tag{18}\]
Here, \(S_{f}\) is the fitting factor determined below and \(\tilde{\omega}\) is the highest frequency of the system, obtained from the burn-in simulation. Our objective now is to express \(S_{f}\) in terms of the known properties of the simulated system. We
Figure 1: Comparison of the upper bounds \(\rho_{k}(h,b)\), \(k=2\;(13),3\;(15)\) of the energy error, for fixed-parameter multi-stage splitting integrators — VV2, VV3, BCSS2, BCSS3, ME2, ME3 (Table 1) — and the adaptive integrators AIA and s-AIA\(k\). The interval for the step size \(h\) is normalized with respect to the number of stages \(k\) of the integrator in order to lead to fair comparisons. The zoomed plot in the upper left corner shows the situation for \(h/k\in(0,1.2)\).
choose to run a burn-in simulation using a Velocity Verlet algorithm and setting \(L=1\) and \(\Delta t=\Delta t_{\rm VV}\). The reason for that is the availability of a simple closed-form expression for the expected energy error \(\mathbb{E}[\Delta H]\) of a univariate Gaussian target with such a choice of an integrator and \(L\) (see B for details):
\[\mathbb{E}^{1}_{\rm VV}[\Delta H]=\frac{h_{\rm VV}^{6}}{32}, \tag{19}\]
with \(h_{\rm VV}\) being a dimensionless counterpart of \(\Delta t_{\rm VV}\), i.e. from (18)
\[h_{\rm VV}=S_{f}\,\omega\,\Delta t_{\rm VV}.\]
For a \(D\)-dimensional multivariate Gaussian target, one can consider \(D\) dimensionless counterparts
\[h_{\rm VV_{j}}=S_{f}\,\omega_{j}\,\Delta t_{\rm VV},\qquad j=1,...,D, \tag{20}\]
Figure 2: Comparison of the integration coefficient \(b\) for fixed-parameter multi-stage splitting integrators —VV2, VV3, BCSS2, BCSS3, ME2, ME3 (Table 1)— and the adaptive integrators AIA and s-AIA\(k\), \(b_{\rm opt}^{2}\) and \(b_{\rm opt}^{3}\) (16). The interval for the step size \(h\) is normalized with respect to the number of stages \(k\) of the integrator to lead to fair comparisons.
and find the expected energy error for a multivariate Gaussian model with the help of (19) as
\[\mathbb{E}_{\rm VV}^{D}[\Delta H]=\sum_{j=1}^{D}\frac{h_{\rm VV_{j}}^{6}}{32}. \tag{21}\]
Combining (21) and (20), we find the fitting factor
\[S_{f}=\frac{1}{\Delta t_{\rm VV}}\sqrt[6]{\frac{32\mathbb{E}_{\rm VV}^{D}[ \Delta H]}{\sum_{j=1}^{D}\omega_{j}^{6}}}. \tag{22}\]
Alternatively, the calculation of the frequencies may be avoided (and computational resources saved), if the multivariate Gaussian model is replaced with a univariate Gaussian model (as in [11]), which leads to
\[S_{f}=\frac{1}{\tilde{\omega}\Delta t_{\rm VV}}\sqrt[6]{\frac{32\mathbb{E}_{ \rm VV}^{D}[\Delta H]}{D}}. \tag{23}\]
Notice that, though \(\tilde{\omega}\) appears in (23), one can compute
\[S_{f}\,\tilde{\omega}=\frac{1}{\Delta t_{\rm VV}}\sqrt[6]{\frac{32\mathbb{E}_ {\rm VV}^{D}[\Delta H]}{D}}, \tag{24}\]
without needing frequencies and use it in (18).
From now on, in order to distinguish between the two approaches, we will denote the one in (22) -- which requires frequency calculation -- by \(S_{\omega}\) and the second one in (23) -- which does not -- by \(S\), i.e.
\[S_{\omega}=\frac{1}{\Delta t_{\rm VV}}\sqrt[6]{\frac{32\mathbb{E}_{\rm VV}^{D }[\Delta H]}{\sum_{j=1}^{D}\omega_{j}^{6}}},\qquad\qquad S=\frac{1}{\tilde{ \omega}\Delta t_{\rm VV}}\sqrt[6]{\frac{32\mathbb{E}_{\rm VV}^{D}[\Delta H]}{ D}}.\]
As pointed out above, safety factors are meant to impose limitations on a system-specific stability interval (cf. (18)). Thus, they should not be less than 1 and, as a consequence, we actually use
\[S_{\omega}=\max\left(1,\frac{1}{\Delta t_{\rm VV}}\sqrt[6]{\frac{32\mathbb{E}_ {\rm VV}^{D}[\Delta H]}{\sum_{j=1}^{D}\omega_{j}^{6}}}\right),\ S=\max\left(1, \frac{1}{\tilde{\omega}\Delta t_{\rm VV}}\sqrt[6]{\frac{32\mathbb{E}_{\rm VV }^{D}[\Delta H]}{D}}\right). \tag{25}\]
The only unknown quantity in (25) is \(\mathbb{E}_{\rm VV}^{D}[\Delta H]\), which can be found by making use of the data collected during the burn-in stage. In fact, following
the high-dimensional asymptotic formula for expected acceptance rate \(\mathbb{E}[\alpha]\)[30] proven for Gaussian distributions in a general scenario [10], i.e.
\[\mathbb{E}[\alpha]=1-\frac{1}{2\sqrt{\pi}}\sqrt{\mathbb{E}^{D}[\Delta H]},\qquad \mathbb{E}^{D}[\Delta H]\to 0,\,D\rightarrow\infty,\]
we get an expression for \(\mathbb{E}^{D}[\Delta H]\)
\[\mathbb{E}^{D}[\Delta H]=4\pi\left(1-\mathbb{E}[\alpha]\right)^{2}. \tag{26}\]
An estimation of \(\mathbb{E}[\alpha]\) in a simulation is given by the acceptance rate AR, i.e. the ratio between the accepted \(N_{\rm acc}\) and the total \(N\) number of proposals
\[\mathrm{AR}=\frac{N_{\rm acc}}{N}. \tag{27}\]
Combining (26) with \(\mathbb{E}[\alpha]=\mathrm{AR}\) calculated during the burn-in stage, we compute \(\mathbb{E}^{D}_{\rm VV}[\Delta H]\) as
\[\mathbb{E}^{D}_{\rm VV}[\Delta H]=4\pi\left(1-\mathrm{AR}\right)^{2},\]
which gives an explicit expression for the fitting factors in (25)
\[S_{\omega}=\max\left(1,\frac{2}{\Delta t_{\rm VV}}\sqrt[6]{\frac{2\pi(1- \mathrm{AR})^{2}}{\sum_{j=1}^{D}\omega_{j}^{6}}}\right),\,\,S=\max\left(1, \frac{2}{\tilde{\omega}\Delta t_{\rm VV}}\sqrt[6]{\frac{2\pi(1-\mathrm{AR})^ {2}}{D}}\right). \tag{28}\]
Once the fitting factor is computed using (28), a dimensionless counterpart of a given step size \(\overline{\Delta t}\) can be calculated either as
\[\overline{h}=\frac{2\tilde{\omega}\overline{\Delta t}}{\Delta t_{\rm VV}} \sqrt[6]{\frac{2\pi(1-\mathrm{AR})^{2}}{\sum_{j=1}^{D}\omega_{j}^{6}}}, \tag{29}\]
or
\[\overline{h}=\frac{2\overline{\Delta t}}{\Delta t_{\rm VV}}\sqrt[6]{\frac{2 \pi(1-\mathrm{AR})^{2}}{D}}. \tag{30}\]
We remark that for systems with disperse distributions of frequencies, i.e. when the standard deviation of frequencies, \(\sigma\), is big, it might be useful to apply a nondimensionalization of \(\overline{\Delta t}\) smoother than the proposed in (29), namely
\[\overline{h}=\frac{2\left(\tilde{\omega}-\sigma\right)\,\overline{\Delta t}} {\Delta t_{\rm VV}}\sqrt[6]{\frac{2\pi(1-\mathrm{AR})^{2}}{\sum_{j=1}^{D} \omega_{j}^{6}}}. \tag{31}\]
Otherwise, if \(\sigma<1\), (29) is a better choice. In Section 4.2 we will analyze different choices of scaling and provide practical recommendations. With (29)-(31) one has everything in place for finding the optimal integrator parameter \(b_{\rm opt}^{k}\) (16).
To conclude this section, it is worth mentioning yet another useful output of the analysis. Let us recall that the dimensionless maximum stability limit of \(k\)-stage integrators is equal to \(2k\), \(k=1,2,3,...\)[4]. Then, the stability interval can be expressed in terms of the chosen fitting factor \(S_{f}\) (\(S\) or \(S_{\omega}\) in (25)) as \(\big{(}0,2k/(S_{f}\tilde{\omega})\big{)}\), \(k=1,2,3,...\), or
\[0<\Delta t<{\rm SL}=\frac{2k}{S_{f}\,\tilde{\omega}},\qquad k=1,2,3,...\,. \tag{32}\]
Here \({\rm SL}\) is the stability limit. We remark that, with the nondimensionalization (31), the estimation of the stability interval differs from (32) and reads as
\[0<\Delta t<{\rm SL}=\frac{2k}{S_{\omega}\,\left(\tilde{\omega}-\sigma\right)},\qquad k=1,2,3,...\,. \tag{33}\]
In summary, we have proposed an approach for the prediction of a stability interval and an optimal multi-stage integrator for a given system. The step size can be freely chosen within the estimated stability interval.
### s-AIA algorithm
Since the nondimensionalization method forms a key part of the s-AIA algorithm, it is important to give some insight into the options offered by (29)-(31). Obviously, the method (30) is cheaper in terms of computational effort as it does not require the calculation of frequencies. In addition, (30) is not affected by potential inaccuracies of the computed frequencies due, e.g., to insufficient sampling during the burn-in stage. On the other hand, taking into account the different frequencies (hence, the different time scales) of the system provides a more accurate estimation of the system-specific stability interval. Moreover, in the case of dominating anharmonic forces, the analysis based on the univariate harmonic oscillator model may lead to poor estimation of the fitting factor \(S\) and, as a result, of the dimensionless step size in (30). Therefore, we expect \(S_{\omega}\) in (28) to provide a better approximation of the stability interval, and thus to lead to a better behavior of s-AIA. However, with the upper bound of the safety factor for the 1-stage Velocity Verlet suggested in [29], it is possible to identify those computational models
for which the less computationally demanding fitting factor \(S\) ensures a reliable stability limit estimation. In particular, \(S>2\) implies an anharmonic behavior of the underlying dynamics of the simulated model, and thus the need for a more accurate \(S_{\omega}\), together with (29) or (31) (depending on the distribution of \(\omega_{j}\)), for a proper estimation of the stability limit. On the contrary, if \(S\leq 2\), one expects \(S\) and (30) to be able to provide a reliable approximation of the stability limit. Though, in contrast to (30), the calculation of \(S\) in (28) requires the knowledge of the highest frequency \(\tilde{\omega}\), it is still less computationally demanding than the \(S_{\omega}\) approach since \(\tilde{\omega}\) can be computed avoiding calculations of Hessians [31], which is the bulk of computational cost for the frequencies calculations. We remark that the option to avoid calculating frequencies and use (30) straightaway is present in the s-AIA algorithm.
The s-AIA algorithm is summarised in Figure 3. Given a model; a dataset; HMC parameters and settings for Tuning, Burn-in and Production stages; \(I_{\omega}\) (see Figure 3) and an order \(k\) of s-AIA (\(k=2\) or \(3\)), s-AIA algorithm works as shown in Figure 4.
### Implementation
s-AIA has been implemented in the BCAM in-house software package HaiCS(Hamiltonians in Computational Statistics) for statistical sampling of high dimensional and complex distributions and parameter estimation in Bayesian models using MCMC and HMC based methods. The package is written in C and R and is targeted to computers running UNIX certified operating systems. Specifically, the code for the computational simulation is written in C, while the performance analysis and MCMC diagnostics are carried out in R by means of scripts and tools compatible with the popular CODA[32] toolkit. A detailed presentation and description of the package can be found in [17]. Thanks to the implementation of the novel and efficient algorithms for statistical sampling, HaiCS ensures competitive performance with respect to already existing HMC software packages. Moreover, due to its structure, it allows the user to have flexibility in methodology development and testing as well as to control the code performance and optimization. The current version incorporates several sampling techniques: Random Walk Metropolis algorithm, Hamiltonian Monte Carlo (HMC), Generalized HMC (GHMC) [33; 34], Metropolis-Adjusted Langevin Algorithm (MALA) [35], second-order Langevin Monte Carlo (L2MC), Generalized Shadow HMC
(GSHMC) [25] and Mix & Match HMC (MMHMC) [5]. In addition, different models for Bayesian inference applications are available: Gaussian Distribution (GD) [36], Bayesian Logistic Regression (BLR) [5; 37], Stochastic Volatility (SV) [38], inverse magnetotelluric (MT) model [39], SIR/SEIR-like models [40; 41]. The package comprises the state-of-the-art and most popular numerical integration schemes for HMC based methods: s-AIA (2- and 3-stage), AIA, Velocity Verlet (1-, 2- and 3-stage), BCSS (2-, 3- and 4-stage), ME (2- and 3-stage). Moreover, it includes an efficient implementation of modified Hamiltonians, jointly with the corresponding fixed-parameter multi-stage integrators (m-BCSS2, m-BCSS3, m-BCSS4, m-ME2, m-ME3, m-ME4 --full description and derivation in [23]), and adaptive ones (MAIA, e-MAIA [12]). Finally, various randomization schemes for the HMC parameters can be found in the package.
Figure 3: Summary of the s-AIA\(k\) algorithm. The proposed approach consists of three stages: (i) tuning stage for adjusting the step size \(\Delta t_{\text{VV}}\) to get AR \(\approx\alpha_{\text{target}}\) (Appendix C); (ii) burn-in stage; the optimal multi-stage integrator and the HMC simulation parameters are found by combining the simulation data and the analysis provided; (iii) production stage to generate the HMC samples.
## 5 Numerical results and discussion
In order to evaluate the efficiency of the proposed s-AIA algorithms, we compared them in accuracy and performance with the integrators previously introduced for HMC-based sampling methods (Table 1). We examined 2- and 3-stage s-AIA on four benchmark models presented.
Figure 4: s-AIA\(k\).
### Benchmarks
* **Gaussian 1**, **Gaussian 2**: two \(D\)-dimensional multivariate Gaussian models \(\mathcal{N}(0,\Sigma)\), \(D=1000\), with precision matrix \(\Sigma^{-1}\) generated from a Wishart distribution with \(D\) degrees of freedom and the \(D\)-dimensional identity scale matrix [36] (Gaussian 1) and with diagonal precision matrix \(\Sigma^{-1}\) made by \(D_{1}=990\) elements taken from \(\mathcal{N}(1000,100)\) and \(D_{2}=10\) from \(\mathcal{N}(4000,1600)\) (Gaussian 2).
* **German**, **Musk**: two real datasets for a Bayesian Logistic Regression model [5; 37] available from the University of California Irvine Machine Learning Repository [42], with dimensions \(D=25\,(\text{German}),167\,(\text{Musk})\) and \(K=1000\,(\text{German}),476\,(\text{Musk})\) observations.
The frequency distributions of the selected benchmarks are plotted in Figure 5.
Figure 5: Frequency distributions of the benchmark models.
### Metrics
For HMC performance evaluation we monitored the following properties:
* **Acceptance rate.** The acceptance rate (AR) is the ratio between the accepted and the total \(N\) number of proposals as in (27).
* **Effective Sample Size.** The Effective Sample Size (ESS) is the number of effectively uncorrelated samples out of \(N\) collected samples of a Markov chain. We calculated it, as proposed in [5], through the _effectiveSize_ function of the CODA package of R[43].
* **Monte Carlo Standard Error.** The Monte Carlo Standard Error (MCSE) quantifies the estimation noise caused by Monte Carlo sampling methods. It indicates the estimated Standard Error of the sample mean \[\hat{\mathbf{\mu}}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{\theta}_{i}\] in a Markov chain [44], and is calculated by substituting the sample size \(N\) in the Standard Error formula \[\text{SE}=\sqrt{\frac{\hat{\mathbf{\sigma}}^{2}}{N}},\] (34) with the ESS, i.e. \[\text{MCSE}=\sqrt{\frac{\hat{\mathbf{\sigma}}^{2}}{\text{ESS}}}.\] (35) In (34) and (35), \(\hat{\mathbf{\sigma}}^{2}\) is the sample variance.
* **Potential Scale Reduction Factor.** The Potential Scale Reduction Factor (PSRF) monitors the convergence of a Markov chain by comparing it with other randomly initialized chains [45]. We calculated it as explained in [46] (Sections 1.2-1.3).
We took \(\min\text{ESS}\) and \(\min\left(\text{MCSE}\right)^{-1}\) normalized with respect to the theoretical average number of gradient evaluations, that is \(k\bar{L}\) (\(\bar{L}\) is the theoretical average of number of integration steps, \(k\) is the number of stages of an integrator in use). Evaluation of gradients constitutes the bulk of the computational effort in HMC simulations and the chosen normalization provides leads to fair comparison between integrators with different number of
stages. Of course, larger values of \(\min\,\mathrm{ESS}\) and \(\min\left(\mathrm{MCSE}\right)^{-1}\) imply better sampling performance.
Finally, we monitored \(\max\,\mathrm{PSRF}\) to examine the convergence of tests and used a very conservative threshold, \(\mathrm{PSRF}<1.01\), as suggested in [47], for all benchmarks but Musk, for which the threshold was relaxed to 1.1 [45].
### Simulation setup
The proposed \(k\)-stage s-AIA algorithms, \(k=2,3\), were tested for a range of step sizes \(\{k\Delta t_{i}\}\) within the system-specific dimensional stability interval \((0,k\Delta t_{l})\). Such an interval is found through the dimensionalization of the theoretically predicted nondimensional stability limit for the \(k\)-stage Velocity Verlet using the fitting factor (28) and a method chosen among (32), (33), adjusted to a heuristic randomization method and aiming to minimize the effect of inaccuracies in the prediction due to the approximated nature of the proposed analysis. The grids of step sizes were obtained by dividing the stability interval into 20 equidistant parts \(k\Delta t_{i}\), \(i=1,...,20\), with \(k\Delta t_{1}=k\Delta t_{l}/20\), \(k\Delta t_{2}=k\Delta t_{1}+k\Delta t_{l}/20\),..., \(k\Delta t_{20}=k\Delta t_{l}\). For each iteration of the HMC simulation, a step size was drawn uniformly randomly from \((k\Delta t_{i-1},k\Delta t_{i}]\), \(i=1,...,20\)\((k\Delta t_{0}=0)\). The number of integration steps per iteration, \(L\), was drawn randomly uniformly at each iteration from \(\{1,...,2\bar{L}-1\}\), with \(\bar{L}\) such that
\[\bar{L}h=\tau D,\]
where \(D\) is the problem dimension and \(\tau\) is a benchmark-specific constant, found empirically to maximize performance near the center of the stability interval \(h=k\). Such a setting provides a fair comparison between various multi-stage integrators by fixing the average number of gradients evaluations performed within each tested integrator. We remark that optimal choices of HMC simulation parameters, such as step sizes, numbers of integration steps and randomization intervals are beyond the scope of this study and will be discussed in detail elsewhere. Each simulation was repeated 10 times and the results reported in the paper were obtained by averaging over those multiple runs to reduce statistical errors. The simulation settings are detailed in Table 2.
### Results and discussion
First, we tested 2- and 3-stage s-AIA integrators using the fitting factor approach \(S_{\omega}\) (28) and its corresponding nondimensionalization methods (29) or (31), selected according to the distribution of \(\omega_{j}\) (Table 2).
Figures 6-7 show the metrics collected for the Gaussian 1 and the German BLR benchmarks. One can appreciate the superiority of 2- and 3-stage s-AIA in terms of acceptance rate and sampling performance when compared with fixed-parameter multi-stage schemes of the same number of stages. Recall that, as explained before, the standard Verlet typically used in HMC is included in the family of multi-stage schemes. In particular, s-AIA integrators reach the best possible performance in their groups, i.e. 2- and 3-stage groups respectively, almost for each step size in the stability interval. This means that the adaptation of the integrator coefficient \(b_{\text{opt}}^{k}\) with respect to the randomized step size did enhance the accuracy and sampling of HMC. Specifically, the highest performance was reached around the center of the stability interval, in good agreement with the recommendations in [8]. As expected, HMC combined with 3-stage s-AIA outperformed HMC with 2-stage s-AIA in sampling efficiency. Moreover, the \(\max\)PSRF plot demonstrates that 3-stage s-AIA was the last integrator to lose convergence. In particular, for German BLR (Figure 7), s-AIA ensured convergence over the entire range of step sizes, which suggests that the stability limit had been estimated accurately, i.e. the chosen fitting factor approach worked properly.
Similar trends, though less pronounced, can be observed for the Gaussian 2 benchmark in Figure 8. Again, 2- and 3-stage s-AIA exhibited the best possible performance (with the clear superiority of 3-stage s-AIA) for most step sizes and turned out to be the last integrators to lose convergence in their groups. In contrast, the same fitting factor approach \(S_{\omega}\) applied to the Musk BLR benchmark did not show the level of accuracy observed for other benchmarks. In Figure 9, one can admit the poor performance achieved for almost all integrators in the second half of the stability interval, i.e the stability limit was overestimated. However, 3-stage s-AIA reached the best
\begin{table}
\begin{tabular}{c c c c c c c} Benchmark & \(D\) & HMC iterations & Fitting factor & \(\sigma\) correction & \(\Delta t_{l}\) & \(k\bar{L}\) \\ \hline Gaussian 1 & 1000 & 40000 & \(S=1\) & - & 0.03017 & 4000 \\ & & & \(S_{\omega}=1.2648\) & yes (\(\sigma=16.7\)) & 0.03248 & 1000 \\ Gaussian 2 & 1000 & 40000 & \(S=1\) & - & 0.02983 & 1000 \\ & & & \(S_{\omega}=1.2641\) & yes (\(\sigma=3.14\)) & 0.03005 & 1000 \\ & 25 & 40000 & \(S=1.3273\) & - & 0.1093 & 25 \\ & & & \(S_{\omega}=1.4284\) & no (\(\sigma=0.897\)) & 0.1015 & 167 \\ & & & \(S=2.9719\) & - & 0.1030 & 167 \\ Musk & 167 & 100000 & \(S_{\omega}=3.8827\) & no (\(\sigma=0.578\)) & 0.07115 & 167 \\ \hline \end{tabular}
\end{table}
Table 2: Parameters settings for each benchmark model.
values in terms of \(\min\,\mathrm{ESS}\) and \(\min\,(\mathrm{MCSE})^{-1}\), again around the center of the stability interval. Further analysis of the simulated frequencies and forces of the benchmarks revealed (see Figure 10) the anharmonic behavior of the Musk system, which, along with the fitting factor \(S\approx 2.93>2\) (Table 2), explains the inaccuracy of the harmonic analysis presented in Section 4.1 in the estimation of the stability limit in this case.
Next, we tested 2- and 3-stage s-AIA integrators using the fitting factor approach \(S\) (28) and its corresponding nondimensionalization method (30) (see Figures 11-14). As expected, the more accurate \(S_{\omega}\) fitting factor and its nondimensionalization methods (29), (31) lead to an overall better performance than s-AIA with \(S\) and (30). However, for models with \(S<2\) (cf. Figures 11, 12, 13), both fitting approaches exhibited similar trends. On the
Figure 6: Gaussian 1 benchmark model with \(S_{\omega}\) fitting factor (28) and nondimensionalization (31). The metrics in Section 5.2 are plotted vs a range of step sizes within the stability interval (33). s-AIA3 (solid green line) leads to the best HMC performance and improves on the other integrators for almost all step sizes and all the metrics. The \(\max\mathrm{PSRF}\) plot shows that s-AIA3 is the integrator with best convergence. s-AIA2 (dashed green line) shows similar advantages over the other 2-stage integration schemes. For both 2- and 3-stage s-AIA, the top performance in terms of \(\min\mathrm{ESS}\) and \(\min\,(1/\mathrm{MCSE})\) is reached near the center of the stability interval.
other hand, for the Musk BLR benchmark, i.e. when \(S>2\), 2- and 3-stage s-AIA benefit from the more accurate \(S_{\omega}\) fitting factor approach, reaching a clearly better estimation of the stability limit (cf. Figure 14).
Finally, we wish to review the behavior of the other tested multi-stage integrators. First, we remark the superiority of 3-stage integrators over their 2-stage counterparts. For any benchmark and fitting factor approach, the 3-stage integrators performed on average better at the same computational cost, as previously suggested in [23]. In addition, we highlight that the other integration schemes tested showed a strong dependence on the model in use. In particular, VV performed poorly for the Gaussian benchmarks (Figures 6, 8) but demonstrated solid performance for the BLR models, especially for larger step sizes (Figures 7, 9). Similarly to VV, AIA resulted to be one of the
Figure 7: German benchmark model with \(S_{\omega}\) fitting factor (28) and nondimensionalization (29). The metrics in Section 5.2 are plotted vs a range of step sizes within the stability interval (32). s-AIA3 (solid green line) improves on the other integrators for all step sizes and all the metrics. It shows its best performance near the center of the stability interval. s-AIA2 (dashed green line) shows similar advantages within the class of 2-stage integration schemes. The \(\max\)PSRF plot shows that both s-AIA2 and s-AIA3, together with AIA2 (dashed black line) and VV2 (dashed red line), maintain convergence within the entire stability interval.
worst integrators for the Gaussian benchmarks (Figures 6, 8), but achieved performance similar to 2-stage s-AIA for the BLR models (Figures 7, 9). On the contrary, the BCSS and ME integrators performed similarly to s-AIA for Gaussian 2 and Musk (Figures 8, 9), whereas they lose performance for Gaussian 1 and BLR German (Figures 6, 7).
In conclusion, we observed that the s-AIA algorithms enhanced the performance of HMC, if the stability interval length was estimated accurately. When that is the case, s-AIA demonstrates the best performance around the center of the stability interval, which, together with (32)-(33), gives a helpful suggestion for the choice of step size in HMC simulations. Moreover, the more accurate fitting factor approach \(S_{\omega}\) (28) with (29) or (31) provided a better approximation of the stability limit, which resulted in higher accuracy and greater performance of the adaptive integrators, mostly when applied to systems with prevailing anharmonic forces, i.e. if \(S>2\).
Figure 8: Gaussian 2 benchmark model with \(S_{\omega}\) fitting factor (28) and nondimensionalization (31). Metrics are plotted vs a range of step sizes within the stability interval (33). s-AIA3 (solid green line) leads to the highest performance for most step sizes. The max PSRF plot confirms that s-AIA3 guarantees the best HMC convergence. s-AIA2 (dashed green line) shows similar advantages within the class of 2-stage integration schemes.
## 6 Conclusion
We have presented a novel adaptive multi-stage integration approach for enhancing the accuracy and sampling efficiency of HMC-based methods for Bayesian inference applications. The proposed methodology, which we call s-AIA, provides, for any choice of step size within the stability interval, a system-specific palindromic 2- or 3-stage splitting integrator which ensures the best energy conservation for harmonic forces within its family. Moreover, we offered a solution for detecting a system specific dimensional stability interval using the simulation data generated at the HMC burn-in stage. In particular, we introduced three optional scaling/nondimensionalization approaches for estimating the stability limit with different level of accuracy and computational effort.
s-AIA was implemented (without introducing computational overheads in simulations) in the in-house software package HaiCS (Hamiltonians in Com
Figure 9: Musk with \(S_{\omega}\) fitting factor (28) and nondimensionalization (29). Metrics are plotted vs a range of step sizes within the stability interval (32). s-AIA3 (solid green) leads to the best performance together with ME3 (solid orange) and BCSS3 (solid blue), while VV2 (dashed red) maintains better performance for larger step sizes. The \(\max\)PSRF plot shows that the stability interval is overestimated.
putational Statistics) [5; 17] and tested against the popular numerical integrators (Verlet [6; 7], BCSS [8] and Minimum Energy [20; 24]) on the range of benchmark models. We found that the adaptivity helped to reach the best possible performance within the families of 2-, 3-stage splitting integration schemes. We emphasize that standard Velocity Verlet, the HMC integrator of choice, is a member of those families. If the stability limit was estimated accurately, s-AIA integrators reached the best performance in their groups, i.e. 2- and 3-stage groups, almost for each step size in the stability interval. Also, using more stages enhanced the sampling performance, stability and conservation of the energy of the harmonic forces with the same computational effort.
We have demonstrated that the more accurate fitting factor approach \(S_{\omega}\) (28) led to an overall better performance in HMC simulations than its less computationally expensive counterpart \(S\). However, the latter was able to reach comparable results when lying below the upper threshold \(S<2\)[29].
Figure 10: Evolution of forces \(-\nabla U\) (top) and average frequencies \(\bar{\omega}\) (bottom) observed in the numerical experiments for the four benchmarks with \(S_{\omega}\) fitting factor (Figures 6–9). The low frequencies of Musk BLR (violet line, bottom plot) generate the anharmonic behavior (violet line, top plot).
In that way, computational time and resources may be saved by avoiding the computation of angular frequencies. On the other hand, for more complex distributions, e.g. with dominating low-frequencies (like the Musk BLR benchmark model [42]), we found that a proper analysis of the underlying dynamics of the simulated system might assist in the choice of a suitable system-specific fitting factor, the randomization interval and the number of HMC iterations required for a chain to converge.
We remark that even in the case of a rough estimation of the stability limit (like in Musk BLR), HMC with multi-stage adaptive splitting schemes achieves top performance in comparison with the fixed-parameter schemes, though the exact location of the optimal step size is harder to predict in this case. In an upcoming study, we will show how the proposed methodology can be adjusted for refining optimal parameters of HMC-based simulations.
Figure 11: Gaussian 1: The effect on the HMC performance of different scaling approaches \(S\) and \(S_{\omega}\) (28) with (30) (in green) and with (31) (in purple) respectively. The metrics to monitor are plotted against the 1-stage dimensionless stability interval \((0,2)\) in order to display the comparison. HMC with s-AIA using the \(S_{\omega}\) fitting factor approach (in purple) exhibits more accuracy and better sampling around the center of the stability interval.
## Appendix A Derivation of \(\mathbf{\rho_{3}(h,b)}\) in (15)
Consider the harmonic oscillator with Hamiltonian
\[H=\frac{1}{2}(p^{2}+\theta^{2}),\qquad\theta,p\in\mathbb{R},\] (A.1)
and equations of motions
\[\frac{d\theta}{dt}=p,\qquad\frac{dp}{dt}=-\theta.\] (A.2)
Given a \(k\)-stage palindromic splitting integrator \(\Psi_{h}\) (\(h\) is the integration step size), it acts on a configuration \((\theta_{i},p_{i})\) at the \(i\)-th iteration as
\[\Psi_{h}\left(\begin{array}{c}q_{i}\\ p_{i}\end{array}\right)=\left(\begin{array}{c}q_{i+1}\\ p_{i+1}\end{array}\right)=\begin{pmatrix}A_{h}^{\mathbf{z}}&B_{h}^{\mathbf{z}}\\ C_{h}^{\mathbf{z}}&D_{h}^{\mathbf{z}}\end{pmatrix}\left(\begin{array}{c}q_{i}\\ p_{i}\end{array}\right),\] (A.3)
Figure 12: German BLR: The effect on the performance of different scaling approaches \(S\) and \(S_{\omega}\) (28) with (30) (in green) and with (29) (in purple) respectively. Metrics are plotted against the 1-stage dimensionless stability interval \((0,2)\) for comparison. HMC with s-AIA using the \(S_{\omega}\) fitting factor (in purple) exhibits more accuracy and better sampling in the second part of the stability interval.
for suitable method-dependent coefficients \(A_{h}^{\mathbf{z}}\), \(B_{h}^{\mathbf{z}}\), \(C_{h}^{\mathbf{z}}\), \(D_{h}^{\mathbf{z}}\) (\(\mathbf{z}=\{b_{i},a_{j}\}\) is the set of \(k-1\) integration coefficients). In [8], a formula for \(\rho(h,\mathbf{z})\) is provided:
\[\rho(h,\mathbf{z})=\frac{(B_{h}^{\mathbf{z}}+C_{h}^{\mathbf{z}})^{2}}{2(1-A_{h}^{\mathbf{z}^{2} })}.\] (A.4)
For a 3-stage palindromic splitting integrator (11), the integrator coefficients are (\(\mathbf{z}=\{b,a\}\))
\[A_{h}^{\mathbf{z}} =1-\frac{h^{2}}{2}+a(1/2-b)(1/2-a+b)h^{4}-2a^{2}b(1/2-a)(1/2-b)^{2 }h^{6},\] (A.5) \[B_{h}^{\mathbf{z}} =h-2a(1-a)(1/2-b)h^{3}+2a^{2}(1/2-a)(1/2-b)^{2}h^{5},\] (A.6) \[C_{h}^{\mathbf{z}} =-h+(2ab(1-b)-a/2+1/4)h^{3}+\] \[+2ab(1/2-b)(a(1-b)-1/2)h^{5}+2a^{2}b^{2}(1/2-a)(1/2-b)^{2}h^{7}.\] (A.7)
Figure 13: Gaussian 2: The effect of the scaling approaches \(S\) and \(S_{\omega}\) (28) with (30) (in green) and with (31) (in purple) respectively. Metrics to monitor are plotted against the 1-stage dimensionless stability interval \((0,2)\) for comparison. Both approaches lead to almost identical performance in terms of accuracy, sampling and stability.
Finally, for \(a,b\) in (14) and \(A_{h}^{\mathbf{z}}\), \(B_{h}^{\mathbf{z}}\) and \(C_{h}^{\mathbf{z}}\) in (A.5)-(A.6)-(A.7), \(\rho(h,\mathbf{z})\) in (A.4) becomes
\[\rho_{3}(h,b)=\frac{h^{4}\left(-3b^{4}+8b^{3}-19/4b^{2}+b+b^{2}h^{2}\left(b^{3} -5/4b^{2}+b/2-1/16\right)-1/16\right)^{2}}{2(3b-bh^{2}(b-1/4)-1)\left(1-3b-bh^{ 2}(b-1/2)^{2}\right)\left(-9b^{2}+6b-h^{2}(b^{3}-5/4b^{2}+b/2-1/16)-1\right)}.\]
## Appendix B Derivation of \(\mathbb{E}_{\mathbf{VV}}^{1}[\Delta H]\) in (19)
According to [8], for the harmonic oscillator with the Hamiltonian (A.1) and the equations of motion (A.2), the expected energy error produced by a \(k\)-stage palindromic splitting integrator \(\Psi_{h}\) applied for \(L\) integration steps is given by
\[\mathbb{E}[\Delta H]=\sin^{2}\left(L\Theta_{h}^{\mathbf{z}}\right)\rho(h,\mathbf{z}),\] (B.1)
Figure 14: Musk BLR: The effect of the scaling approaches \(S\) and \(S_{\omega}\) (28) with (30) (in green) and with (29) (in purple) respectively. Metrics are plotted against the 1-stage dimensionless stability interval \((0,2)\) for comparison. Using \(S_{\omega}\) (in purple) helps to shift the best performance of both adaptive schemes towards the center of the stability interval. Moreover, \(\max\)PSRF confirms that the stability limit is estimated better with \(S_{\omega}\).
where \(\Theta_{h}^{\mathbf{z}}=\arccos A_{h}^{\mathbf{z}}\), and \(A_{h}^{\mathbf{z}}\) is defined in (A.3). For \(L=1\) and \(\rho(h,\mathbf{z})\) defined in (A.4), (B.1) yields
\[\mathbb{E}[\Delta H]=\frac{\left(B_{h}^{\mathbf{z}}+C_{h}^{\mathbf{z}}\right)^{2}}{2}.\] (B.2)
For the 1-stage Velocity Verlet integrator (6), one has
\[\Psi_{h}^{\rm VV}\left(\begin{array}{c}\theta_{i}\\ p_{i}\end{array}\right)=\left(\begin{array}{c}\left(1-\frac{h^{2}}{2}\right) \theta_{i}+hp_{i}\\ \left(-h+\frac{h^{3}}{4}\right)\theta_{i}+\left(1-\frac{h^{2}}{2}\right)p_{i} \end{array}\right),\]
that is
\[B_{h}^{\mathbf{z}}=h,\qquad C_{h}^{\mathbf{z}}=-h+\frac{h^{3}}{4},\]
which, combined with (B.2), provides
\[\mathbb{E}_{\rm VV}^{1}[\Delta H]=\frac{h_{\rm VV}^{6}}{32}.\] (B.3)
## Appendix C Derivation of \(\mathbf{\alpha_{\rm target}}\) for s-AIA tuning.
For the burn-in stage, we choose the 1-stage Velocity Verlet integrator with \(L=1\) and step size \(\Delta t_{\rm VV}\), which should be ideally chosen to be close to the center of the stability interval to achieve the best accuracy and sampling efficiency of an HMC simulation. In order to identify such a step size, we estimate the expected acceptance probability \(\mathbb{E}[\alpha]\) following [10] (Sec. 5.2, Th. 1), i.e.
\[\mathbb{E}[\alpha]=1-\frac{2}{\pi}\arctan\sqrt{\frac{\mathbb{E}[\Delta H]}{2}},\] (C.1)
which holds for standard univariate Gaussian distribution, i.e. the harmonic oscillator with the Hamiltonian (A.1), regardless of the integrator being used, the step size and \(L\). For the burn-in stage simulation setting, the expected energy error \(\mathbb{E}[\Delta H]\) is defined in (B.3) and, evaluated at the middle of the stability interval, \(h=1\), it is equal to
\[\mathbb{E}[\Delta H]=\frac{1}{32}.\] (C.2)
Combining (C.1) and (C.2), one obtains
\[\mathbb{E}[\alpha]\approx 0.92=\alpha_{\rm target}.\]
## Acknowledgments
We thank Tijana Radivojevic, Jorge Perez Heredia and Felix Muller for their valuable contributions at the early stage of the study.
We acknowledge the financial support by the Ministerio de Ciencia y Innovacion (MICINN, AEI) of the Spanish Government through BCAM Severo Ochoa accreditation CEX2021-001142-S (LN, EA) and projects PID2019-104927GB-C22, PID2019-104927GB-C21, MCIN/AEI/10.13039/501100011033, ERDF ("A way of making Europe") (all). This work was supported by the BERC 2022-2025 Program (LN, EA), by Convenio IKUR 21-HPC-IA, by ELKARTEK Programme, grants KK-2022/00006 (EA), KK-2021/00022 (EA, LN) and KK-2021/00064 (EA) - all funded by the Basque Government, and by La Caixa - INPhINIT 2020 Fellowship, grant LCF/BQ/DI20/11780022 (LN), funded by the Fundacion "la Caixa". This work has been possible thanks to the support of the computing infrastructure of the i2BASQUE academic network, Barcelona Supercomputing Center (RES), DIPC Computer Center, BCAM in-house cluster Hipatia and the technical and human support provided by IZO-SGIker of UPV/EHU.
|
2301.11402 | A Hybrid Deep Neural Operator/Finite Element Method for Ice-Sheet
Modeling | One of the most challenging and consequential problems in climate modeling is
to provide probabilistic projections of sea level rise. A large part of the
uncertainty of sea level projections is due to uncertainty in ice sheet
dynamics. At the moment, accurate quantification of the uncertainty is hindered
by the cost of ice sheet computational models. In this work, we develop a
hybrid approach to approximate existing ice sheet computational models at a
fraction of their cost. Our approach consists of replacing the finite element
model for the momentum equations for the ice velocity, the most expensive part
of an ice sheet model, with a Deep Operator Network, while retaining a classic
finite element discretization for the evolution of the ice thickness. We show
that the resulting hybrid model is very accurate and it is an order of
magnitude faster than the traditional finite element model. Further, a
distinctive feature of the proposed model compared to other neural network
approaches, is that it can handle high-dimensional parameter spaces (parameter
fields) such as the basal friction at the bed of the glacier, and can therefore
be used for generating samples for uncertainty quantification. We study the
impact of hyper-parameters, number of unknowns and correlation length of the
parameter distribution on the training and accuracy of the Deep Operator
Network on a synthetic ice sheet model. We then target the evolution of the
Humboldt glacier in Greenland and show that our hybrid model can provide
accurate statistics of the glacier mass loss and can be effectively used to
accelerate the quantification of uncertainty. | QiZhi He, Mauro Perego, Amanda A. Howard, George Em Karniadakis, Panos Stinis | 2023-01-26T20:28:34Z | http://arxiv.org/abs/2301.11402v1 | # A Hybrid Deep Neural Operator/Finite Element Method for Ice-Sheet Modeling
###### Abstract
One of the most challenging and consequential problems in climate modeling is to provide probabilistic projections of sea level rise. A large part of the uncertainty of sea level projections is due to uncertainty in ice sheet dynamics. At the moment, accurate quantification of the uncertainty is hindered by the cost of ice sheet computational models. In this work, we develop a hybrid approach to approximate existing ice sheet computational models at a fraction of their cost. Our approach consists of replacing the finite element model for the momentum equations for the ice velocity, the most expensive part of an ice sheet model, with a Deep Operator Network, while retaining a classic finite element discretization for the evolution of the ice thickness. We show that the resulting hybrid model is very accurate and it is an order of magnitude faster than the traditional finite element model. Further, a distinctive feature of the proposed model compared to other neural network approaches, is that it can handle high-dimensional parameter spaces (parameter fields) such as the basal friction at the bed of the glacier, and can therefore be used for generating samples for uncertainty quantification. We study the impact of hyper-parameters, number of unknowns and correlation length of the parameter distribution on the training and accuracy of the Deep Operator Network on a synthetic ice sheet model. We then target the evolution of the Humboldt glacier in Greenland and show that our hybrid model can provide accurate statistics of the glacier mass loss and can be effectively used to accelerate the quantification of uncertainty.
keywords: hybrid model, finite element, neural operator, ice-sheet dynamics, deep learning surrogate +
Footnote †: journal: Elsevier
## 1 Introduction
Ice sheet models are important components of climate models and are crucial for providing projections of sea-level rise. In fact, sea-level rise is due in large part to added water to the ocean originating from mass loss of Greenland and Antarctic ice sheets [1; 2; 3].
Quantifying the uncertainty on the projections of sea-level rise, due to uncertainties in the data and in the models, is an extremely challenging task. The large dimensionality of the parameter space, and high computational cost of ice sheet models make Bayesian inference and uncertainty quantification infeasible, despite the large computational resources available. While there are efficient ways to perform Bayesian inference under certain approximations [4; 5], previous attempts to quantify the uncertainty on sea level rise (e.g., [6; 7; 8]) perform drastic reductions of the dimensionality of the parameter space that are often dictated by feasibility reasons rather than by physical or mathematical arguments.
Several efforts [9; 10; 11; 12; 13; 14; 15; 16; 17; 18] over the last decades focused on efficiently solving the steady state Stokes-like flow equations governing the ice flow, which still represents the most computationally expensive part of an ice flow model. Flow equations need to be solved at each time step. While time steps can be as little as a week, typical temporal periods of interest range from a few decades to centuries, to millennia. In this work we aim at replacing the most expensive part of an ice sheet model, the Stokes-like flow equations, with a deep learning surrogate that is order of magnitudes faster than the finite element based implementation. A similar idea has been pursued by Jouvet et al. [19], where a deep learning model was used to accelerate ice sheet modeling of Paleo simulations. A key requirement for our surrogate, that sets it apart from [19], is that it depends on high-dimensional parameter spaces (parameter fields), such as the basal friction coefficient that determines the basal sliding or the bed topography. This allows us to use the model for inference and for uncertainty quantification. We also note that in paleo simulations
most of the uncertainty comes from the climate forcing, whereas in the simulations in which we are interested here, that spans approximately half a century, model error is a significant source of uncertainty [8], which forces us to have very accurate models. Another related problem, where deep learning models have been used to approximate the parameter-to-velocity map in ice-sheet problems, is presented in [20]. In that work, the authors first find a basis of the operator using principal component analysis, and then use a residual neural network to compute the basis coefficients as a function of the parameters. In contrast to our problem, in [20] only a handful of parameters are considered.
We represent our deep learning surrogate with Deep Operator Networks (DeepONets) [21], which have proven to work well in learning operators in a wide range of applications ranging from fracture mechanics to combustion problems [22; 23; 24; 25; 26]. In its vanilla formulation, a DeepONet contains two deep neural networks, referred to as the _branch_ network and the _trunk_ network. The trunk network takes as input spatial coordinates whereas the branch network takes as input the input fields evaluated at a fixed set of points. DeepONets approximate operators as a linear combination of "basis functions" generated by the trunk network, with coefficients generated by the branch network. The mathematical foundations of DeepONets are based on the universal approximation theorem [27; 28], and, under mild assumptions, it has been proven that DeepONets can approximate with given accuracy any operator [21]. Our DeepONet surrogate takes as input fields the ice thickness and the basal friction field and computes the depth-averaged ice velocity field.
We use the trained DeepONet to build a fast _hybrid_ ice-flow model, where the evolution of the ice thickness is discretized with a classic finite element method, and, at each time step, the ice velocity field (as a function of the ice thickness and the basal friction field) is computed by the DeepONet. A finite element implementation of the ice-flow model is used as the "reference model" and also used to generate data to train the the DeepONet model. We demonstrate our approach on two ice sheet problems: 1. a synthetic ice sheet problem for exploring different hyper-parameters of the DeepONet and for studying the impact of mesh resolution and correlation length on the DeeoONet training and accuracy, and 2. a realistic simulation of the Humboldt glacier, which is one of the largest glaciers in Greenland and one that is expected to greatly contribute to sea-level rise in this century [29]. We show how our DeepONet surrogate can approximate the ice velocity computed by the finite element model very accurately (relative error of 0.4%) and at a fraction of the cost of the finite element model. The hybrid model produces accurate results for the ice thickness (2% relative error over a span of 100 years). We also show how the mass loss of the Humboldt glacier, computed using the hybrid model, is an accurate representation of the finite element model and can be used for computing statistics of sea level rise, yielding a 10 fold speed-up.
In Section 2 we present the mathematical equations that we use to compute the ice thickness and velocity and the probability distribution of the basal friction parameter. In section 3 we introduce the hybrid model, focusing in particular on its DeepONet component. In Section 4 we present the result of training the DeepOpNet for a synthetic test case, studying how the resolution of the input data and the correlation length of the basal friction distribution affect the accuracy and training of the DeepONet. Finally in Section 5 we target the Humboldt glacier and show how hybrid model can be effectively used for computing the statistics of the glacier mass loss. We conclude in Section 6 with a summary.
## 2 Ice Sheet Models
In this section, we briefly introduce the ice sheet models considered in this work, as depicted in Fig. 1.
Let \(x\) and \(y\) denote the horizontal coordinates and \(z\) the vertical coordinate, chosen such that the sea level corresponds to \(z=0\). The ice domain, at time \(t\), can be approximated as a vertically extruded domain \(\Omega\) defined as
\[\Omega(t):=\{(x,y,z)\text{ s.t. }(x,y)\in\Sigma,\text{ and }l(x,y,t)<z<s(x,y,t)\},\]
where \(\Sigma\subset\mathbb{R}^{2}\) is the horizontal extension of the ice. \(\Gamma_{l}(t):=\{(x,y,z)\text{ s.t. }z=l(x,y,t)\}\) denotes the lower surface of the ice at time \(t\), and \(\Gamma_{s}(t):=\{(x,y,z)\text{ s.t. }z=s(x,y,t)\}\) denotes the upper surface of the ice1. The bed topography, which we assume constant in time, is given by \(\Gamma_{b}:=\{(x,y,z)\text{ s.t. }z=b(x,y)\}\). In general, the ice sheet can have ice shelves where the ice is floating. We hence partition the lower surface of the ice \(\Gamma_{l}\) in the grounded part \(\Gamma_{g}=\Gamma_{l}\cap\Gamma_{b}\) (here,
\(l(x,y,t)=b(x,y)\)) and the floating part \(\Gamma_{f}\) under the ice shelf. We partition the lateral boundary of \(\Omega\) in \(\Gamma_{m}\), denoting the ice sheet margin (either terrestrial or marine margin), and, when we only consider a portion of the ice sheet, in \(\Gamma_{d}\), denoting an internal (artificial) boundary often chosen in correspondence of the ice divides.
The thickness of the ice, given by \(H(x,y,t):=s(x,y,t)-l(x,y,t)\), is defined on \(\Sigma\times[0,t_{f}]\) and evolves according to
\[\partial_{t}H+\nabla\cdot(\mathbf{\bar{u}}H)=f_{H} \tag{1}\]
where \(\mathbf{\bar{u}}:=\dfrac{1}{H}\int_{l}^{s}\mathbf{u}\,dz\) is the depth-integrated velocity and \(f_{H}\) is an accumulation rate, accounting for accumulation (e.g., due to snow precipitations) and melting at the upper surface and accumulation/melting at the base of the ice sheet. We need to constrain \(H\) to be non-negative, as there is no guarantee that \(f_{H}\), typically coming from climate models, is consistent with the ice thickness equation.
Ice sheets behave as a shear thinning fluid and can be modeled with the nonlinear Stokes equation [30]. In this work we use simplifications of Stokes equations that are less expensive to solve and that are obtained with scaling arguments based on the fact that glaciers and in particular ice sheets are typically shallow. We consider two such simplifications: the mono-layer higher-order approximation (MOLHO) and the shallow shelf approximation (SSA). The MOLHO model [31] is suitable for both frozen and thawed beds, whereas the simpler SSA model [32; 33] works well only for grounded ice with significant sliding at the bed or for ice shelves where the ice is floating over the water. In the following we detail the Stokes model and its approximations.
### Stokes model
We denote with \(u\), \(v\) and \(w\) the \(x\), \(y\) and \(z\) components of the ice velocity, respectively, and the ice velocity vector is denoted by \(\boldsymbol{u}:=(u,v,w)\). Denoting the pressure with \(p\), and the ice density with \(\rho\), the Stokes equation reads
\[-\nabla\cdot\sigma =\rho\mathbf{g} \tag{2}\] \[\nabla\cdot\mathbf{u} =0 \tag{3}\]
with stress tensor \(\sigma=2\mu\mathbf{D}-pI\), and strain rate tensor \(\mathbf{D}_{ij}(\mathbf{u})=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x _{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\). The non-linear viscosity is given by
\[\mu=\frac{1}{2}A(T)^{-q}\,D_{e}(\mathbf{u})^{q-1} \tag{4}\]
with \(q\leq 1\). In this work we take \(q=\frac{1}{3}\), a typical choice. \(A\) is the ice flow factor that depends on the ice temperature \(T\). The effective strain rate \(D_{e}(\mathbf{u})\) is given by \(D_{e}(\mathbf{u})=\frac{1}{\sqrt{2}}|\mathbf{D}(\mathbf{u})|\), where \(|\cdot|\) denotes the Frobenius norm. The Stokes
Figure 1: Cartoon of an ice sheet in the \(x-z\) plane.
equation is accompanied by the following boundary conditions:
\[\left\{\begin{array}{ll}\sigma\mathbf{n}=0&\text{on }\Gamma_{s}&\text{ stress free, atmospheric pressure neglected}\\ \sigma\mathbf{n}=\rho_{w}\,g\,\min(z,0)\mathbf{n}&\text{on }\Gamma_{m}& \text{boundary condition at the ice margin}\\ \mathbf{u}=\mathbf{u}_{d}&\text{on }\Gamma_{d}&\text{Dirichlet condition at internal boundary}\\ \mathbf{u}\cdot\mathbf{n}=\mathbf{0},\,\,(\sigma\mathbf{n})_{\parallel}= \beta\mathbf{u}_{\parallel}&\text{on }\Gamma_{g}&\text{impenetrability + sliding condition}\\ \sigma\mathbf{n}=\rho_{w}\,g\,\mathbf{z}\,\mathbf{n}&\text{on }\Gamma_{f}& \text{back pressure from ocean under ice shelves}\end{array}\right.\]
Here \(\beta(x,y)\) is the sliding (or friction) coefficient, \(\rho_{w}\) is the density of the ocean water and \(\mathbf{n}\) the unit outward-pointing normal to the boundary. The boundary condition at the margin includes the ocean back-pressure term, when the margin is partially submerged (\(z<0\)). For terrestrial margin, \(z>0\), hence the term becomes a stress-free condition. The friction term \(\beta\) can also depend on \(\mathbf{u}\), depending on the choice of the sliding law.
### Mono-layer higher-order (MOLHO)
The MOLHO model [31] is based on the Blatter-Pattyn approximation [34] that can be derived neglecting the terms \(w_{x}\) and \(w_{y}\) in the strain-rate tensor \(D\) and, using the continuity equation, replacing \(w_{z}\) with \(-(u_{x}+v_{y})\):
\[\mathbf{D}=\begin{bmatrix}u_{x}&\frac{1}{2}(u_{y}+v_{x})&\frac{1}{2}u_{z}\\ \frac{1}{2}(u_{y}+v_{x})&v_{y}&\frac{1}{2}u_{z}\\ \frac{1}{2}u_{z}&\frac{1}{2}v_{z}&-(u_{x}+v_{y})\end{bmatrix}. \tag{5}\]
This leads to the following elliptic equations in the horizontal velocity \((u,v)\)
\[-\nabla\cdot(2\mu\hat{\mathbf{D}})=-\rho g\nabla s \tag{6}\]
with
\[\hat{\mathbf{D}}=\begin{bmatrix}2u_{x}+v_{y}&\frac{1}{2}(u_{y}+v_{x})&\frac{1 }{2}u_{z}\\ \frac{1}{2}(u_{y}+v_{x})&u_{x}+2v_{y}&\frac{1}{2}v_{z}\end{bmatrix}. \tag{7}\]
Here the gradient is two-dimensional: \(\nabla=[\partial_{x},\partial_{y}]^{T}\). The viscosity \(\mu\) is given by (4) with the effective strain rate
\[D_{e}=\sqrt{u_{x}^{2}+v_{y}^{2}+u_{x}v_{y}+\frac{1}{4}(u_{y}+v_{x})^{2}+\frac{ 1}{4}u_{z}^{2}+\frac{1}{4}v_{z}^{2}}.\]
The boundary conditions reads
\[\left\{\begin{array}{ll}2\mu\hat{\mathbf{D}}\,\mathbf{n}=0&\text{on }\Gamma_{s}&\text{stress free, atmospheric pressure neglected}\\ 2\mu\hat{\mathbf{D}}\,\mathbf{n}=\psi\mathbf{n}&\text{on }\Gamma_{m}& \text{boundary condition at at ice margin}\\ \mathbf{u}=\mathbf{u}_{d}&\text{on }\Gamma_{d}&\text{Dirichlet condition at internal boundary}\\ 2\mu\hat{\mathbf{D}}\,\mathbf{n}=\beta\mathbf{u}_{\parallel}&\text{on }\Gamma_{g}& \text{sliding condition}\\ 2\mu\hat{\mathbf{D}}\,\mathbf{n}=0&\text{on }\Gamma_{f}&\text{free slip under ice shelves}\end{array}\right.\]
where \(\psi=\rho g(s-z)\mathbf{n}+\rho_{w}\,g\,\min(z,0)\mathbf{n}\), which can be approximated with its depth-averaged value \(\tilde{\psi}=\frac{1}{2}gH(\rho-r^{2}\rho_{w})\), \(r\) being the the submerged ratio \(r=\max\left(1-\frac{s}{H},0\right)\); \(\mathbf{u}_{\parallel}\) is the component of the velocity \(\mathbf{u}\) tangential to the bed.
MOLHO consists of solving the weak form of the Blatter-Pattyn model, with the ansatz that the velocity can be expressed as :
\[\mathbf{u}(x,y,z)=\mathbf{u}_{b}(x,y)+\mathbf{u}_{v}(x,y)\left(1-\left(\frac{s- z}{H}\right)^{\frac{1}{q}+1}\right).\]
The problem is then formulated as a system of two two-dimensional partial differential equations (PDEs) for \(\mathbf{u}_{b}\) and \(\mathbf{u}_{v}\) (for a detailed derivation see [31].) Note that the depth-averaged velocity is given by \(\tilde{\mathbf{u}}=\mathbf{u}_{b}+\frac{(1+q)}{(1+2q)}\,\mathbf{u}_{v}\).
### Shallow Shelf Approximation (SSA)
The shallow shelf approximation [32] is a simplification of the Blatter-Pattyn model, assuming that the velocity is uniform in \(z\), so \(\mathbf{u}=\mathbf{\bar{u}}\). It follows that \(u_{z}=0\) and \(v_{z}=0\), giving:
\[\mathbf{D}=\begin{bmatrix}u_{x}&\frac{1}{2}(u_{y}+v_{x})&0\\ \frac{1}{2}(u_{y}+v_{x})&v_{y}&0\\ 0&0&-(u_{x}+v_{y})\end{bmatrix},\quad\mathbf{\hat{D}}=\begin{bmatrix}2u_{x}+v_{ y}&\frac{1}{2}(u_{y}+v_{x})&0\\ \frac{1}{2}(u_{y}+v_{x})&u_{x}+2v_{y}&0\end{bmatrix}, \tag{8}\]
and \(D_{e}=\sqrt{u_{x}^{2}+v_{y}^{2}+u_{x}v_{y}+\frac{1}{4}(u_{y}+v_{x})^{2}}\). The problem simplifies to a two-dimensional PDE in \(\Sigma\)
\[-\nabla\cdot\left(2\mu H\mathbf{\hat{D}}(\mathbf{\bar{u}})\right)+\beta \mathbf{\bar{u}}=-\rho gH\nabla s,\quad\text{in }\Sigma\]
with \(\bar{\mu}=\frac{1}{2}\bar{A}(T)^{-\frac{1}{n}}\,D_{e}(\mathbf{\bar{u}})^{ \frac{1}{n}-1}\), where \(\bar{A}\) is the depth-averaged flow factor and with boundary conditions:
\[\left\{\begin{array}{ll}2\mu\mathbf{\hat{D}}(\mathbf{\bar{u}})\,\mathbf{n}= \bar{\lambda}\mathbf{n}&\text{on }\Gamma_{m}&\text{boundary condition at ice margin}\\ \mathbf{\bar{u}}=\mathbf{\bar{u}}_{d}&\text{on }\Gamma_{d}&\text{ Dirichlet condition at internal boundary}\end{array}\right.\]
Recall that \(\bar{\psi}=\frac{1}{2}gH(\rho-r^{2}\rho_{w})\), \(r\) being the the submerged ratio \(r=\max\left(1-\frac{z}{H},0\right)\). With abuse of notation, here \(\Gamma_{m}\) and \(\Gamma_{d}\) are intended to be subsets of \(\partial\Sigma\).
### Distribution of basal friction field
The basal friction field \(\beta\) is one of the main factors that control the ice velocity. It cannot be measured directly and it is typically estimated by solving a PDE-constrained optimization problem, e.g., [35; 36], to assimilate observations of the surface ice velocity. As a result, the basal friction field is affected by both uncertainties in the observations and in the the model. While it is possible to characterize the probability distribution for \(\beta\) using a Bayesian inference approach, e.g., [37], here we adopt a simplified log-normal distribution for \(\beta\). We write the basal friction field as \(\beta=\exp(\gamma)\), where \(\gamma\) is normally distributed as
\[\gamma\sim\mathcal{F}\left(\log(\bar{\beta}),k_{l}\right),\;\;\text{and}\;\;k_ {l}(\mathbf{x}_{1},\mathbf{x}_{2})=a\exp\left(-\frac{|\mathbf{x}_{1}-\mathbf{ x}_{2}|^{2}}{2l^{2}}\right). \tag{9}\]
Here \(\log(\bar{\beta})\) is the mean of the Gaussian process \(\mathcal{F}\) and it is often obtained by assimilating the observed velocities [35], \(l\) is the correlation length and \(a\) is a scaling factor. In this work we choose values of the correlation length and of the scaling factor that produce reasonable results. While an in-depth validation of the chosen parameters is beyond the scope of this work, we explore the dependence of the accuracy of the DeepONet model as a function of the correlation length, as discussed in Section 4.
## 3 Computational Models
In this section we introduce the finite element ice flow model and the hybrid ice flow model. We first perform a semi-implicit time discretization of the ice thickness equation (1):
\[\left\{\begin{array}{lll}H^{n+1}&=&H^{n}-\Delta t\,\nabla\cdot\left(\mathbf{ \bar{u}}^{n}H^{n+1}\right)+\Delta tF_{H}^{n}\\ \mathbf{\bar{u}}^{n}&=&\mathcal{G}(\beta,H^{n})\end{array}\right. \tag{10}\]
where \(H^{n}\) is the approximation of \(H\) at time \(t^{n}=t^{0}+n\Delta t\), for a given time-step \(\Delta t\), and \(F_{H}^{n}=F_{H}(t^{n})\) is the corresponding discrete approximation of the accumulation rate \(f_{H}\). Here, \(\mathcal{G}(\cdot,\,\cdot)\) is the velocity operator that maps the basal friction field and the ice thickness into the depth-averaged velocity vector, based either on the SSA (Sec. 2.3) model or the MOLHO (Sec. 2.2) model. In this work we discretize the thickness equation (10) with finite elements, using streamline upwind stabilization. Similarly, we provide a classic Galerkin finite element discretization of the nonlinear operator \(\mathcal{G}\). The finite element discretization is implemented in FEniCS [38]. We use continuous piece-wise linear finite elements for both the thickness and the velocity fields, and solve the discretized problem with PETSc[39] SNES nonlinear
solvers. We refer to this finite element implementation of (10) as the _finite element ice flow model_ that we use as our a reference model.
The focus of the paper is on avoiding the high computational cost of constructing a finite element approximation of the nonlinear operator \(\mathcal{G}\), and using, instead, a DeepONet approximation of \(\mathcal{G}\), which, in combination with the finite element discretization of the first equation of (10), constitutes the _hybrid ice flow model_. The DeepONet implementation and training are performed using JAX[40]. At each time step, the FEniCS finite element code calls theJAX DeepONet code to compute an approximation of \(\mathcal{G}(\beta,H^{n})\). In the next sections we describe in detail the DeepONet architecture and its training.
### DeepONet approximation
As briefly discussed in the introduction, the main idea of DeepONet is to learn, in general nonlinear, operators mapping between infinite-dimensional function spaces via deep neural networks [21]. Inspired by the universal approximation theorem for operators [27], DeepONet's architecture consists of two neural networks: one is used to encode the input function sampled at fixed sensor points (_branch net_) whereas the other inputs the location coordinates to evaluate the output function (_trunk net_). It has been shown that this architecture of two sub-networks can substantially improve generalization compared to fully connected neural networks [21]. In this study, a DeepONet denoted by \(\mathcal{G}_{\theta}\) is used as a surrogate for the nonlinear operator \(\mathcal{G}\) in Eq. (10),
\[\mathcal{G}_{\theta}(\beta,H^{n})(\mathbf{x})\approx\mathcal{G}(\beta,H^{n})( \mathbf{x}), \tag{11}\]
where \(\theta\) represents the collection of trainable parameters in DeepONet, and the approximated velocity components are
\[\begin{split}\bar{u}_{x}^{n}\approx\mathcal{G}_{\theta}^{x}( \beta,H^{n})(\mathbf{x})&=\sum_{m=1}^{p}b_{m}(\beta,H^{n})t_{m}( \mathbf{x}),\\ \bar{u}_{y}^{n}\approx\mathcal{G}_{\theta}^{y}(\beta,H^{n})( \mathbf{x})&=\sum_{m=p+1}^{2p}b_{m}(\beta,H^{n})t_{m}(\mathbf{x}),\end{split} \tag{12}\]
where \(b_{m}\) and \(t_{m}\) denote the outputs of the branch net and the trunk net, respectively. The details of the DeepONet model is shown in the schematic of Fig. 2. In this setting, the input functions, i.e., the friction \(\beta\) and thickness \(H^{n}\) at the moment \(t^{n}\), evaluated at finite locations (sensors), \(\mathcal{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N}\}\), are mapped as embedded coefficients through the branch net, while the trunk net learns a collection of space-dependent basis functions that are linearly combined with the branch coefficients to approximate the velocity components. Note that the learned operator \(\mathcal{G}_{\theta}(\beta,H^{n})\) is a continuous function with respect to coordinates \(\mathbf{x}\), which are the inputs to the trunk net. For brevity, we denote the DeepONet approximated velocity as \(\bar{\mathbf{u}}^{NN}\).
### DeepONet training
The trainable parameters, i.e., \(\boldsymbol{\theta}\), associated with the DeepONet model are obtained by minimizing the loss function
\[\mathcal{L}(\boldsymbol{\theta})=\frac{1}{N_{\beta}N_{T}}\sum_{i=1}^{N_{\beta }}\sum_{j=1}^{N_{T}}\sum_{\mathbf{x}\in\mathcal{Y}}w_{ij}(\mathbf{x})\bar{ \mathbf{u}}(\mathbf{x},t^{j};\beta_{i})-\mathcal{G}_{\theta}(\beta_{i},H^{j} )(\mathbf{x})|^{2}, \tag{13}\]
where \(w_{ij}(\mathbf{x})\) are weights corresponding to each data point, \(N_{\beta}\) is the number of friction fields \(\{\beta_{i}\}_{i=1}^{N_{\beta}}\) used for different training simulations, \(N_{T}\) is the number of time steps within each simulation to sample the velocity and thickness, \(\bar{\mathbf{u}}(\mathbf{x},t^{j};\beta_{i})\) is the target velocity solution, and \(\mathcal{G}_{\theta}(\beta_{i},H^{j})(\mathbf{x})\) is the predicted value obtained from DeepONet. Both target solution \(\bar{\mathbf{u}}(\mathbf{x},t^{j};\beta_{i}):=\mathcal{G}(\beta_{i},H^{j})( \mathbf{x})\) and DeepONet prediction \(\mathcal{G}_{\theta}(\beta_{i},H^{j})(\mathbf{x})\) are evaluated at the set of locations \(\mathcal{Y}\). The input functions \(\beta_{i}\) and \(H^{j}\) of the branch network are discretized at the fixed set of sensor points \(\mathcal{X}\) (see Fig. 2). In this work it is convenient to choose \(\mathcal{X}\) to be the set of the grid nodes used in the finite element discretization and to take \(\mathcal{Y}=\mathcal{X}\).
In Eq. (13), the penalizing weights \(w_{ij}(\mathbf{x})\) are generally related to the characteristics of training data, i.e., the friction field, time step, and spatial locations. For simplified cases where the target operator presents little variability with
respect to the input parameters, the weights are assumed to be unity, i.e., \(w_{ij}(\mathbf{x})\equiv 1\). However, it is observed in our numerical investigation that using nonuniform (space-dependent) weights can lead to better generalization. To this end, we use the self-adaptive weight estimation approach [41; 42] to adjust the weight parameters through gradient descent along with the network parameters. Assuming that the weights depend only on the space coordinates, i.e., \(w_{ij}(\mathbf{x})=w(\mathbf{x})\), the loss function (13) is modified as
\[\mathcal{L}(\mathbf{\theta},\mathbf{\lambda})=\frac{1}{N_{\beta}N_{T}}\sum_{i=1}^{N_{ \beta}}\sum_{j=1}^{N_{T}}\sum_{\mathbf{x}\in\mathcal{Y}}w(\mathbf{x})|\mathbf{ \tilde{u}}(\mathbf{x},t^{j};\beta_{i})-\mathcal{G}_{\theta}(\beta_{i},H^{j})( \mathbf{x})|^{2}, \tag{14}\]
where \(w(\mathbf{x})\) is further defined as \(m(\lambda(\mathbf{x}))\) in which \(\mathbf{\lambda}=\{\lambda(\mathbf{x})\}_{\mathbf{x}\in\mathcal{Y}}\) are the trainable self-adaptive weight parameters dependent on locations \(\mathbf{x}\), and \(m(\lambda)\) is a mask function defined on \([0,\infty]\) to accelerate convergence [41]. The mask function needs to be differentiable, nonnegative, and monotonically increasing. The polynomial mask \(m(\lambda)=\lambda^{q}\) for \(q=1,2,...\) is adopted in this study.
The key feature of self-adaptive DeepONet training is that the loss \(\mathcal{L}(\mathbf{\theta},\mathbf{\lambda})\) is simultaneously minimized with respect to the network parameters \(\mathbf{\theta}\) but maximized with respect to the self-adaptive parameters \(\mathbf{\lambda}\), i.e.,
\[\min_{\mathbf{\theta}}\max_{\mathbf{\lambda}}\mathcal{L}(\mathbf{\theta},\mathbf{\lambda}). \tag{15}\]
If one uses the gradient descent method, the updated equations of the two sets of parameters at \(v\) iteration are:
\[\begin{split}\mathbf{\theta}^{v+1}&=\mathbf{\theta}^{v}- \eta_{\theta}\nabla_{\theta}\mathcal{L}(\mathbf{\theta}^{v},\mathbf{\lambda}^{v}),\\ \mathbf{\lambda}^{v+1}&=\mathbf{\lambda}^{v}+\eta_{\lambda }\nabla_{\lambda}\mathcal{L}(\mathbf{\theta}^{v},\mathbf{\lambda}^{v}),\end{split} \tag{16}\]
where \(\eta_{\theta}\) and \(\eta_{\lambda}\) are the learning rates for updating \(\mathbf{\theta}\) and \(\mathbf{\lambda}\), respectively. The employment of self-adaptive weights can significantly improve the prediction accuracy at the localized features in the solution by properly balancing the terms via the corresponding weights [41; 43].
### Data preparation & training details
To generate sufficient training data, we perform simulations of the finite element ice flow model (10) based on either SSA or MOLHO and considering \(N_{\beta}\) basal friction samples, \(\beta_{i}(\mathbf{x})\), \(i=1,...,N_{\beta}\), taken from distribution (9). For
Figure 2: Schematic representation of DeepONet. The branch net takes as inputs the functions \(\beta(\mathbf{x})\) and \(H^{n}(\mathbf{x})=H(t_{n},\mathbf{x})\) evaluated at \(N\) fixed sensor points \(\mathcal{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) and returns the feature embedding vector \(\mathbf{b}\in\mathbb{R}^{2p}\) as output. The trunk net takes the continuous coordinates \(\mathbf{x}\in\mathcal{Y}\) as input and outputs another embedding vector \(\mathbf{t}\in\mathbb{R}^{2p}\). The embedding vectors \(\mathbf{b}\) and \(\mathbf{t}\) are combined by dot product to generate the solution operator, \(\mathcal{G}_{\theta}(\beta,H^{n})(\mathbf{x})\). The trainable parameters \(\theta\) associated with the branch net and the trunk net are optimized by minimizing the loss function defined as a weighted mean square error (see Eq. 13). In this study, we set \(\mathcal{Y}=\mathcal{X}\) for simplicity.
each sample \(\beta_{i}\), we compute the thickness and depth-integrated velocity using the finite element flow model and store their values \(\{H_{i}^{j}\}_{i=1}^{N_{T}}\) and \(\{\bar{\mathbf{u}}_{i}^{j}\}_{i=1}^{N_{T}}\) at times \(t^{j}\), \(j=1,2,...,N_{T}\) and grid points \(\mathbf{x}_{i}\in\mathcal{X}\).
In training the DeepONet, the input functions, \(\beta(\mathbf{x})\) and \(H^{n}(\mathbf{x})\), as well as the DeepONet operator \(\mathcal{G}_{\theta}\) are evaluated at points \(\mathcal{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N}\}\), as described in Fig. 2. Therefore, a DeepONet training dataset is expressed as a triplet of the form,
\[\left[\left\{[\boldsymbol{\beta}^{(k)},\boldsymbol{H}^{(k)}]\right\}_{k=1}^{N _{\theta}N_{T}},\left\{\mathcal{Y}^{(k)}\right\}_{k=1}^{N_{\theta}N_{T}}, \left\{\bar{\mathbf{U}}^{(k)}\right\}_{k=1}^{N_{\theta}N_{T}}\right], \tag{17}\]
where
\[\begin{split}&[\boldsymbol{\beta}^{(k)},\boldsymbol{H}^{(k)}]=[ \beta_{j}(\mathbf{x}_{1}),\beta_{j}(\mathbf{x}_{2}),...,\beta_{j}(\mathbf{x}_ {N}),H_{i}^{j}(\mathbf{x}_{1}),H_{i}^{j}(\mathbf{x}_{2}),...,H_{i}^{j}( \mathbf{x}_{N})],\\ &\mathcal{Y}^{(k)}\equiv\mathcal{X}=\{\mathbf{x}_{1},\mathbf{x} _{2},...,\mathbf{x}_{N}\},\\ &\bar{\mathbf{U}}^{(k)}=[\bar{\mathbf{u}}_{i}^{j}(\mathbf{y}_{1} ),\bar{\mathbf{u}}_{i}^{j}(\mathbf{y}_{2}),...,\bar{\mathbf{u}}_{i}^{j}( \mathbf{y}_{N_{u}})].\end{split} \tag{18}\]
Here, the superscript \(k\) is defined as \(k=(i-1)N_{T}+j\) with \(i=1,...,N_{\beta}\) and \(j=1,...,N_{T}\), denoting the index of input parameters associated with time steps and friction samples.
Regarding the basal friction fields, we adopt the following procedure to split the training and testing data: if \(N_{b}\) friction fields are generated from the Gaussian process described in Section 2.4, the simulation solutions associated with the first 20 fields, \(\{\beta_{i}\}_{i=1}^{20}\), are exclusively used for testing, while the rest \(N_{\beta}=N_{b}-20\) fields, \(\{\beta_{i}\}_{i=21}^{20+N_{\beta}}\), are selected for training the DeepONet model. Unless stated otherwise, for the given training basal friction fields the finite element solutions at time steps \(t=1,2,...,100\) (i.e., \(N_{T}=100\)) are used for the training.
In the following tests, the default training scheme uses the Adam optimizer with a learning rate \(1\times 10^{-3}\). ReLU is selected as the activation function, and the batch size is 200. The architecture of both the branch net and the trunk net is a fully connected neural network consisting of 4 hidden layers and 300 neurons per layer (denoted as \(4\times 300\)). To mitigate possible overfitting in training, we also introduce an \(\ell^{2}\) regularization in (14) with a small penalty coefficient \(5\times 10^{-5}\). However, we note that we did not observe any signs of conventional overfitting during our numerical tests, and the additional regularization has a negligible impact on the DeepONet accuracy.
## 4 Synthetic Ice-Sheet Problem
In this section we apply our approach to a well-known benchmark in ice sheet modeling, the MISMIP problem [44]. We use this problem to explore how hyper-parameters affect the training of the DeepONet and the accuracy of the hybrid model.
The problem geometry is defined by a marine ice stream that is partially floating. The ice domain is 640 km long and 80 km wide (\(\Omega=[0,\,640\,\mathrm{km}]\times[0,\,80\,\mathrm{km}]\)). The bed topography is provided in [44]. We consider an initial thickness (note that this is different from the one in [44]):
\[H(x,y)=100\,\mathrm{m}\left(\frac{3}{2}+\frac{1}{2}\tanh\left(\frac{400\, \mathrm{km}-x}{100\,\mathrm{km}}\right)\right).\]
We prescribe the normal velocity at the upstream boundary (\(x=0\,\mathrm{km}\)) and lateral boundaries (\(y=0\,\mathrm{km}\) and \(y=80\,\mathrm{km}\)) to be zero, and free-slip conditions in the direction tangential to these boundaries. We prescribe stress-free conditions at the outlet boundary (\(x=640\,\mathrm{km}\)). No boundary conditions are prescribed for the thickness equation, as there are no inflow boundaries. We use a constant mean basal friction field \(\tilde{\beta}=5000\,\mathrm{Pa}\) yr/m and a scaling factor \(a=0.2\) in (9). As described in Section 3.2, for each sample \(\beta\) from (9), we run the finite-element ice flow model for 100 years, using a constant forcing \(f_{H}=0.3\) m/ yr, and compute the ice thickness \(H\). We then use the thickness data to train the DeepONet.
For ease of analysis, the mean squared error (MSE) and relative squared error (RSE), given as follows, are used to evaluate the DeepONet performance:
\[e_{MSE}=\frac{1}{N}\sum_{i=1}^{N}\|u_{i}-u_{i}^{*}\|^{2},\quad e_{RSE}=\frac{ \sum_{i=1}^{N}\|u_{i}-u_{i}^{*}\|^{2}}{\sum_{i=1}^{N}\|u_{i}^{*}\|^{2}}\]
where \(u_{i}\) and \(u_{i}^{*}\) denote the prediction and reference values, respectively, and \(N\) is the number of data.
Table 1 shows that DeepONet converges well with respect to the size \(N_{\beta}\) of the training dataset and that using more training data enhances generalization capacity. The table also shows the impact of the correlation length magnitude on the approximation accuracy. As expected, in order to maintain the same level of accuracy, larger training datasets are required for smaller correlation lengths. Another important piece of information from the table is that DeepONets can approximate with a similar accuracy both the lower-fidelity SSA model and higher-fidelity MOLHO model.
Taking the case with \(\{\beta_{i}\}_{i=21}^{300}\) as an example, the curves of training and testing losses are plotted in Fig. 3. The result shows that the DeepONet models converge stably for all three different correlation lengths, and the prediction accuracy on testing cases reaches a plateau after 50000 epochs. It is observed that the generalization gap2 remains nearly the same for the data with different correlation lengths when the size of the training dataset is fixed.
Footnote 2: The difference between a model’s performance on training data and its performance on unseen testing data drawn from the same distribution.
The trained DeepONet model \(\mathcal{G}_{\theta}(\beta,H^{j})(\mathbf{x})\) is able to predict the velocity field \(\mathbf{\bar{u}}^{NN}(\mathbf{x})\) at any time \(t^{j}\) for the given friction field \(\beta\) and thickness field \(H^{j}\). The DeepONet predictions at \(t=99\) yr for an exemplary training case corresponding to correlation lengths \(l=20,40,80\) km are presented in Fig. 4. The results in Fig. 4(g)-(i) show that more localized features appear in the velocity solution with a smaller correlation length, e.g., the case of \(l=20\) km. The RSEs between the predicted and reference velocity fields at \(t=99\) yr are \(3.61\times 10^{-4}\), \(2.57\times 10^{-3}\), and \(7.96\times 10^{-3}\) for the correlation lengths \(l=80,40\), and \(20\) km, respectively, indicating the excellent learning capacity of DeepONet on the training velocity fields.
To examine the generalization performance, we test the trained DeepONet on an unseen test case (\(\beta_{6}\)) with \(l=20\) km at two different time instances, as shown in Fig. 5. The relative squared errors at \(t=18\) and \(t=94\) yr are \(5.67\times 10^{-2}\) and \(4.88\times 10^{-2}\), respectively. We observe that the DeepONet accuracy does not depend significantly on the time \(t\) at which the input thickness is evaluated.
Lastly, we investigate the effect of mesh resolution on the DeepONet performance. We use the same \(4\times 300\) DeepONet architecture as before, but we change the size of the input layer to accommodate input data of different resolutions. Table 2 presents the relative squared errors of the DeepONet model against the training dataset \(\{\beta_{i}\}_{i=21}^{300}\) and testing dataset \(\{\beta_{i}\}_{i=1}^{20}\) under different mesh resolutions of \(36\times 9\), \(60\times 15\), and \(100\times 25\). Overall, the accuracy of
\begin{table}
\begin{tabular}{c|c c c c c} \hline & \multicolumn{2}{c}{\(l=80\) km} & \multicolumn{2}{c}{\(l=40\) km} & \multicolumn{2}{c}{\(l=20\) km} \\ \hline Training dataset & SSA & MOLHO & SSA & MOLHO & SSA & MOLHO \\ \hline \(\{\beta_{i}\}_{i=21}^{200}\) & \(7.37\times 10^{-5}\) & \(7.31\times 10^{-5}\) & \(2.09\times 10^{-4}\) & \(1.77\times 10^{-4}\) & \(2.92\times 10^{-4}\) & \(3.04\times 10^{-4}\) \\ \hline \(\{\beta_{i}\}_{i=21}^{20}\) & \(4.84\times 10^{-5}\) & \(4.72\times 10^{-5}\) & \(1.32\times 10^{-4}\) & \(1.47\times 10^{-4}\) & \(2.54\times 10^{-4}\) & \(2.11\times 10^{-4}\) \\ \hline \(\{\beta_{i}\}_{i=21}^{40}\) & \(3.62\times 10^{-5}\) & \(4.09\times 10^{-5}\) & \(1.05\times 10^{-4}\) & \(0.97\times 10^{-4}\) & \(2.28\times 10^{-4}\) & \(1.97\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 1: MISMIP test case with the SSA and MOLHO models. The mean square errors of the DeepONet training with different training dataset sizes under various correlation lengths \(l\). The testing error is evaluated on the same size of testing data of \(\{\beta_{i}\}_{i=1}^{20}\).
Figure 3: The loss plots of DeepONet training for the MISMIP testcase with SSA model under different correlation lengths: (a) \(l=80\) km; (b) \(l=40\) km; (c) \(l=20\) km. The simulation data associated with \(\{\beta_{i}\}_{i=21}^{300}\) is used as the training data while \(\{\beta_{i}\}_{i=1}^{20}\) is used as testing data. At the final epoch (\(300,000\)), the training MSEs are \(2.60\times 10^{-6}\), \(1.03\times 10^{-5}\), and \(3.33\times 10^{-5}\), respectively.
DeepONet remains comparable for the various mesh resolutions. The training time for DeepONet under different mesh resolutions is also provided in Table 2, indicating a linear relation between the training time and the size of meshes (i.e., the size of the dataset).
## 5 Hybrid Modeling of Humboldt Glacier
In this section we consider the Humboldt glacier, one of the largest glaciers in Greenland. In Fig. 6, we report the Humboldt bed topography, ice surface elevation and ice thickness obtained from observations, refer to [29] for details on how these fields are collected and processed. These fields will be use to determine the problem geometry and the initial ice thickness \(H^{0}\). The mean value \(\bar{\beta}\) of the basal friction in (9) is obtained with a PDE-constrained optimization approach [35] where the mismatch between the computed and observed surface velocities are minimized. Fig. 7 shows \(\bar{\beta}\) together with a couple of samples of the basal friction from (9).
Similarly to the MISMIP case, for each sample of \(\beta\), obtained from (9) with correlation length \(l=50\) km and scaling \(a=0.2\), the ice finite element flow model is run forward in time for 100 yr, using a climate forcing generated according to the _Representative Concentration Pathway 2.6_ (see [29] for the problem definition and the data used including the mean basal friction \(\bar{\beta}\)). The collected thickness and velocity simulation data are used to train the DeepONet model.
### DeepONet Training
We first evaluate the performance of DeepONet for different ice approximation models (MOLHO and SSA). Figs. 8a-c present the plots of training and testing errors corresponding to three different DeepONet cases, i.e., training with 1) simulation data obtained from the SSA ice model, 2) simulation data obtained from the MOLHO ice model, and 3) simulation data obtained from the MOLHO ice model together with the self-adaptive scheme described in (14)-(16). At
\begin{table}
\begin{tabular}{c|c|c c c c c c} \hline \hline & \multicolumn{4}{c}{\(l=80\) km} & \multicolumn{2}{c}{\(l=40\) km} & \multicolumn{2}{c}{\(l=20\) km} \\ \hline Mesh resolution & Time & training & testing & training & testing & training & testing \\ \hline \(36\times 9\) & 1.13 hrs & \(2.97\times 10^{-4}\) & \(8.02\times 10^{-3}\) & \(0.90\times 10^{-3}\) & \(2.70\times 10^{-2}\) & \(5.28\times 10^{-3}\) & \(6.19\times 10^{-2}\) \\ \hline \(60\times 15\) & 2.80 hrs & \(3.03\times 10^{-4}\) & \(5.70\times 10^{-3}\) & \(1.14\times 10^{-3}\) & \(1.99\times 10^{-2}\) & \(5.48\times 10^{-3}\) & \(4.25\times 10^{-2}\) \\ \hline \(100\times 25\) & 7.24 hrs & \(4.55\times 10^{-4}\) & \(4.02\times 10^{-3}\) & \(1.56\times 10^{-3}\) & \(2.82\times 10^{-2}\) & \(5.44\times 10^{-3}\) & \(4.60\times 10^{-2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: MISMIP testcase with the SSA model under different mesh resolutions. The relative squared errors of the DeepONet model against the training dataset \(\{\beta_{l}\}_{l=21}^{1300}\) and testing dataset \(\{\beta_{l}\}_{l=1}^{20}\) under various correlation lengths \(l\). The clock time used to train DeepONet based on a given mesh resolution remains the same for different correlation lengths.
Figure 4: The DeepONet prediction at \(t=99\) yr for an exemplary training case (\(\beta_{25}\)) corresponding to correlation lengths \(l=20,40,80\) km. (a) - (c): \(\log_{10}(\beta)\); (d) - (f): the thickness \(H\); (g)-(i): The modulus of the predicted velocity \(\mathbf{\tilde{u}}^{NN}\)]. The relative squared errors are \(3.61\times 10^{-4}\), \(2.57\times 10^{-3}\), and \(7.96\times 10^{-3}\) for the correlation lengths \(l=80\), \(40\), and \(20\) km, respectively. The simulation data associated with \(\{\beta_{l}\}_{l=21}^{300}\) is used as the training data.
the last epoch (\(300,000\)), the relative squared errors of these three DeepONet models on the testing data are \(3.74\times 10^{-3}\), \(3.59\times 10^{-3}\), and \(2.16\times 10^{-3}\), respectively. The comparison of results in Figs. 8a and b shows that training DeepONet with MOLHO simulation data yields higher prediction accuracy than the low-order SSA data, which is consistent with our observation for the MISMIP testcase. In the following, we will only consider the MOLHO model, given that it better describes the ice sheet dynamics compared to the SSA model, and it can be well approximated by our DeepONet model. We also observe in Fig. 8c that the employment of the self-adaptive weighting scheme significantly improves the training and testing performance in the Humboldt glacier testcase, reducing the testing error by \(40\%\).
We further study the impact of using self-adaptive weights in Figs. 9 and 10, where we show the prediction errors at different \(\beta\) samples for different choices of adaptive weighting schemes. In Fig. 9 we report the results for a \(\beta\) sample taken from the training dataset, whereas in Fig. 10 we consider a sample from the testing dataset. In both cases, the DeepONet model trained with the self-adaptive weighting scheme with \(m(\lambda)=\lambda^{4}\) yields the best performance, which is consistent with the results in Fig. 8. The self-adaptive weighting scheme especially helps mitigate the prediction errors in the interior of the domain and the region at the outlet (i.e., northwest) region. Given the improved prediction, in the following sections we will present the DeepONet models trained with the self-adaptive weighting scheme with \(m(\lambda)=\lambda^{4}\).
Figure 5: The DeepONet prediction for an exemplary test case (\(\beta_{6}\)) with the correlation length \(l=20\) km: (a) \(\log_{10}(\beta)\); (b) and (c) are the maps of reference velocity modulus \(|\mathbf{\bar{u}}|\) at \(t=18\) and \(t=94\) yr, respectively; (d) and (e) are the point-wise errors of the velocity modulus between the reference and DeepONet predictions \(|\mathbf{\bar{u}}-\mathbf{\bar{u}}^{NN}|\) at \(t=18\) and \(t=94\) yr, respectively, where the corresponding relative squared errors are \(5.67\times 10^{-2}\) and \(4.88\times 10^{-2}\).
Figure 6: Observations for Humboldt glacier for the initial year 2007. Left: bed topography [m] from [45], Center: ice surface elevation [m], Right: thickness [m]. Additional details on the collection and processing of these data can be found in [29].
### Hybrid model: DeepONet embedded in finite element solver
In this section we study the accuracy and cost of the hybrid ice flow model with respect to the finite element model. As explained in Section 3, the hybrid model approximates at each time step the operator \(\mathcal{G}\) with the trained DeepONet model \(\mathcal{G}_{\theta}\). Because the DeepONet approximation is much cheaper than the finite element approximation, the hybrid solver is significantly more efficient than a traditional finite element solver. We study the approximation properties and computational savings of using the hybrid model for computing the evolution of the Humboldt glacier thickness over time, and then focus in particular on how well the hybrid model can approximate the glacier mass change. We finally show how the hybrid model can be used to produce statistics of the glacier mass loss.
#### 5.2.1 Thickness evolution over time
In this section we compare the ice thickness computed with the finite-element model, and with the hybrid model. We take 8 samples of beta (not used to train the DeepONet) from distribution (9). We then run the finite-element and the hybrid models for 150 years. Results of the comparison are shown in Fig. 11. The plot on the left shows the variability of the thickness, over time, with respect to the samples of \(\beta\), using the same model. The plot on the right shows the relative difference between the ice thickness computed with the finite-element model and the one computed with the hybrid model. The relative differences due to the models are significantly smaller than the variability with respect to the different samples. Moreover, for \(t<100\) years, which is the period used for training the DeepONet, the relative differences between the two models are small, \(3\%\) at most. Differences increase in the extrapolation region (\(100-150\) years), however the increase is mostly linear, which signifies robustness of the hybrid approximation.
Figure 8: The loss plots of DeepONet training for different ice models: (a) SSA; (b) MOLHO; (c) MOLHO with self-adaptive (SA) weighting scheme (\(m(\lambda)=\lambda^{4}\)). The simulation data associated with \(\{\beta_{i}\}_{i=21}^{300}\) is used as the training data while \(\{\beta_{i}\}_{i=1}^{20}\) is used as testing data. At the final epoch (\(300,000\)), the corresponding testing MSEs of these three DeepONet models are \(4.23\times 10^{-6},4.02\times 10^{-6}\), and \(2.42\times 10^{-6}\), indicating the enhanced generalization by using the self-adaptive weighting scheme.
Figure 7: Mean value of the basal friction \(\bar{\beta}\) (left) and two samples of the basal friction using (9).Units: [Pa yr / m].
#### 5.2.2 Glacier mass-loss over time
As explained in the introduction, the mass change of a glacier over the years is one of the most important quantities of interest in ice sheet modeling because it directly affects the net amount of water added to the oceans and hence the potential sea level rise. In this work, we compute the mass of the glacier only considering the ice that is above flotation, because changes in the mass of ice that is afloat do not affect the sea level; for details, see [46]. In Fig. 12, we show the mass change (mass at time \(t\) minus mass at time \(t_{0}=0\)) as a function of time for the same samples of the basal friction used for Fig. 11. While there are some small discrepancies between the finite-element and hybrid models, the two model are in very good agreement overall, especially in the first 100 years, which are within the period of ice simulation data used for training the DeepONet model, with the largest difference being \(\approx 10\%\). We also note that the qualitative behaviors of the two models are very similar in the extrapolation region (\(100-150\) years).
#### 5.2.3 Computing statistics on quantity of interest using Hybrid model
Finally, we demonstrate how the hybrid model can be effectively used to compute statistics of the glacier mass change. We take unseen 2000 samples of \(\beta\) from distribution (9), and run both the hybrid model and the finite-element model for 100 years and 150 years for each sample. We then compute the glacier mass change (using only the ice above flotation) and show histograms (Fig. 13) of the mass-change distribution, comparing the differences between the reference finite element model and the hybrid model. The results demonstrate that the hybrid model can accurately compute the statistics of mass change, and therefore has the potential to be used to significantly accelerate the uncertainty quantification analysis for sea-level projections due to ice-sheet mass change. The discrepancies between the results computed with the reference finite element model and the hybrid model are likely small in practical applications, and, if needed, they can be corrected using a multifidelity approach where the hybrid model is used as low-fidelity model and the finite-element model as the high-fidelity model; see e.g., [47]. The figure also shows the impact of training the DeepONets using self-adaptive weights and uniform weights. It seems that the use of self-adaptive weights in training can lead to a small bias in the hybrid modeling to underestimate the mass loss. More investigation is needed
Figure 9: The DeepONet prediction for an exemplary training case (\(\beta_{23}\)) at \(t=99\) yr: (a) basal friction \(\beta\) in [Pa yr/m]; (b) Thickness \(H\) in [m]; (c) the reference velocity modulus \(|\bar{\mathbf{u}}|\) in [m/yr]; (d) the point-wise errors ([m/yr]) of the DeepONet; (e) the point-wise errors ([m/yr]) of the DeepONet trained with self-adaptive weighting scheme \(m(\lambda)=\lambda^{2}\); (f) the point-wise errors ([m/yr]) of the DeepONet trained with self-adaptive weighting scheme \(m(\lambda)=\lambda^{4}\). The relative squared errors corresponding to (d)-(f) are \(6.29\times 10^{-4}\), \(5.00\times 10^{-4}\), and \(4.18\times 10^{-4}\), respectively.
to understand the cause of this bias and to confirm that this phenomenon is general and not specific to this particular glacier and the settings we used.
#### 5.2.4 Computational saving using Hybrid model
Table 3 shows the computational times for running the finite element and hybrid models, when using the MOLHO approximation. Overall, we see almost a 5-fold speedup when using the hybrid model over the finite element model. The total computational costs includes time to allocate memory, initialize data and for intput/output. If we only consider the time to solve the coupled model system (10), we have a 11-fold speedup. The evaluation of the DeepONet takes only 4.99s of the 9.46s taken to solve the hybrid model. We believe that there is margin for improvement in real applications. While we trained the DeepONet on GPUs, our prototype FEniCS code can only run on CPUs, therefore the results in this section refers to simulations run on CPUs. We expect that the DeepONet would benefit more from running on GPUs than a the classic finite element model, because it is still challenging to efficiently run implicit nonlinear solvers on GPUs (see [48], in the context of a production ice sheet model), whereas modern machine learning code can take full advantage of GPUs. The cost of the finite element solver scales with increasing mesh resolutions whereas DeepONet can maintain the same level of predictive accuracy and efficiency for various mesh resolutions (as shown in Table 2). Moreover, we expect that an hybrid model would be significantly more efficient, compared to the corresponding finite element code, when higher-order approximations of the velocity solver are considered. In fact, a Stokes solver can be an order of magnitude slower than the MOLHO model considered here, whereas we expect the cost of the DeepONet to be fairly independent from the model chosen for the velocity solver, as we observed when comparing the SSA and the MOLHO DeepONet models.
Figure 10: The DeepONet prediction for an exemplary testing case (\(\beta_{6}\)) at \(t=92\) yr: (a) \(\beta\) in [Pa yr/m]; (b) Thickness \(H\) in [m]; (c) the reference velocity modulus \(|\overline{\mathbf{u}}|\) in [m/yr]; (d) the point-wise errors ([m/yr]) of the DeepONet; (e) the point-wise errors ([m/yr]) of the DeepONet trained with self-adaptive weighting scheme \(m(\lambda)=\lambda^{2}\); (f) the point-wise errors ([m/yr]) of the DeepONet trained with self-adaptive weighting scheme \(m(\lambda)=\lambda^{4}\). The relative squared errors corresponding to (d)-(f) are \(3.36\times 10^{-3}\), \(7.66\times 10^{-3}\), and \(2.88\times 10^{-3}\), respectively.
## 6 Summary
We developed a hybrid model for ice sheet dynamics by combining a classic finite-element discretization for the ice thickness equation with a DeepONet approximation of the ice momentum equation, which is the most expensive part of a traditional ice sheet computational model. A distinctive feature of our hybrid model is that it can handle high-dimensional parameter spaces, which is critical for accounting for the uncertainty in parameter fields like the basal friction coefficient. We demonstrated that the hybrid model can accurately compute the dynamics of a real glacier an order of magnitude faster than a traditional ice sheet model. As explained in Section 5.2.4, the computational savings are likely to be larger when using production ice-sheet codes. Moreover, we showed that the hybrid model produces accurate statistics of the mass loss of the Humboldt glacier over a period of one hundred years and can therefore be used to accelerate uncertainty quantification analysis of sea-level projections due to ice sheets. Future research directions include scaling up our approach to target larger problems, such as using higher-resolution data or targeting the evolution
Figure 11: Left: relative difference over time between the ice thickness \(H_{\beta}\) associated to sample \(\beta\) and the mean ice thickness. Right: relative difference over time between the ice thickness computed with the finite element model and the hybrid model.
Figure 12: Mass change [gigatons] over time for different samples of the basal friction coefficient computed using the finite element model (left) and the hybrid model (right).
\begin{table}
\begin{tabular}{c|c c} \hline Times per sample (s) & Total & Solving Eq. (10) \\ \hline Finite-element model & 123.30 & 105.20 \\ Hybrid model & 24.15 & 9.46 \\ \hline Ratio & 19.59 \% & 8.99\% \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of computational time per sample between the finite-element and hybrid models when using MOLHO model for the velocity solver. The provided average times are estimated based on 50 simulations with different friction fields.
of the entire Greenland ice sheet, and performing uncertainty quantification analysis using the hybrid model.
## 7 Acknowledgements
The authors wish to thank L. Lu for helpful discussions, K. C. Sockwell for co-developing the ice-sheet code, and T. Hillebrand for generating the Humboldt grid.
The work is supported by the U.S. Department of Energy, Advanced Scientific Computing Research program, under the Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) project and under the SciDAC-BER Probabilistic Sea Level Projections from Ice-Sheets and Earth System Models (ProSPect) partnership. The authors also acknowledge the support from the UMI Seed Grant and the Minnesota Supercomputing Institute (MSI) at the University of Minnesota for providing resources that contributed to the research results reported within this paper.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
Pacific Northwest National Laboratory (PNNL) is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830. The computational work was performed with resources from PNNL Institutional Computing at Pacific Northwest National Laboratory.
This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States
Figure 13: Histogram of the distribution of the Humboldt mass change over a period of 100 years and 150 years. The histogram has been generated by simulating the mass change corresponding to 2000 samples from distribution (9). The DeepONet used for the results on the right (b and d) has been trained using self-adaptive weights, whereas uniform weights have been used for the results on the left (a and c).
Government. |
2303.07419 | Mapping the Decline with Redshift of Dusty Star-forming Galaxies Using
JWST and SCUBA-2 | We use JWST NIRCam observations of the massive lensing cluster field A2744 to
develop a red galaxy selection of f(F444W) > 1 uJy and f(F444W)/f(F150W) > 3.5
that picks out all 9 >4.5-sigma ALMA 1.1 or 1.2 mm sources and 17 of the 19
>5-sigma SCUBA-2 850 micron sources in the covered areas. We show that by using
the red galaxies as priors, we can probe deeper in the SCUBA-2 850 micron
image. This gives a sample of 44 >3-sigma SCUBA-2 850 micron sources with
accurate positions, photometric redshifts, and magnifications. To investigate
why our red galaxy selection picks out the 850 micron sources, we next analyze
an extended sample of 167 sources with f(F444W) >0.05uJy and f(F444W)/f(F150W)
>3.5. We find that the fainter f(F444W) sources in this sample are too faint to
be detected in the SCUBA-2 850 micron image. We also show that there is a
strong drop between z<4 and z>4 (a factor of around 5) in the ratio of the
far-infrared luminosity estimated from the 850 micron flux to the nuLnu(5000)
at rest-frame 5000A. We argue that this result may be due to the high-redshift
sources having less dust content than the lower redshift sources. | A. J. Barger, L. L. Cowie | 2023-03-13T19:00:01Z | http://arxiv.org/abs/2303.07419v2 | # Mapping the Decline with Redshift of Dusty Star-forming Galaxies Using JWST and SCUBA-2
###### Abstract
We use JWST NIRCam observations from 1.5 \(\mu\)m to 4.44 \(\mu\)m of the massive lensing cluster field A2744 to show that extreme red selected galaxies (\(f_{\rm F444W}>1\)\(\mu\)Jy and \(f_{\rm F444W}/f_{\rm F150W}>3.5\)) pick out all \(9>4.5\sigma\) ALMA 1.1 or 1.2 mm sources and 17 of the \(19>5\sigma\) SCUBA-2 850 \(\mu\)m sources in the covered areas. Next, we use the red selected galaxies as priors to probe deeper in the SCUBA-2 850 \(\mu\)m image, identifying a sample of \(44>3\sigma\) SCUBA-2 850 \(\mu\)m sources with accurate positions, photometric redshifts, and magnifications. We show that there is a strong drop (a factor of around 5) in the ratio of the far-infrared luminosity to the bolometric luminosity at rest-frame 5000 A between \(z<4\) and \(z>4\), which we argue is due to the high-redshift sources having much lower extinction than the lower redshift sources.
cosmology: observations -- galaxies: evolution -- galaxies: starburst 0000-0002-4882-887X]A. J. Barger
0000-0002-8807-7886]L. L. Cowie
## 1 Introduction
Distant, dusty, extremely luminous galaxies (e.g., Smail et al., 1997; Barger et al., 1998; Hughes et al., 1998; Eales et al., 1999) are some of the most powerfully star-forming galaxies in the universe and are significant contributors to the total star formation history from \(z\sim 2\) to at least \(z\sim 5\)(e.g., Barger et al., 2000, 2014; Chapman et al., 2005; Wardlow et al., 2011; Casey et al., 2013; Swinbank et al., 2014; Cowie et al., 2017; Zavala et al., 2021). These dusty star-forming galaxies (DSFGs) (also known as submillimeter galaxies, or SMGs) are most easily found through wide-field submillimeter/millimeter imaging on single-dish telescopes, such as the James Clerk Maxwell Telescope (JCMT) with the SCUBA-2 camera (Holland et al., 2013), or, in the near future, the Large Millimeter Telescope with the ToITEC camera (Wilson et al., 2020).
The natural limit of single-dish submillimeter/millimeter surveys is the depth where confusion--the blending of sources, or where the noise is dominated by unresolved contributions from fainter sources--becomes important. For example, Cowie et al. (2017) give a confusion limit of 1.65 mJy for 850 \(\mu\)m observations using SCUBA-2 on the 15 m JCMT.
The lack of positional accuracy is also a major problem when trying to ascertain the properties of DSFGs. Such identifications are critical for estimating photometric redshifts, modeling spectral energy distributions (SEDs), and determining morphologies. Historically, deep radio interferometric images were used to identify counterparts to SMGs (e.g., Barger et al., 2000; Smail et al., 2000; Ivison et al., 2002; Chapman et al., 2003), while, more recently, submillimeter/millimeter interferometry with the Submillimeter Array (SMA; Ho et al., 2004), NOEMA, and, most powerfully, the Atacama Large Millimeter/submillimeter Array (ALMA) have become essential tools for obtaining accurate positions, as well as for resolving some single-dish sources into multiple submillimeter/millimeter sources (e.g., Wang et al., 2011). However, interferometric observations have small fields of view, which make direct searches (e.g., Dunlop et al., 2017; Gonzalez-Lopez et al., 2017; Franco et al., 2018; Umehata et al., 2018; Casey et al., 2021; Fujimoto et al., 2023), or even follow-up observations of sources detected in single-dish surveys (e.g., Barger et al., 2012; Walter et al., 2012; Hodge et al., 2013; Chen et al., 2014; Cowie et al., 2018; Stach et al., 2019; Jones et al., 2021; Cooper et al., 2022; Cairns et al., 2023), quite costly.
It has been recognized for some time that galaxies with extremely red infrared colors, such as the
4.5 \(\mu\)m selected KIEROs of Wang et al. (2012) or the \(H\)-4.5 \(\mu\)m (also \(H\)-3.6 \(\mu\)m) selected HIEROs of Caputi et al. (2012), Wang et al. (2016), and Alcalde Pampliega et al. (2019), are effective in picking out submillimeter/millimeter galaxies (Wang et al., 2012, 2019). However, the advent of JWST, with its extremely deep, very high spatial resolution near-infrared (NIR) observations, is set to revolutionize this field.
Using the CEERS JWST NIRCam data (JWST-ERS-1345), Barrufet et al. (2022) described the selection and properties of dark galaxies with 4.44 \(\mu\)m to 1.6 \(\mu\)m flux ratios \(>8.3\) (based on the Caputi et al. 2012 selection from Spitzer and HST). They showed that these are very dusty galaxies extending over a wide range of redshifts (\(z=2\)-8). Although they suggested that their dark galaxies may be higher redshift, lower star formation rate (SFR) extensions of submillimeter/millimeter selected DSFGs, they did not match to the submillimeter/millimeter data in the field to relate their dark galaxies to DSFGs directly.
In the present paper, we demonstrate using observations of the massive lensing cluster field A2744 how ideally suited JWST NIRCam data are to finding DSFGs. The structure of the paper is as follows. In Section 2, we introduce the published datasets that we use in our analysis. In Section 3, we give our NIRCam color selection criteria that identify all of the known ALMA sources in the field. In Section 4, we use these criteria to find NIRCam counterparts to nearly all of the SCUBA-2 sources in the NIRCam-observed region, which allows us to obtain accurate positions for the SCUBA-2 sources. In Section 5, we invert this procedure and use our NIRCam color selected sample as priors to obtain deeper submillimeter measurements in the SCUBA-2 images. In Section 6, we use the photometric redshifts and magnifications of our NIRCam color selected sample to compare the far-infrared (FIR) luminosities to the rest-frame
Figure 1: SCUBA-2 850 \(\mu\)m image of the massive lensing cluster field A2744 from Cowie et al. (2022). The footprints of the JWST NIRCam F444W image released by Paris et al. (2023) (red) and the combined ALMA mosaics from AFFS and ALCS (green) are overlaid.
optical luminosities. In Section 7, we summarize our results.
We assume a cosmology of \(H_{0}=70.5\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm M}=0.27\), and \(\Omega_{\Lambda}=0.73\)(Larson et al., 2011) throughout.
## 2 Data
A2744 is one of the six Hubble Frontier Field clusters (HFF; Lotz et al., 2017). It was the target of multiple JWST NIRCam programs (JWST-ERS-1324, JWST-GO-2561, JWST-DDT-2756). The combined images and a catalog were released by Paris et al. (2023) for the GLASS team, and a catalog was released by Weaver et al. (2023) for the UNCOVER team.
A2744 has both deep SCUBA-2 observations from Cowie et al. (2022) and deep ALMA mosaics from the ALMA Lensing Cluster Survey (ALCS; 1.2 mm; Kohno, 2019; Fujimoto et al., 2023) and the ALMA Frontier Fields Survey (AFFS; 1.1 mm; Gonzalez-Lopez et al., 2017). The AFFS has a \(5\sigma\) threshold of 0.28 mJy. The very central regions of A2744 are relatively rich in luminous DSFGs. The mosaicked ALMA images, together with deeper follow-up ALMA observations (ALMA program #2017.1.01219.S; PI: F. Bauer), have yielded 9 \(>4.5\sigma\) ALMA sources. These are listed in Table 9 of Cowie et al. (2022), along with their known spectroscopic (specz) and photometric (photz) redshifts. Note that we have updated the specz of the second A2744 source in the Cowie et al. (2022) table from \(z=2.482\) to \(z=2.585\) based on the ALMA CO observations of F. Bauer (priv. comm.); see Kokorev et al. (2023) for a detailed analysis of this source.
In Figure 1, we show the areas covered by the JWST NIRCam F444W data (red) and the combined ALMA mosaics (green) overlaid on the SCUBA-2 850 \(\mu\)m matched filter image, which is described in Cowie et al. (2022).
## 3 JWST NIRCam Properties of ALMA Sources
We used the ALMA positions to find the counterparts in the JWST NIRCam images of Paris et al. (2023). As we show with thumbnails in Figure 2, all \(9>4.5\sigma\) ALMA sources have bright red NIRCam counterparts, which closely match in position, and in some cases shape, to the ALMA images (white contours). In Table 1, we summarize the NIRCam F444W and F150W (4.44 \(\mu\)m and 1.5 \(\mu\)m) fluxes for these sources. We also give the 850 \(\mu\)m and 450 \(\mu\)m fluxes measured from the SCUBA-2 images at the ALMA positions.
In Figure 3 (left), we plot \(f_{\rm F444W}/f_{\rm F150W}\) for the \(9>4.5\sigma\) ALMA sources (red squares) and for the full JWST NIRCam sample in the area covered by the combined ALMA mosaics (black dots). All 9 ALMA sources lie in the upper right corner defined by \(f_{\rm F444W}>1\)\(\mu\)Jy and \(f_{\rm F444W}/f_{\rm F150W}>3.5\) (red lines). In what follows, we will use this flux/color selection to identify our sample of _red selected galaxies_.
Some of the other sources lying in the red selected galaxy region are detected at lower significance in the ALMA data. After combining one galaxy (second thumbnail in top row of Figure 2) that is divided into 4 parts in the Paris et al. (2023) catalog, there are 16 JWST NIRCam sources that satisfy our red galaxy selection criteria. Of these, 10 have \(>3\sigma\) detections in the AFFS mosaic, which has a minimum 1.1 mm rms of 0.055 mJy. One of the \(9>4.5\sigma\) ALMA sources is not included in this number, since it was detected in longer follow-up ALMA observations (ALMA program #2017.1.01219.S; PI: F. Bauer). Thus, in total, we have 11 ALMA \(>3\sigma\) sources, or 69% of our red selected galaxy sample.
In determining our selection criteria, we used \(f_{\rm F444W}/f_{\rm F150W}>3.5\), which matches most closely to previous definitions of dark galaxies. (As we discussed in the Introduction, these have often used the Spitzer 4.5 \(\mu\)m to HST 1.6 \(\mu\)m flux ratio.) However, there may be other flux ratios we would want to consider.
The use of \(f_{\rm F444W}\) as our long-wavelength anchor is clear, since it is the reddest JWST NIRCam band, but as we illustrate in Figure 3 (right), another shorter wavelength band, such as F115W, could replace F150W. In this case, F444W/F115W \(>6\) (horizontal line) contains all \(9>4.5\sigma\) ALMA sources. There is only a slightly higher contamination level (non-ALMA sources above the horizontal line) than for \(f_{\rm F444W}/f_{\rm F150W}>3.5\) (non-ALMA sources to the right of the vertical line), but both selections are comparably effective. The figure also emphasizes that using multiple colors only marginally improves our selection.
Use of the F150W band also avoids any contamination by \(z\sim 10\) galaxies. Castellano et al. (2022) report the detection of seven such galaxies in the A2744 JWST NIRCam field. While these galaxies are fainter than our flux selection threshold in F444W with fluxes in the 0.03 to 0.35 \(\mu\)Jy range, by construction, they satisfy our red color threshold in \(f_{\rm F444W}/f_{\rm F115W}\) but not in \(f_{\rm F444W}/f_{\rm F150W}\). That is, they are flat at longer wavelengths and break at F115W.
## 4 JWST NIRCam Counterparts to SCUBA-2 Sources
We next wish to see if we can obtain accurate positions for low-resolution single-dish submillimeter/millimeter
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Name & \multicolumn{2}{c}{ALMA} & \multicolumn{2}{c}{SCUBA-2} & \multicolumn{2}{c}{JWST NIRCam} \\ & R.A. & Decl. & 850 \(\mu\)m & 450 \(\mu\)m & \(f_{\rm F444W}\) & \(f_{\rm F150W}\) & \(f_{\rm F444W}/f_{\rm F150W}\) \\ & \multicolumn{2}{c}{J2000.0} & \multicolumn{2}{c}{(mJy)} & \multicolumn{2}{c}{(\(\mu\)Jy)} & \multicolumn{2}{c}{(\(\mu\)Jy)} \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline
1 & 3.5825000 & -30.385473 & 1.82(0.27) & 3.07(2.93) & 7.21 & 0.73 & 9.84 \\
2 & 3.5764582 & -30.413166 & 6.37(0.27) & 13.6(3.01) & 5.32 & 0.84 & 6.27 \\
3 & 3.5850000 & -30.381777 & 3.27(0.28) & 9.30(2.97) & 20.2 & 4.40 & 4.60 \\
4 & 3.5732501 & -30.383472 & 3.37(0.27) & 21.8(3.01) & 29.2 & 7.34 & 3.97 \\
5 & 3.5796666 & -30.378389 & 1.67(0.29) & 5.38(3.04) & 19.5 & 1.87 & 10.4 \\
6 & 3.5720000 & -30.382944 & 4.18(0.27) & 20.7(3.03) & 44.0 & 5.38 & 8.17 \\
7 & 3.5920832 & -30.380472 & -0.3(0.29) & 4.88(3.02) & 8.20 & 0.22 & 36.4 \\
8 & 3.5812500 & -30.380196 & 1.78(0.28) & -6.5(3.01) & 3.32 & 0.13 & 25.0 \\
9 & 3.5824583 & -30.377167 & 0.49(0.30) & 8.30(3.07) & 2.14 & 0.35 & 5.96 \\ \hline \end{tabular} Note. – The columns are (1) source number, (2) and (3) R.A. and decl. of the JWST NIRCam counterpart, if there is one, or the 850 \(\mu\)m position otherwise, (4) 850 \(\mu\)m fluxes and errors (in parentheses) and (5) 450 \(\mu\)m fluxes and errors (in parentheses) measured at each NIRCam counterpart position (these are lower than the peak fluxes); for the two sources without NIRCam counterparts, the 850 \(\mu\)m and 450 \(\mu\)m fluxes are measured at the SCUBA-2 position, (6) F444W flux, (7) F150W flux, and (8) the ratio of these fluxes for the sources with NIRCam counterparts, and (9) ‘1’ if the source corresponds to a detected ALMA source in Table 1 and ‘0’ if it does not.
\end{table}
Table 1: SCUBA-2 and JWST NIRCam Fluxes of the \(9>4.5\sigma\) ALMA Millimeter Sources
Figure 2: Three-color JWST NIRCam images (blue = F115W, green = F150W, and red = F444W) for the 9 \(>\) 4.5\(\sigma\) ALMA sources in the A2744 field. The thumbnails are 12\({}^{\prime\prime}\) on a side, or \(\sim\) 100 kpc at \(z=2\). The ALMA continuum emission is shown with white contours. The redshifts are marked as spectroscopic (specz) or photometric (photz).
sources by finding their red selected galaxy counter-parts. We use the SCUBA-2 imaging of A2744, which has been slightly deepened over that presented in Cowie et al. (2022). The reduction, extraction, and cataloging follow that of Cowie et al. (2022), providing 850 \(\mu\)m and 450 \(\mu\)m imaging with central rms noise of 0.26 and 2.8 mJy, respectively. The noise quoted here is the white noise; we add a confusion noise of 0.37 mJy in quadrature when selecting sources from the 850 \(\mu\)m image.
As shown in Figure 1, the JWST NIRCam observations are mostly well positioned on the SCUBA-2 image, and here we focus on the SCUBA-2 area also observed by NIRCam. In this area, there are 19 850 \(\mu\)m sources (\(>5\sigma\)) stretching down to an 850 \(\mu\)m flux of 2.4 mJy. For each 850 \(\mu\)m source, we determine the nearest NIRCam source that satisfies our \(f_{\rm F444W}\) and \(f_{\rm F444W}/f_{\rm F150W}\) selection criteria. We find that 17 of the 19 850 \(\mu\)m sources have such counterparts within a 4\({}^{\prime\prime}\) match radius, the rough uncertainty in the 850 \(\mu\)m position. We show these counterparts in Figure 4, omitting those that are ALMA sources and hence already shown in Figure 2. We list all 19 sources in Table 2.
The full NIRCam area covers 172,052 arcsec\({}^{2}\), and there are 198 sources satisfying our selection criteria, giving a surface density of 0.0011 arcsec\({}^{-2}\). This corresponds to a probability of 0.057 of seeing such a source in a 4\({}^{\prime\prime}\) radius circle. Thus, we expect one false positive in our sample of 19 sources. Measurements of random positions in the field give a similar contamination rate. Consequently, nearly all 17 NIRCam identifications in Table 2 are real.
In combination, the SCUBA-2 sample with red color positions and the ALMA sample give 21 directly detected submillimeter/millimeter sources with accurate positions in the field.
## 5 JWST NIRCam Selection of Scuba-2 Sources
We can now invert the procedure of the previous two sections and use the color-selected JWST NIRCam sources as priors to probe more deeply in the SCUBA-2 image and to avoid the effects of confusion present in a direct search.
We restrict to the portion of the SCUBA-2 image where the 850 \(\mu\)m rms white noise is \(<0.5\) mJy (twice the central noise) and where the area is covered by the NIRCam footprint (see Figure 1). There are 17,620 NIRCam sources in this region with \(f_{\rm F444W}>0.01\)\(\mu\)Jy. Of these, 156 have \(f_{\rm F444W}>1\)\(\mu\)Jy and \(f_{\rm F444W}/f_{\rm F150W}\)\(>3.5\), satisfying our red selected galaxy criteria.
We take these 156 sources as our priors and measure the 850 \(\mu\)m flux and error at each NIRCam position in the SCUBA-2 image. We make the same measurement for all of the \(f_{\rm F444W}>0.01\)\(\mu\)Jy sources in the region, excluding the priors. In Figure 5, we show the distribution of measured 850 \(\mu\)m flux for the two populations.
It is clear from Figure 5 that there is a significant 850 \(\mu\)m flux associated with the priors, which have a mean 850 \(\mu\)m flux of 1.15 mJy, while the full F444W population has a mean of \(-0.01\) mJy and a standard deviation of 0.54 mJy. 58 of the priors have 850 \(\mu\)m fluxes \(>1.1\) mJy. However, not all of these measured 850 \(\mu\)m fluxes are real, with some being contaminated by the wings of neighboring 850 \(\mu\)m sources.
In order to deal with contamination and eliminate any double-counting, we adopt the following procedure: We identify the brightest 850 \(\mu\)m peak within 4\({}^{\prime\prime}\) from
Figure 3: (Left) \(f_{\rm F444W}/f_{\rm F150W}\) vs. \(f_{\rm F444W}\) for the JWST NIRCam sources that lie in the area covered by the ALMA data (black dots). The red lines delineate our selection region (\(f_{\rm F444W}>1\)\(\mu\)Jy and \(f_{\rm F444W}/f_{\rm F150W}>3.5\)). All \(9>4.5\sigma\) ALMA sources satisfy these criteria (red squares). (Right) \(f_{\rm F444W}/f_{\rm F150W}\) vs. \(f_{\rm F444W}/f_{\rm F115W}\) for sources with \(f_{\rm F444W}>1\)\(\mu\)Jy that lie in the area covered by the ALMA data (black dots). The \(9>4.5\sigma\) ALMA sources are again shown with red squares. The red vertical line shows our \(f_{\rm F444W}/f_{\rm F150W}>3.5\) selection, while the red horizontal line shows an alternate \(f_{\rm F444W}/f_{\rm F115W}>6\) selection. The solid portions of the lines delineate a selection region that uses both criteria, but the improvement is marginal.
Figure 4: Three-color JWST NIRCam images (blue = F115W, green = F150W, and red = F444W) for the \(>5\sigma\) SCUBA-2 sources with accurate NIRCam positions. Sources with already known accurate positions from ALMA (Figure 1) are not shown. The numbering and positions are given in Table 2. The thumbnails are \(12^{\prime\prime}\) on a side, or roughly 100 kpc at \(z=2\).
a prior. We then measure the flux at the position of this prior, convolve it with the SCUBA-2 matched-filter PSF, and subtract it to form a new cleaned image. We repeat this procedure in order of decreasing 850 \(\mu\)m flux, using the cleaned SCUBA-2 image until all of the priors are used. Finally, we measure the fluxes for the remaining priors that did not have flux measurements in the initial procedure at their source positions in the cleaned image. We compare the actual SCUBA-2 850 \(\mu\)m image in Figure 6 (left) with the final cleaned image in Figure 6 (right).
This procedure reduces the number of priors with \(>3\sigma\) 850 \(\mu\)m fluxes above 1.1 mJy to 43. It recovers all 17 directly detected sources with JWST NIRCam counterparts in Table 2. These 43 sources contain an 850 \(\mu\)m extragalactic background light (EBL) of 10.2 Jy deg\({}^{-2}\), which is about a quarter of the total EBL (Fixsen et al., 1998). In Figure 7, we show two examples of faint 850 \(\mu\)m sources found by using the priors.
Using the same cleaning procedure that Cowie et al. (2022) used for the direct SCUBA-2 search, we now search the cleaned image for additional 850 \(\mu\)m sources without priors, finding 21 with fluxes above 1.1 mJy (\(>3\sigma\)). We note that both sources 13 and 19 from Table 2 are contained in the 21 sources detected in the residual image. While not all of these 21 sources are necessarily real, combining them with the 43 found with priors gives a fraction of 67% that are picked out by our red selected galaxy priors.
Before proceeding, we note again (see Section 3) that the Paris et al. (2023) catalog contains a small number of cases where a single object is split into multiple components. Replacing these with single objects reduces the sample to 148. There are a further 4 objects that appear to be red stars (e.g., Nonino et al., 2023) based on the SExtractor (Bertin and Arnouts, 1996) star classifier and visual inspection. As expected, none of the stars are detected at 850 \(\mu\)m.
In summary, of our 144 non-star red selected galaxy priors, we find 43 with fluxes above 1.1 mJy (\(>3\sigma\)). There is one additional prior with a \(>3\sigma\) 850 \(\mu\)m detection whose flux is below 1.1 mJy. Thus, 30% of the priors have \(>3\sigma\) 850 \(\mu\)m counterparts.
In \(\sim 20\%\) of these 44 cases, the 850 \(\mu\)m flux could be associated with two priors. In the ALMA covered area, sources 4 and 6 in Table 1 (see Figure 2 for their thumbnails) provide such an example. This percolentage is slightly higher than the 13% (68% confidence range 7%-19%) of SCUBA-2 sources above 2.25 mJy (\(>4\sigma\)) in the GOODS-S that have multiple ALMA counterparts (Cowie et al., 2018). While we assign all of the 850 \(\mu\)m flux to the nearest prior, the other prior could be partially contributing. Allowing for this possibility could increase the percentage of the non-star red selected galaxy priors with \(>3\sigma\) 850 \(\mu\)m counterparts to 37%.
## 6 Discussion
In order to see how fainter red sources, with \(f_{\rm F444W}\)\(<1\)\(\mu\)Jy, might fit into our \(f_{\rm F444W}/f_{\rm F150W}>3.5\) selection, we now extend our \(f_{\rm F444W}\) limit down to 0.05 \(\mu\)Jy. We show this extended selection in Figure 8, where we mark the sources with \(>3\sigma\) 850 \(\mu\)m detections in red. For the rest of this discussion, we combine the small number of multiple component objects from the Paris et al. (2023) catalog into single objects. We also exclude the four red stars in the region (shown as star symbols in Figure 8). In order to avoid double-counting 850 \(\mu\)m detections, we exclude \(f_{\rm F444W}/f_{\rm F150W}>3.5\) sources that are closer than 4'' to a neighboring source with a brighter F444W flux. However, we note that none of the subsequent results are significantly affected if we remove this condition. Consistent with our previous selection, there are very few \(>3\sigma\) 850 \(\mu\)m detections of sources with \(f_{\rm F444W}<1\)\(\mu\)Jy (only 1 of 44; the one \(3.2\sigma\) source has \(f_{850\,\mu m}=1.4\) mJy).
The brightest F444W sources are more likely to be detected at 850 \(\mu\)m. There are 47 sources with \(f_{\rm F444W}\)\(>10\)\(\mu\)Jy. Of these, 26 (55%) have \(>3\sigma\) 850 \(\mu\)m detections. There is also a strong preference for the reddest sources to be detected at 850 \(\mu\)m, as might be expected if the extinction is higher in these galaxies. For our red selected galaxies with \(f_{\rm F444W}>1\)\(\mu\)Jy and \(f_{\rm F444W}/f_{\rm F150W}>3.5\), the sources with the highest \(f_{\rm F444W}/f_{\rm F150W}\) (\(>8.3\)) have 10 of 17 galaxies (59%) with \(>3\sigma\) 850 \(\mu\)m detections.
Figure 5: Distribution of the measured 850 \(\mu\)m flux for the 156 red selected galaxy priors (black histogram) and for the full F444W sample (red dashed histogram), renormalized to match the peak of the priors histogram.
While the general preference for the brightest F444W sources with the reddest colors to be 850 \(\mu\)m detected is clear, we would like to understand in more detail how \(f_{\rm F444W}/f_{\rm F150W}>3.5\) galaxies that are not 850 \(\mu\)m detected are related to those that are 850 \(\mu\)m detected. In order to carry out this analysis, we need redshift estimates and magnifications.
We searched the literature for speczs, but, given the extreme colors of these galaxies, only a very small number have them. The UNCOVER catalog (Weaver et al., 2023) gives photzs based on the EAZY code (Brammer et al., 2008) and magnifications based on the source positions and redshifts. However, it does not fully cover the JWST NIRCam image from Paris et al. (2023), which limits the sample. We adopt the Weaver et al. (2023) photzs, but for sources with photzs above 9, we set them to 9. Our final sample contains 166 \(f_{\rm F444W}>0.05\)\(\mu\)Jy and \(f_{\rm F444W}/f_{\rm F150W}>3.5\) galaxies with both photzs and
Figure 6: (Left) Deep portion of the SCUBA-2 850 \(\mu\)m image in the JWST NIRCam footprint. (Right) SCUBA-2 850 \(\mu\)m image after removing the 850 \(\mu\)m fluxes corresponding to the red selected galaxy priors.
Figure 7: Two examples of faint 850 \(\mu\)m sources found using the red selected galaxy priors. Both sources lie in the JWST NIRCam footprint but not in the combined ALMA mosaics from AFFS and ALCS. The underlying images are three-color JWST (blue = F115W, green = F150W, and red = F444W). The 850 \(\mu\)m image is shown with the white contours (0.6 and 1.2 mJy per beam) with the local peak marked with the diamond. The priors are shown with green squares. The pixels are 0\(\farcs\)03, and the fields are 30\(\arcsec\) on a side. The 850 \(\mu\)m flux is shown in the lower-left corner.
magnifications. Seven of these are X-ray sources in the Chandra catalog of Wang et al. (2016a) and appear to be luminous active galactic nuclei (AGNs).
We next compute the demagnified fluxes at a rest-frame wavelength of 5000 A (the longest rest-frame wavelength observed by JWST NIRCam at the highest redshifts), along with the corresponding demagnified luminosities, \(L_{\nu}^{d}(5000)\).
In Figure 9, we plot the photz versus \(L_{\nu}^{d}(5000)\). Nearly all of the \(>3\sigma\) 850 \(\mu\)m detected sources (marked in red) have \(L_{\nu}^{d}(5000)>5\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\) (vertical black line), which corresponds to \(\nu L_{\nu}^{d}(5000)>3\times 10^{43}\) erg s\({}^{-1}\). There are 37 sources with \(L_{\nu}^{d}(5000)<5\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\), of which only one has a \(>3\sigma\) 850 \(\mu\)m detection. In contrast, there are 129 sources with \(L_{\nu}^{d}(5000)>5\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\), 44 of which have \(>3\sigma\) 850 \(\mu\)m detections. The mean demagnified 850 \(\mu\)m flux (hereafter, we will use \(f_{850\,\mu\rm m}^{d}\) for the demagnified 850 \(\mu\)m flux) of the 129 sources is 0.57 mJy. However, nearly all of this is concentrated in the low-redshift population. For \(z<4\), the mean \(f_{850\,\mu\rm m}^{d}\) is 0.66 mJy, while for \(z>4\), it has dropped to 0.08 mJy.
Sources with \(f_{850\,\mu\rm m}^{d}=1\) mJy have a demagnified FIR (8-1000 \(\mu\)m) luminosity of \(L_{\rm FIR}^{d}\sim 4\times 10^{45}\) erg s\({}^{-1}\) at \(z>1\)(e.g., Cowie et al., 2017). We now characterize the sources with the parameter
\[R=\frac{(4\times 10^{45})f_{850\ \mu\rm m}^{d}\ (\rm in\,mJy)}{(6\times 10^{14} )L_{\nu}^{d}(5000)}\,. \tag{1}\]
This is the ratio of the FIR luminosity to the bolometric luminosity at rest-frame 5000 A. The \(R\) parameter will vary with extinction and SED, but for a typical ultra-luminous infrared galaxy such as Arp 220, \(R\) computed with the Silva et al. (1998) SED is 20. In contrast, the ratio for the luminosity in the 1000-40000 A range to the bolometric luminosity at rest-frame 5000 A for Arp 220 is 2.4. Thus, \(\sim 90\%\) of its light emerges in the FIR.
For sources with \(L_{\nu}^{d}(5000)>5\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\), the \(R\) parameter drops from a mean of 54 at \(z<4\) to 8 at \(z>4\). A Mann-Whitney test shows only a 0.0016 probability that the two samples are consistent. The rapid drop in the FIR luminosity relative to the bolometric luminosity at rest-frame 5000 A as one moves to higher redshifts suggests that the high-redshift galaxies have much less dust extinction.
In Figure 10, we show \(f_{850\,\mu\rm m}^{d}\) versus \(L_{\nu}^{d}(5000)\) for the two redshift ranges. We also show the mean \(f_{850\,\mu\rm m}^{d}\) in three \(L_{\nu}^{d}(5000)\) ranges (diamonds). We overlay the curves for \(R=30\) (gold) and \(R=6\) (purple), which provide reasonable fits to these mean values at \(z<4\) and \(z>4\), respectively.
From Figure 10, we can also see that the absence of \(>3\sigma\) 850 \(\mu\)m detections at fainter \(L_{\nu}^{d}(5000)\) is simply a
Figure 8: \(f_{\rm F444W}/f_{\rm F150W}\) vs. \(f_{\rm F444W}\) for the full sample in the JWST NIRCam plus SCUBA-2 footprint with \(f_{\rm F444W}>0.05\)\(\mu\)Jy (black dots). Open squares show sources that meet our \(f_{\rm F444W}/f_{\rm F150W}>3.5\) selection criterion, with those having \(>3\sigma\) 850 \(\mu\)m detections marked in red. For the \(f_{\rm F444W}/f_{\rm F150W}>3.5\) sources, we only show those that are more than 4\({}^{\prime\prime}\) from a brighter source satisfying this color criterion to avoid double-counting 850 \(\mu\)m detections. The small number of multiple component objects from the Paris et al. (2023) catalog have been combined into single objects. The four red stars in the region are shown with star symbols. None of the stars are detected at 850 \(\mu\)m.
Figure 9: Photz vs. \(L_{\nu}^{d}(5000)\) for the full sample in the JWST NIRCam plus SCUBA-2 footprint with photz, \(f_{\rm F444W}>0.05\)\(\mu\)Jy, and \(f_{\rm F444W}/f_{\rm F150W}>3.5\) (small squares). The small number of multiple component objects from the Paris et al. (2023) catalog have been combined into single objects. Sources with \(f_{\rm F444W}/f_{\rm F150W}>8.3\) are shown with larger symbols, while those with \(>3\sigma\) 850 \(\mu\)m detections are marked in red. We only show sources that are more than 4\({}^{\prime\prime}\) from a brighter source satisfying \(f_{\rm F444W}/f_{\rm F150W}>3.5\) to avoid double-counting 850 \(\mu\)m detections. We do not show the four red stars in the region.
selection effect--the 850 \(\mu\)m flux has likely become too faint to detect. At \(L_{\nu}^{d}(5000)=2.5\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\), we expect a mean \(f_{850\,\mu\rm{m}}^{d}\) of 0.1 mJy for \(R=30\), comparable to the minimum 3\(\sigma\) value reached in our measurements.
## 7 Summary
We showed that our JWST NIRCam red selection criteria of \(f_{\rm F444W}>1\)\(\mu\)Jy and \(f_{\rm F444W}/f_{\rm F150W}>3.5\) locates all of the known ALMA 1.1 mm and 1.2 mm sources and 17 of the 19 SCUBA-2 850 \(\mu\)m (\(>5\sigma\)) sources in the A2744 cluster field in the JWST NIRCam covered areas. Using these red selected galaxies as priors, we were able to probe much more deeply in the SCUBA-2 data, finding \(44>3\sigma\) 850 \(\mu\)m sources (this procedure recovers the 17 directly detected sources).
We analyzed this sample using photzs and gravitational lensing magnifications from the UNCOVER catalog of Weaver et al. (2023). All but one of the \(>3\sigma\) 850 \(\mu\)m detections lie at \(z<4\), and all but one have demagnified luminosities at a rest-frame wavelength of 5000 A of \(L_{\nu}^{d}(5000)>5\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\).
The redshift dependence in the \(>3\sigma\) 850 \(\mu\)m detections appears to be a result of a significant decrease in extinction at the higher redshifts. Parameterizing this with the quantity \(R\), which is the ratio of the FIR luminosity to the bolometric luminosity at rest-frame 5000 A, we find a drop of around 5 between \(z<4\) and \(z>4\). The drop means that the high-redshift sources have less dust and reradiate less of their light into the FIR.
In contrast, the \(L_{\nu}^{d}(5000)\) dependence appears to be a simple sensitivity issue, with the sources \(<5\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\) being too faint to be detected in the SCUBA-2 850 \(\mu\)m image.
We gratefully acknowledge support for this research from a Kellett Mid-Career Award and a WARF Named Professorship from the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation (A. J. B.) and NASA grant NNX17AF45G (L. L. C.). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2013.1.00999.S, ADS/JAO.ALMA#2015.1.01425.S, ADS/JAO.ALMA#2017.1.01219.S, and ADS/JAO.ALMA#2018.1.00035.L. ALMA is a partnership of ESO (representing its member states), NSF (USA),and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan, Academia Sinica
Figure 10: \(f_{850\,\mu\rm{m}}^{d}\) vs. \(L_{\nu}^{d}(5000)\) for the (a) \(z<4\) and (b) \(z>4\) full sample in the JWST NIRCam plus SCUBA-2 footprint with photzs, \(f_{\rm F444W}>0.05\)\(\mu\)Jy, and \(f_{\rm F444W}/f_{\rm F150W}>3.5\) (small squares). The small number of multiple component objects from the Paris et al. (2023) catalog have been combined into single objects. X-ray sources are enclosed in green squares. Sources with \(f_{\rm F444W}/f_{\rm F150W}>8.3\) are shown with larger symbols, while those with \(>3\sigma\) 850 \(\mu\)m detections are marked in red. We only show sources that are more than 4\({}^{\prime\prime}\) from a brighter source satisfying \(f_{\rm F444W}/f_{\rm F150W}>3.5\) to avoid double-counting 850 \(\mu\)m detections. We do not show the four red stars in the region. The solid diamonds show the mean values in the \(L_{\nu}^{d}(5000)\) ranges \(10^{28}\) to \(5\times 10^{28},5\times 10^{28}\) to \(1.5\times 10^{29}\), and \(1.5\times 10^{29}\) to \(4\times 10^{29}\) erg s\({}^{-1}\) Hz\({}^{-1}\). The curves correspond to \(R=30\) (gold) and \(R=6\) (purple) (see Equation 1).
Institute of Astronomy and Astrophysics, the Korea Astronomy and Space Science Institute, the National Astronomical Observatories of China and the Chinese Academy of Sciences (Grant No. XDB09000000), with additional funding support from the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. We wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. ALMA, JCMT
|
2310.07521 | Survey on Factuality in Large Language Models: Knowledge, Retrieval and
Domain-Specificity | This survey addresses the crucial issue of factuality in Large Language
Models (LLMs). As LLMs find applications across diverse domains, the
reliability and accuracy of their outputs become vital. We define the
Factuality Issue as the probability of LLMs to produce content inconsistent
with established facts. We first delve into the implications of these
inaccuracies, highlighting the potential consequences and challenges posed by
factual errors in LLM outputs. Subsequently, we analyze the mechanisms through
which LLMs store and process facts, seeking the primary causes of factual
errors. Our discussion then transitions to methodologies for evaluating LLM
factuality, emphasizing key metrics, benchmarks, and studies. We further
explore strategies for enhancing LLM factuality, including approaches tailored
for specific domains. We focus two primary LLM configurations standalone LLMs
and Retrieval-Augmented LLMs that utilizes external data, we detail their
unique challenges and potential enhancements. Our survey offers a structured
guide for researchers aiming to fortify the factual reliability of LLMs. | Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang, Linyi Yang, Jindong Wang, Xing Xie, Zheng Zhang, Yue Zhang | 2023-10-11T14:18:03Z | http://arxiv.org/abs/2310.07521v3 | # Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity
###### Abstract
This survey addresses the crucial issue of factuality in Large Language Models (LMMs). As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital. We define the "factually issue" as the probability of LLMs to produce content inconsistent with established facts. We first delve into the implications of these inaccuracies, highlighting the potential consequences and challenges posed by factual errors in LLM outputs. Subsequently, we analyze the mechanisms through which LLMs store and process facts, seeking the primary causes of factual errors. Our discussion then transitions to methodologies for evaluating LLM factuality, emphasizing key metrics, benchmarks, and studies. We further explore strategies for enhancing LLM factuality, including approaches tailored for specific domains. We focus two primary LLM configurations--standalone LLMs and Retrieval-Augmented LLMs that utilizes external data--we detail their unique challenges and potential enhancements. Our survey offers a structured guide for researchers aiming to fortify the factual reliability of LLMs. We consistently maintain and update the related open-source materials at [https://github.com/wangcunxiang/LLM-Factuality-Survey](https://github.com/wangcunxiang/LLM-Factuality-Survey).
Large language Models, Factuality, Knowledge, Retrieval, Domain-Specificity, Evaluation, Enhancement
## 1 Introduction
The quest for mastery of knowledge has been a foundational aspiration in the development of artificial intelligence systems. Historically, seminal works by McCarthy et al. (1955) and Newell and Simon (1976) have underscored the significance of knowledge representation and reasoning in AI systems. For instance, the Cyc project embarked on an ambitious journey to codify common-sense knowledge, aiming to provide AI systems with a comprehensive understanding of the world (Lenat, 1995). Concurrently, endeavors like the WordNet project by Miller et al. (1990) sought to create lexical databases that capture semantic relationships between words, thereby aiding AI systems in grasping the nuances of human language.
Amidst these pioneering efforts, the emergence of Large Language Models (LLMs), such as ChatGPT (OpenAI, 2022b), GPT-4 (OpenAI, 2023) and LLMa(Touvron et al., 2023a,b), has been seen as a significant leap in both academics and industries, especially towards AI systems possessing vast factual knowledge (De Cao et al., 2021; OpenAI, 2023; Petroni et al., 2019a). The advantages of using LLMs as knowledge bases carriers are manifold. Firstly, they reduce the overhead and costs associated with building and maintaining dedicated knowledge bases (AlKhamissi et al., 2022; Petroni et al., 2019c; Wang et al., 2023b). Additionally, LLMs offer a more flexible approach to knowledge processing and utilization, allowing for context-aware reasoning and the ability to adapt to novel information or prompts (Huang and Chang, 2023; Sun et al., 2023a). Yet, with their unparalleled capabilities, concerns have arisen about the potential of LLMs to generate non-factual or misleading content (Bender et al., 2021; Bubeck et al., 2023; OpenAI, 2023). In light of these advancements and challenges, this survey seeks to delve deeply into the LLMs, exploring both their potential and the concerns surrounding their factual accuracy.
Understanding the factuality of Large Language Models is more than just a technical challenge; it's essential for the responsible use of these tools in our daily lives. As LLMs become more integrated into services like search engines (Microsoft, 2023), chatbots (Google, 2023; OpenAI, 2022b), and content generators (Cui et al., 2023b), the information they provide directly influences decisions, beliefs, and actions of millions of people. If an LLM provides incorrect or misleading information, it can lead to misunderstandings, spread false beliefs, or even cause harm, especially for those domains that demand high factual accuracy (Ling et al., 2023b), such as health (Tang et al., 2023; Thirunavukarasu et al., 2023), law (Huang et al., 2023a), and finance (Wu et al., 2023). For instance, a physician relying on an LLM for medical guidance might inadvertently jeopardize patient health, a corporation leveraging LLM insights might make ill-informed market decisions, or an attorney misinformed by an LLM might falter in legal proceedings (Curran et al.,
2023). In addition, with the advancement of LLM-based agents, the factuality of LLMs is becoming even more potent. A driver or an autonomous driving car might rely on LLM-based agents for planning or driving, where serious
Fig. 1: Taxonomy of research on factuality in Large Language Models that consists of the issue, evaluation, analysis and enhancement.
factual mistakes made by LLMs could cause irreversible damage. By studying the factuality of LLMs, we aim to ensure that these models are both powerful and trustworthy.
A surge of research has been directed towards evaluating LLMs' factuality, which encompasses diverse tasks like factoid question answering and fact checking. Beyond evaluation, efforts to improve the factual knowledge of LLMs have been notable. Strategies have ranged from retrieving information from external knowledge bases to continual pretraining and supervised finetuning. Yet, despite these burgeoning efforts, a holistic overview that covers the full spectrum of factuality in LLMs remains elusive. While there are existing surveys in the field, such as those by Chang et al. (2023) and Wang et al. (2023), that delve into the evaluation of LLMs and their factuality, they only scratch the surface of the broader landscape. There are also a bunch of recent studies focusing on hallucinations in LLMs (Rawte et al., 2023, 2023). But we differentiate between the hallucination issue and the factuality issues in Sec 2.2. Moreover, these surveys often overlook key areas we emphasize, like domain-specific factuality or the challenge of outdated information. While Ling et al. (2023) explores domain specialization in LLMs, our survey takes a more expansive look at the broader issues of factuality. To the best of our understanding, our work is the first comprehensive study on the factuality of large language models.
This survey aims to offer an exhaustive overview of the factuality studies in LLMs, delving into four key dimensions: Sec 2) The definition and impact of the factuality issue (Nori et al., 2023; Pranshu Verma, 2023); Sec 3) Techniques for evaluating factuality and its quantitative assessment (Huang et al., 2023; Min et al., 2023); Sec 4) Analyzing the underlying mechanisms of factuality in LLMs and identifying the root causes of factual errors (Kotha et al., 2023; Liu et al., 2023); and Sec 5) Approaches to enhance the factuality of LLMs (Du et al., 2023; He et al., 2022). Notably, we categorize the use of LLMs into two primary settings: LLMs without external knowledge, such as ChatGPT (OpenAI, 2022b) and Retrieval-Augmented LLMs, such as BingChat (Microsoft, 2023). The complete structure of this survey is illustrated in Figure 1. Through a detailed examination of existing research, we seek to shed light on this critical aspect of LLMs, helping researchers, developers, and users harness the power of these models responsibly and effectively.
## 2 Factuality Issue
In this section, we describe the issue of factuality in large language models, as well as the impact.
### _Large Language Models_
There is no well-accepted and exact definition of large language models in the literature (Chang et al., 2023; Huang and Chang, 2022; Zhao et al., 2023). We mainly consider the decoder-only generative pre-trained language modes with emergent abilities, such as ChatGPT (OpenAI, 2022b) and LLaMA (Touvron et al., 2023, 2023). We also include some work that is based on models with encoder-decoder architectures, such as TS (Raffel et al., 2020). We do not talk about work that is only based on the Encoder-only models, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), in this survey. To be specific, our survey includes the following LLMs:
* [leftmargin=*,noitemsep,nolistsep]
* **General Domain LLMs:** GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020), ChatGPT (OpenAI, 2022b), GPT-4 (OpenAI, 2023), GPT-Neo (Black et al., 2021), GPT (Zhang et al., 2022b), LLaMA (Touvron et al., 2023), LLaMA-2 (Touvron et al., 2023), lucite (Computer, 2023; Together, 2023), Claude (Cla), Falcon (Almazrouei et al., 2023), MPT Team (2023b), Vicuna (Chiang et al., 2023), FLAN-T5 Chung et al. (2022), BLOOM (Scao et al., 2022), Baichuan & Baichuan2 (Yang et al., 2023), PaLM (Chowdhery et al., 2022), Gopher (Rae et al., 2022a), Megatron-LM (Shoeybi et al., 2019), SAIL (Luo et al., 2023), Codex (Chen et al., 2021), Bard (Google, 2023), GLM & ChatGLM (Zeng et al., 2022), InternLM (Team, 2023), StableBelauga (Mahan et al., Claude (Cla), Alpaa (Taori et al., 2023), New Bing (Microsoft, 2023), Ziya-LLaMA (Zhang et al., 2022a), BLOOMZ (Munenighoff et al., 2022), Chinese-LLaMA (Cui et al., 2023c), Phoenix (Chen et al., 2023), and others.
* **Domain-specify LLMs:** BloombergGPT (Wu et al., 2023), EcomGPT (Li et al., 2023), BioGPT (Luo et al., 2022), LawGPT (Nguyen, 2023), Lawyer LLaMA (Huang et al., 2023), ChatLaw (Cui et al., 2023), BioMedLM (Venigalla et al., 2022), HuatuoGPT (Zhang et al., 2023), ChatDoctor (Li et al., 2023), MedicalGPT (Xu, 2023), Bentsao (Huatuo as its original name (Wang et al., 2023c), Zhongjing (Yang et al., 2023c), LLM-AMT (Wang et al., 2023), DISC-MedLLM (Bao et al., 2023), Cohortgpt (Guan et al., 2023), Deid-gpt (Liu et al., 2023), Doctorglm (Xiong et al., 2023), MedChatZH (Tan et al., 2023), K2 (Deng et al., 2023), HouYi (Bai et al., 2023), GrammarGPT (Fan et al., 2023), FoodGPT (Qi et al., 2023), ChatHome (Wen et al., 2023), and others.
### _Factuality_
By factuality in LLMs we refer to the capability of large language models for generating contents that follow factual information, which encompasses commonsense, world knowledge and domain facts. The source of such factual information can be grounded to dictionaries, Wikipedia or textbooks from different domains 1. A series of work have discussed whether LLMs can serve as knowledge bases to store factual knowledge (AlKhamissi et al., 2022; Pan et al., 2023; Yu et al., 2023)
Footnote 1: We only consider cases where the meaning is clear and the truthfulness can be determined in this survey. Furthermore, we only take into account undigurted facts. Minor errors that may exist in reliable sources are not within the scope of our consideration.
Existing work focus on measuring factuality in LLMs qualitatively (Chern et al., 2023; Lin et al., 2022b), discussing the mechanism for storing knowledge (Chen et al., 2023; Meng et al., 2022) and tracing the source of knowledge issues (Gou et al., 2023; Kandpal et al., 2023). The factuality issue for LLMs receive relatively the most attention. Several instances are shown in Table I. For instance, an LLM might be deficient in domain-specific factual knowledge, such as medicine or law domain. Additionally, the LLM might be
unaware of facts that occurred post its last update. There are also instances where the LLM, despite possessing the relevant facts, fails to reason out the correct answer. In some cases, it might even forget or be unable to recall facts it has previously learned. The factuality problem is closely related to several hot topics in the field of Large Language Models, including **Hallucinations**(Ji et al., 2023a; Zhao et al., 2023b), **Outdated Information**(Nakano et al., 2022; Qin et al., 2023), and **Domain-Specificity**(e.g., Health (Wang et al., 2023c; Xiong et al., 2023a), Law (Cui et al., 2023b), Finance (Wu et al., 2023)). At their core, these topics address the same issue: the potential for LLMs to generate contents that contradicts certain facts, whether those contents arise out of thin air, outdated information, or a lack of domain-specific knowledge.
Therefore, we consider these three topics to fall within the scope of the factuality problem. However, it is important to note that while these topics are related, they each have a unique focus. Both hallucinations and factuality issues in LLMs pertain to the accuracy and reliability of generated
\begin{table}
\begin{tabular}{p{42.7pt} p{142.3pt} p{142.3pt}} \hline \hline Category & Cause & Example Dialog & Notes and references \\ \hline \multirow{4}{*}{Model-level causes} & Domain knowledge deficit & **Q:** CEO of Assicurazioni General? & BloombergGPT is a finance \\ & & **GPT-NeoX:** Antonio De Lorenzo, Simone Gambarini, Enrico Zanetti & domain-specific language model. Wu et al. (2023) \\ & & **FLAN-T5-XXL:** John M Forsyth, Christopher K Peters, \{empty string\} & \\ \cline{3-4} & **Outdated information** & **Qu:** When was Kyiv attacked by Russia? & Kyiv was attacked by Russia on 25 \\ \cline{3-4} & & **ChatGPT:**As of my last knowledge update in September 2021, Russia had not launched an attack on Kyiv. & February 2022. \\ \cline{3-4} & & **Q:** Who is Tom Cruise’s mother? & From Berglund et al. (2023). \\ \cline{3-4} & & **A:** Mary Lee Pfeiffer & It is clear that the model knows \\ \cline{3-4} & & **Q:** Who is Mary Lee Pfeiffer’s mother is Lee Pfeiffer, but it fails to reason that Lee Pfeiffer has a son named Tom Cruise. \\ \hline \hline \end{tabular}
\end{table} TABLE I: Examples of different kinds of factual error produced by large language models. We category the factual error types by the causes of them, whose details can be found in Sec 4.2
content, they address distinct aspects. Hallucinations primarily revolve around LLMs generating baseless or unwarranted content. Drawing from definitions by Ji et al. (2023); OpenAI (2023), hallucinations can be understood as the model's inclination to "produce content that is nonsensical or untruthful in relation to certain sources." This is different from factuality concerns, which emphasize the model's ability to learn, acquire, and utilize factual knowledge. To illustrate the distinction: If an LLM, when prompted to craft "a fairy tale about a rabbit and a wolf making friends," produces a tale about "a rabbit and a dog becoming friends," it's exhibiting hallucination. However, this isn't necessarily a factuality error. If the generated content contains accurate information but diverges from the prompt's specifics, it's a hallucination but not a factuality issue. For instance, if the LLM's output includes more details or different elements than the prompt specifies but remains factually correct, it's a case of hallucination. Conversely, if the LLM avoids giving a direct answer, states "I don't know," or provides a response that's accurate but omits some correct details, it's addressing factuality, not hallucination. Furthermore, it's worth noting that hallucination can sometimes produce content that, while deviating from the original input, remains factually accurate. For a more structured comparison between the factuality issue and hallucination, refer to Table II. Outdated information, on the other hand, focuses on instances where previously accurate information has been superseded by more recent knowledge. Lastly, domain-specificity emphasize the generation of content that requires specific, specialized knowledge. Despite these differences, all three topics contribute to our understanding of the broader factuality problem in LLMs.
**Setting:** In this survey, our primary focus is on two specific settings: 1. Standard LLMs: Directly using LLMs for answering and chatting (OpenAI, 2022, 2023); 2. Retrieval-Augmented LLMs: The retrieval-augmented generation (Liu, 2022; Microsoft, 2023). The latter is of particular interest as retrieval mechanisms are among the most prevalent methods for enhancing the factuality of LLMs. This involves not just generating accurate responses but also correctly selecting pertinent knowledge snippets from the myriad of retrieved sources.
While summarization tasks--where the goal is to produce summaries that stay true to the source input--have seen research on factuality (Maynez et al., 2020; Tam et al., 2023; Tang et al., 2022), we opted not to focus heavily on this domain in our survey. There are a few reasons for this decision. Firstly, the source inputs for summarization often contain content that is not factual. Secondly, summarization introduces unique challenges like ensuring coherence, conciseness, and relevance, which deviate from the focus of this survey. It is also worth noting that Pu et al. (2023) found LLMs to produce fewer factual errors or hallucinations compared to humans across various summarization benchmarks. However, we will still discuss some works in this area, particularly those that overlap with retrieval settings.
### _Impact_
The factuality problem significantly impacts the usability of LLMs. Some of these issues have even led to losses at the societal or economic level (Pranshu Verma, 2023; Sands, 2023), drawing the attention of many users, developers, and researchers (Ji et al., 2023).
Factuality issues also impacted the legal field, with a lawyer in the United States facing sanctions for submitting hallucinated case law in court. One court has mandated that lawyers indicate the portions generated by generative AI in their submitted materials (Curran et al., 2023). In addition, as part of a research study, a fellow lawyer asked ChatGPT to generate a list of legal scholars with a history of sexual harassment. ChatGPT generated a list that included a law professor. ChatGPT claimed that the professor attempted to touch a student on a class tripnd referenced an article from The Washington Post in March 2018. However, the fact is that this article does not exist, nor does the mentioned class trip (Pranshu Verma, 2023). Besides, a mayor in Australia, discovered false claims made by ChatGPT stating that he was personally convicted of bribery, confessed to charges of bribery and corruption, and received a prison sentence. In response, he plans to initiate legal action against the company responsible for ChatGPT, accusing them of defamation for disseminating untrue information about him. This could
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt}} \hline \hline Factual and Non-Hallucinated & Factually correct outputs. \\ \hline Non-Factual and Hallucinated & Entirely fabricated outputs. \\ \hline \hline \multirow{7}{*}{Hallucinated but Factual} & 1. Outputs that are unfaithful to the prompt but remain factually correct (Cao et al., 2022). 2. Outputs that deviate from the prompt’s specifics but don’t touch on factuality, e.g., a prompt asking for a story about a rabbit and wolf becoming friends, but the LLM produces a tale about a rabbit and a dog befefinding each other. 3. Outputs that provide additional factual details not specified in the prompt, e.g., a prompt asking about the capital of France, and the LLM responds with ”Paris, which is known for the Effort Tower.” \\ \hline \multirow{7}{*}{Non-Factual but Non-Hallucinated} & 1. Outputs that are unfaithful to the prompt but remain factually correct (Cao et al., 2022). 2. Outputs that deviate from the prompt’s specifics but don’t touch on factuality, e.g., a prompt asking for a story about a rabbit and wolf becoming friends, but the LLM produces a tale about a rabbit and a dog befefinding each other. 3. Outputs that provide additional factual details not specified in the prompt, e.g., a prompt asking about the capital of France, and the LLM responds with ”Paris, which is known for the Effort Tower.” \\ \hline \multirow{7}{*}{Non-Factual but Non-Hallucinated} & 1. Outputs where the LLM states”I don’t know” or avoids a direct answer. 2. Outputs that are partially correct, e.g., for the question, ”Who landed on the moon with Apollo 112” If the LLM responds with just ”Neil Armstrong,” the answer is incomplete but not hallucinated. 3. Outputs that provide a generalized or vague response without specific details, e.g., for a question about the causes of World War II, the LLM might respond with ”It was due to various political and economic factors.” \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison between the factuality issue and the hallucination issue.
potentially be the first defamation case of its kind involving an artificial intelligence chatbot (Sands, 2023).
A recent study (Nori et al., 2023) provides a comprehensive evaluation of GPT-4's performance in medical competency examinations and benchmark datasets. The evaluation utilizes the text-only version of GPT-4 and investigates its ability to address medical questions without any training or fine-tuning. The assessment is conducted using the United States Medical Licensing Examination (USMLE) Kung et al. (2023) and the MultiMedQA benchmark Singhal et al. (2023), comparing GPT-4's performance against earlier models like GPT-3.5 and models specifically fine-tuned on medical knowledge. The results demonstrate that GPT-4 significantly outperforms its predecessors, achieving scores on the USMLE that exceed the passing threshold by more than 20 points and delivering the best overall performance without specialized prompt crafting or domain-specific fine-tuning.
While large language models show promise on medical datasets, the introduction of automation in the healthcare field still requires extreme caution Shen et al. (2023). Existing metrics and benchmarks are often developed for highly focused problems. Evaluating LLM outputs in supporting real-world decision-making poses challenges Singhal et al. (2023), including the stability and robustness of personalized recommendations and inferences in real-world contexts. Using large language models carries significant risks, including inaccurate ranking recommendations Hirosawa et al. (2023) (e.g., differential diagnosis) and sequencing Zhang et al. (2023c) (e.g., information gathering and testing), as well as factual errors Shen et al. (2023), particularly important omissions and erroneous responses.
## 3 Factuality Evaluation
Evaluating the factuality of LLMs is pivotal for ensuring the reliability and trustworthiness of their generated content (Lee et al., 2022; Pezeshkpour, 2023). As LLMs become increasingly integrated into various applications, from information retrieval to content generation, the accuracy of their outputs becomes paramount. In this section, we delve into the evaluation metrics and benchmarks used for assessing the factuality of LLMs, studies that have undertaken such evaluations, and domain-specific evaluation.
### _Factuality Evaluation Metrics_
In this subsection, we delve into metrics established for evaluating the factuality of LLMs. As the problem formulation is akin to natural language generation (NLG) (Celikyilmaz et al., 2021; Ji et al., 2023b), we introduce several automatic evaluation metrics typically used for NLG, as well as specifically examining the metrics for factuality.
We categorize these metrics into the following groups: (1) Rule-based evaluation metrics; (2) Neural evaluation metrics; (3) Human evaluation metrics; and (4) LLM-based evaluation metrics. We list those metrics in Table III.
#### 3.1.1 Rule-based evaluation metrics
Most assessments of factuality in large language models use rule-based evaluation metrics, due to their consistency, predictability, and ease of implementation; they allow for reproducible outcomes through a systematic method. However, they can be rigid and may not account for nuances or variations in language use, context interpretation, or colloquial expressions. This means language models rated highly by these metrics may still produce content that feels unnatural or inauthentic to human readers.
**Exact Match:** An "exact match" refers to a situation where the generated text precisely matches a specific input or reference text. This means that the LLM produces output that is identical, word-for-word, to the provided input or reference text. Exact matches are often used in NLG when you want to replicate or repeat a piece of text without any variations or alterations.
**Common metrics:** Many factuality evaluation measurements use commonly adapted metrics, such as Accuracy, Precision, Recall, AUC, F-Measure, Calibration score, Brier score, and other common metrics used in probabilistic forecasting and machine learning, specifically in tasks involving probabilistic predictions. The common definition of these metrics involves the use of correctly predicted labels and ground-truth labels. As the input and output of LLMs are human-readable sentences, there is no unified method to convert the sentences into labels. Most evaluation will define their own way. That is, these scores are frequently not used in isolation, but rather in combination. For instance, BERTScore (Zhang* et al., 2020) uses BERT for determining the Precision and Recall, then uses F-measures to get a final weighted score. In the following, we will describe the most simple form of these scores.
The _Calibration score_, used in (Kadavath et al., 2022; Lin et al., 2022a) measures the agreement between predicted probabilities and observed frequencies. A perfectly calibrated model should, over a large number of instances, see the predicted probability of an outcome match the relative frequency of that outcome.
The _Brier Score_, used in (Kadavath et al., 2022) is a metric used in probabilistic forecasting to measure the accuracy of probabilistic predictions. It calculates the mean squared difference between the predicted probability assigned to an event and the actual outcome of the event. The Brier Score ranges from 0 to 1, where 0 indicates a perfect prediction and 1 indicates the worst possible prediction. In other words, the lower the Brier Score, the better the accuracy of the prediction. It's worth noting that this metric is appropriate for binary and categorical outcomes, but not for ordinal outcomes. For binary outcomes, the Brier Score can be calculated as follows:
\[BS=\frac{1}{N}\sum_{i=1}^{N}(forecast_{i}-actual_{i})^{2} \tag{1}\]
where \(forecast_{i}\) is the predicted probability, \(actual_{i}\) is the actual outcome (0 or 1), and \(N\) is the total number of forecasts made.
**MC1 (Single-true) and MC2 (Multi-true):** are widely recognized metrics in multi-choice question answering, particularly in TruthfulQA (Lin et al., 2022b). MC1: For a given question accompanied by several answer choices, the objective is to identify the sole correct answer. The model's selection is determined by the answer choice to
which it allocates the highest log probability of completion, independent of the other choices. The score is calculated as the straightforward accuracy across all questions. MC2: Presented with a question and multiple reference answers labeled as true or false, the score is derived from the normalized total probability assigned to the collection of true answers.
**BLEU:**(Papineni et al., 2002), also known as Bilingual Evaluation Understudy metric is commonly employed in the context of factuality evaluation. This metric calculates the co-occurrence frequency of phrases in two sentences, based on a weighted average of matched n-gram phrases. This helps in quantitatively assessing the factual consistency between the generated text and its reference. The BLEU score is computed as
\[BLEU=BP*\exp\left(\frac{1}{N}*\sum_{n=1}^{N}P_{n}\right) \tag{2}\]
where _(1)_\(BP\) (Brevity Penalty) accounts for the length of the candidate translation. If the candidate translation is shorter than the reference, \(BP\) is less than 1, which reduces the BLEU score. _(2)_\(P_{n}\) is the n-gram precision, which measures the portion of n-grams in the candidate translation that are also present in the reference translation. _(3)_\(N\) is the maximum order of n-grams considered (commonly up to 4 in much of the literature) and \(\sum_{n=1}^{N}P_{n}\) is the sum of \(\log P_{n}\) for \(n\) from 1 to \(N\). - \(\exp\) denotes the exponential function.
**ROUGE:** The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric (Lin, 2004) serves as a measure of the similarity between the generated text and the reference text, with the similarity grounded on recall scores. Primarily used in the field of text summarization, the ROUGE metric incorporates four distinct types. These include ROUGE-n, which assesses n-gram co-occurrence statistics, and ROUGE-l, which measures the longest common subsequence. ROUGE-w provides evaluation based on the weighted longest common subsequence, while ROUGE-s measures skip-bigram co-occurrence statistics. These diverse metrics collectively provide a comprehensive measure of the factual accuracy of generated text. ROUGE score can be calculated in various ways based on the length of n-grams (unigram, bigram, etc.) used. The simplest version, ROUGE-n score can be compactly represented as:
\[\text{ROUGE-n}=\frac{\sum_{S\in RS}\sum_{gram_{n}\in S}Count_{ match}(gram_{n})}{\sum_{S\in RS}\sum_{gram_{n}\in S}Count(gram_{n})} \tag{3}\]
where: _(1)_\(RS\) denotes the set of Reference Summaries, _(2)_\(gram_{n}\)_ represents an n-gram within the reference summary _S, (3)_\(Count_{match}(gram_{n})\) signifies the number of times that n-gram \(gram_{n}\) appears in both the generated text and the reference summary, _(4)_\(Count(gram_{n})\) is the frequency of occurrence of the n-gram \(gram_{n}\) within the reference summary.
**METEOR:** The Metric for Evaluation of Translation with Explicit Ordering (METEOR) (Banerjee and Lavie, 2005) aims to address several shortcomings presented by BLEU. These include deficiencies in recall, the absence of higher order n-grams, an absence of explicit word-matching between the generated and reference text, and the use of geometric averaging of n-grams. METEOR overcomes these by introducing a comprehensive measure, calculated based on the harmonic mean of the unigram precision and recall.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Evaluation Type** & **Metric** & **Description, Purpose/Usage, and Reference/Findings** \\ \hline \multirow{6}{*}{Rule-based} & Exact Match & Correct matches between generated and input/reference text. \\ & & **Accuracy**: Measures overall correctness of model prediction. \\ & Common Metrics & **Precision**: Ability of model to classify instances positively. \\ & (Accuracy, Precision, Recall, & **Recall**: Ability of model to identify all positive instances. \\ & F-Measureure, Calibration, Brier) & **F-Measure**: Combines both precision and recall into a single score. \\ & & **Calibration**: Agreement between predicted probabilities and observed frequencies. \\ & & **Brier**: Measures accuracy of probabilistic predictions. \\ & MC1 and MC2 (Lin et al., 2022b) & Used to identify the correct answer from multi-choice questions. \\ & BLEU (Papineni et al., 2002) & Determines frequency of phrase co-occurrence in two sentences. \\ & ROUGE (Lin, 2004) & Calculates similarity between reference and generated text. \\ & METEOR(Lowe et al., 2017) & Comprehensive measure based on harmonic mean of unigram precision and recall. \\ & QUIP-Score (Weller et al., 2023) & Calculates how much of generated text consists of exact spans found in a text corpus. \\ \hline \multirow{6}{*}{Neural} & ADEM Lowe et al. (2017) & Uses a Hierarchical RNN to evaluate the quality of responses in language models. \\ & BERTScore (Zhang et al., 2020) & Pre-trained embeddings from BERT used to evaluate sentence similarity. \\ & BLEURT (Sellam et al., 2020) & Trains BERT on larger dataset and rate similarity of sentences. \\ & BATScore (Vuan et al., 2021) & Evaluates quality using pre-trained sequence-to-sequence models. \\ \hline \multirow{3}{*}{Human} & AIS, Auto-AIS & \\ & (Rashkin et al., 2023) & Evaluates if output is backed by evidence. \\ & (Gao et al., 2023a) & \\ & FACIScore (Min et al., 2023) & Measures factual accuracy of LLMs breakpointing generated content into atomic facts. \\ \hline \multirow{6}{*}{LLM-based} & GPTScore (Fu et al., 2023) & Evaluates quality of AI output. It is efficient and avoids the annotation requirement. \\ & GPT-judge (Lin et al., 2022b) & Evaluates truthfulness of LLM answers. It is used in evaluating truthfulness \\ \cline{1-1} & Truthfulness and & Truthfulness Measures the honesty of LLM information. \\ \cline{1-1} & Informativeness & It is used to evaluate the factuality of information. \\ \cline{1-1} & (Lin et al., 2022b) & Informativeness Evaluates the relevance and value of LLM responses \\ \cline{1-1} & LLM-Eval (Lin and Chen, 2023) & Evaluates the quality of a conversation. It is adaptable in various scenarios. \\ \hline \hline \end{tabular}
\end{table} TABLE III: Evaluation Metrics for the Factuality of LLMs.
This offers potentially enhanced appraisals of factuality in the generated text.
**QUIP-Score:**(Weller et al., 2023) is an n-gram overlap measure. It quantifies the degree to which a generated passage consists of exact spans found in a text corpus. The QUIP-Score serves to evaluate the 'grounding' ability of LLMs, specifically assessing whether model-generated answers can be directly located within the underlying text corpus. It is defined by comparing the precision of the character n-gram from the generated output to the pre-training corpus. This is formally illustrated by generation \(Y\) and the text corpus \(C\):
\[\mathrm{QUIP}(Y;C)=\frac{\sum_{\mathrm{gram}_{n}\in Y}\mathbb{F}_{C}\left( \mathrm{gram}_{n}\right)}{\left|\mathrm{gram}_{n}\in Y\right|}, \tag{4}\]
where \(\mathbb{F}(.)\)is an indicator function:
\[\mathbb{F}(.)=\begin{cases}1,&\text{if gram}\in C\\ 0,&\text{otherwise}\end{cases} \tag{5}\]
Therefore, a score of 0.5 implies that 50% of the n-grams derived from the generated text can be found in the pre-training corpus. The authors calculate a macro-average of this value over a collection of generations, which results in a single performance figure representative of a specific test dataset.
#### 3.1.2 Neural Evaluation Metrics
These metrics operate by comparing the output of these models with a standard or reference text by learning evaluator models. This category primarily comprises three prominent metrics, namely ADEM, BLEURT, and BERTScore. Each metric approaches the evaluation slightly differently, yet all aim to assess the semantic and lexical alignment between machine-generated text and its reference counterpart, thus ensuring the factuality of the generated content.
**ADEM:** The automatic Dialogue Evaluation Model (ADEM) metric is a cogent tool utilized to gauge the quality of responses generated by language models in a conversation. Developed by Lowe et al. (2017), this metric trains a Hierarchical RNN, in a semi-supervised fashion, to predict ratings for the machine-generated responses. The ADEM's assessment primarily hinges on a dialogue context, designated as \(\mathbf{c}\), alongside the model's response, labeled as \(\mathbf{\hat{r}}\), and a reference response, specified as \(\mathbf{r}\). These elements, when encoded via the hierarchical RNN, inform the ADEM which then predicts a score, reflecting the proximity of the model's response to factual accuracy and relevance:
\[score=(\mathbf{c}^{\top}M\mathbf{\hat{r}}+\mathbf{r}^{\top}N\mathbf{\hat{r}}- \alpha)/\beta, \tag{6}\]
where \(M,N\) are learnable parameters and \(\alpha,\beta\), represent scalar constants that serve as initial values.
**BERTScore:** the BERTScore metric, as introduced by Zhang et al. (2020), utilizes pre-trained embeddings from BERT to gauge the similarity between two sentences. This is done by assigning embeddings to a referenced sentence, "x", and a model-generated sentence, "\(\hat{x}\)", denoted as "x" and "\(\hat{\mathbf{x}}\)", respectively. The recall, precision, and F1-scores are then calculated to quantify the similarity between "x" and "\(\hat{\mathbf{x}}\)". This score provides a measure of the generated sentence's factuality, and thus, the reliability of the large language model itself:
\[R_{\mathrm{BERT}} =\frac{1}{\left|x\right|}\sum_{x_{i}\in\hat{x}}\max_{\hat{x}_{j} \in\hat{x}}\mathbf{x}_{i}^{\top}\mathbf{\hat{x}}_{j}, \tag{7}\] \[P_{\mathrm{BERT}} =\frac{1}{\left|\hat{x}\right|}\sum_{\hat{x}_{j}\in\hat{x}}\max_ {i\in\hat{x}}\mathbf{x}_{i}^{\top}\mathbf{\hat{x}}_{j},\] (8) \[F_{\mathrm{BERT}} =2\frac{P_{\mathrm{BERT}}\cdot R_{\mathrm{BERT}}}{P_{\mathrm{BERT }}+R_{\mathrm{BERT}}}, \tag{9}\]
where the recall and precision elements are calculated based on a token-match approach. The recall score stems from comparing each token in the reference sentence "\(x\)" to the corresponding token in the generated sentence "\(\hat{x}\)". Conversely, the precision score is derived by comparing each token in "\(\hat{x}\)" to a token in "\(x\)". A greedy matching strategy is employed to pair tokens that demonstrate the highest degree of similarity. This method provides a comprehensive analysis of how precise and accurate the language model's output aligns with the factual reference sentence.
**BLEURT:** the Bilingual Evaluation Understudy with Representations from Transformers (BLEURT) metric (Sellam et al., 2020) employs a unique pre-training scheme, where BERT is initially trained on a significant corpus of synthetic sentence pairs. This is reinforced by multiple lexical and semantic-level supervision signals, used concurrently. Following this pre-training stage, BERT is further fine-tuned on rating data, and its objective is to estimate human rating scores accurately. The initial stage of pre-training is essential for this metric because it significantly enhances the model's robustness, thereby effectively transforming it into a secure bulwark against quality drifts inherent in generative systems.
**BARTScore:**(Yuan et al., 2021) is a metric proposed to evaluate the quality of the generated text, such as in applications like machine translation and summarization. This metric conceives this evaluation as a problem of text generation, modeled using pre-trained sequence-to-sequence models. BARTScore uses BART, an encoder-decoder-based pre-trained model, to translate generated text to and from a reference point, earning higher scores when the text is more accurate and fluent. This metric offers several variants that can be flexibly applied in an unsupervised manner for different perspectives of text evaluation, such as fluency or factuality. Tests have shown that BARTScore can outperform existing top-scoring metrics in the majority of test settings across multiple datasets and perspectives.
#### 3.1.3 Human Evaluation Metrics
Human evaluation in factuality assessment is crucial due to its sensitivity to nuanced elements of language and context that may elude automated systems. Human evaluators excel at interpreting abstract concepts and emotional subtleties that can significantly inform the accuracy of evaluation. However, they are subject to limitations such as subjectivity, inconsistency, and potential for error. On the other hand, automated evaluations offer consistent results, and efficient processing of large data sets, and are ideal for tasks needing quantitative measurements. They also provide an objective benchmark for model performance comparison. Overall, an
ideal evaluation framework might blend automated evaluation's scalability and consistency with human evaluation's ability to interpret complex linguistic concepts.
**Attribution:** is a metric to verify that the output of LLMs is sharing only verifiable information about the external world. As proposed by [14], Attributable to Identified Sources (AIS) is a human evaluation framework that adopts a binary concept of attribution. A text passage \(y\) is deemed attributable to a set \(A\) of evidence if, and only if, an arbitrary listener would agree with the statement "According to \(A\), \(y^{\prime}\)" within the context of \(y\). The AIS framework awards a full score (1.0) if every element of content in passage \(y\) can be linked to the evidence set \(A\). Conversely, it gives a score of zero (0.0) if this condition is not met.
Based on AIS, Gao et al. (2023a) propose a more fine-grained, sentence-level extension of AIS called Auto-AIS, where annotators assign AIS scores to each sentence, and an average score across all sentences is reported. This procedure effectively measures the percentage of sentences that are fully attributable to the evidence. Context, such as surrounding sentences and the question the text answers, is provided to annotators for more informed judgment. A limit is also set for the number of evidence snippets in the attribution report to maintain conciseness.
During model development, an automated AIS metric is defined to approximate human AIS evaluations, using a natural language inference model, which correlates well with AIS scores. Before computing the scores, they improve accuracy by decentralizing each sentence in context.
**FAcScore:**[15] is a novel evaluation metric designed to assess the factual precision of long-form text generated by LLMs. The challenge of evaluating the factuality of such text arises from two main issues: (1) the generated content often contains a mix of supported and unsupported information, making binary judgments insufficient, and (2) human evaluation is both time-consuming and expensive. To address these challenges, FAcScore breaks down a generated text into a series of atomic facts--short statements that each convey a single piece of information. Each atomic fact is then evaluated based on its support from a reliable knowledge source. The overall score represents the percentage of atomic facts that are supported by the knowledge source. The paper conducted extensive human evaluations to compute FAcScores for biographies generated by several state-of-the-art commercial LLMs, including InstructGPT, ChatGPT, and the retrieval-augmented PerplexityAI. The results revealed that these LLMs often contain factual inaccuracies, with FAcScores ranging from 42% to 71%. Notably, the factual precision of these models tends to decrease as the rarity of the entities in the biographies increases.
#### 3.1.4 LLM-based Metrics
Using LLMs for evaluation offers efficiency, versatility, reduced reliance on human annotation, and the capability of evaluating multiple dimensions of conversation quality in a single model call, which improves scalability. However, potential issues include a lack of established validation, which can lead to bias or accuracy problems if the LLM used for evaluation is not thoroughly vetted. The decision process to identify suitable LLMs and decoding strategies can be complex and pivotal to obtaining accurate evaluations. The range of evaluation may also be limited, as the focus is often on open-domain conversations, possibly leaving out assessments in specific or narrow domains. While reducing human input can be beneficial, it can also miss out on crucial interaction quality aspects better evaluated by human judges, such as emotional resonance or nuanced understandings.
**GPTScore:**[11] is a new evaluation framework designed to assess the quality of output from generative AI models. To provide these evaluations, GPTScore taps into the emergent capabilities of 19 different pre-trained models, such as zero-shot instruction, and uses them to judge the generated texts. These models vary in scale from 80M to 175B. Testing across four text generation tasks, 22 aspects of evaluation, and 37 related datasets, has demonstrated that GPTScore can effectively evaluate text per instructions in natural language. This attribute allows it to sidestep challenges traditionally encountered in text evaluation, like the need for sample annotations and achieving custom, multi-faceted evaluations.
**GPT-judge:**[15] is a finetuned model based on the GPT-3-6.7B, which is trained to evaluate the truthfulness of answers to questions in the TruthfulQA dataset. The training set consists of triples in the form of question-answer-label combinations, where the label can be either true or false. The model's training set includes examples from the benchmark and answers generated by other models assessed by human evaluation. In its final form, the GPT-judge uses examples from all models to evaluate the truthfulness of responses. This training includes all questions from the dataset, with the goal being to evaluate truth, not generalize new questions.
The study conducted by the authors [15] focuses on the application of GPT-judge in assessing _Truthfulness_ and _Informativeness_ using the TruthfulQA Dataset. The authors undertook the fine-tuning of two distinct GPT-3 models to evaluate two essential aspects: Truthfulness, which pertains to the accuracy and honesty of information provided by the LLM, and Informativeness, which measures how effectively the LLM conveys relevant and valuable information in its responses. From these two fundamental concepts, the authors derived a combined metric denoted as _truth * info_. This metric represents the product of scalar scores for both truthfulness and informativeness. It not only quantifies the extent to which questions are answered truthfully but also incorporates the assessment of informativeness for each response. This comprehensive approach prevents the model from generating generic responses like "I have no comment" and ensures that responses are not only truthful but also valuable. These metrics have found widespread deployment in evaluating the factuality of information generated by LLMs [16, 15].
**LLM-Eval:**[15] is a novel evaluation methodology for open-domain dialogues with LLMs. Unlike conventional evaluation methods which rely on human annotations, ground-truth responses, or multiple LLM prompts, LLM-Eval uses a unique prompt-based evaluation
process employing a unified schema to assess various elements of a conversation's quality during a single model function. Extensive evaluations of LLM-Eval's performance using multiple benchmark datasets indicate it is effective, efficient, and adaptable compared to traditional evaluation practices. Further, the authors stress the necessity of selecting appropriate LLMs and decoding strategies for precise evaluation outcomes, underscoring LLM-Eval's versatility and dependability in assessing open-domain conversation systems across a variety of circumstances.
### _Benchmarks for Factuality Evaluation_
In this section, we delve into the benchmarks that are prominently employed to assess the factuality of LLMs. Specific benchmarks tailored for evaluating factuality in LLMs are tabulated in Table IV.
MMLU [11] and TruthfulQA [12] stand as two pivotal benchmarks in the realm of evaluating the factuality of LLMs [10, 11, 12]. The MMLU benchmark is proposed to measure a text model's multitask accuracy across a diverse set of 57 tasks. These tasks span a wide range of subjects, from elementary mathematics to US history, computer science, law, and more. The benchmark is designed to test both the world knowledge and problem-solving ability of models. The findings from the paper suggest that while most recent models perform at near random-chance accuracy, the largest GPT-3 model showcased a significant improvement. However, even the best models still have a long way to go before achieving expert-level accuracy across all tasks [10]. TruthfulQA is a benchmark designed to assess the truthfulness of a language model's generated answers. The benchmark consists of 817 questions spanning 38 categories, including health, law, finance, and politics. These questions were crafted in such a way that some humans might answer them falsely due to misconceptions or false beliefs. The goal for models is to avoid generating these false answers that they might have learned from imitating human texts. The TruthfulQA benchmark serves as a tool to highlight the potential pitfalls of relying solely on LLMs for accurate information and emphasizes the need for continued research in this area.
HaluEval [13] is a benchmark designed to understand and evaluate the propensity of LLMs like ChatGPT to generate hallucinations. A hallucination, in this context, refers to content that either conflicts with the source or cannot be verified based on factual knowledge. The HaluEval benchmark offers a vast collection of generated and human-annotated hallucinated samples, aiming to evaluate the performance of LLMs in recognizing such hallucinations. The benchmark utilizes a two-step framework, termed "sampling-then-filtering", based on ChatGPT to generate these samples. Additionally, human labelers were employed to annotate hallucinations in ChatGPT responses. The HaluEval benchmark is a comprehensive tool that not only evaluates the hallucination tendencies of LLMs but also provides insights into the types of content and the extent to which these models are prone to hallucinate.
BigBench [14] focuses on the capabilities and limitations of LLMs. It comprises 204 tasks from diverse domains such as linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and more. The benchmark is designed to evaluate tasks believed to be beyond the capabilities of current language models. The study evaluates the performance of various models, including OpenAI's GPT models, on BIG-bench and compares them to human expert raters. Key findings suggest that model performance and calibration improve with scale but are still suboptimal when compared to human performance. Tasks that involve a significant knowledge or memorization component show predictable improvement, while tasks that exhibit "break-through" behavior at a certain scale often involve multiple steps or components.
Huang et al. [13] propose C-Eval, the first comprehensive Chinese evaluation suite. It can be used to evaluate the advanced knowledge and reasoning abilities of foundational models within a Chinese context. The evaluation suite comprises multiple-choice questions spanning 52 diverse disciplines, with four levels of difficulty: middle school, high school, college, and professional. Additionally, C-Eval Hard is introduced for very challenging subjects within the C-Eval suite, which demands advanced reasoning abilities to solve. Their evaluations of state-of-the-art LLMs, including both English and Chinese-oriented models, have shows that there is significant room for improvement as only GPT-4 managed to achieve an average accuracy of over 60%. The authors focus on assessing LLMs' advanced abilities within a Chinese context. The researchers assert that LLMs intended for a Chinese environment should be evaluated based on their knowledge of Chinese users' primary interests, such as Chinese culture, history, and laws. With C-Eval, the authors aim to guide developers in understanding the abilities of their models from multiple dimensions to facilitate the development and growth of foundational models for Chinese users. Simultaneously, C-Eval has not just introduced a whole suite but subsets that can serve as individual benchmarks, thereby assessing certain model abilities and analyzing key strengths and limitations of foundational models. Experiment results have shown that although GPT-4, ChatGPT, and Claude were not exclusively tailored for Chinese data, they emerged as the top performers on C-Eval.
SelfAware [15] aims to investigate whether models recognize what they don't know. This dataset encompasses two types of questions: unanswerable and answerable. The dataset comprises 2,858 unanswerable questions gathered from various websites and 2,337 answerable questions extracted from sources such as SQuAD, HotpotQA, and TriviaQA. Each unanswerable question is confirmed as such by three human evaluators. In the conducted experiments, GPT-4 achieves the highest F1 score of 75.5, compared to a human score of 85.0. Larger models tend to perform better and in-context learning can enhance performance.
The Pinocchio benchmark [16] serves as an extensive evaluation platform, emphasizing factuality and reasoning for LLMs. This benchmark encompasses 20,000 varied factual queries from diverse sources, timeframes, fields, regions, and languages. It tests an LLM's capability to discern combined facts, process both organized and scat
tered evidence, recognize the temporal evolution of facts, pinpoint minute factual disparities, and withstand adversarial inputs. Each reasoning challenge within the benchmark is calibrated for difficulty to allow for detailed analysis.
Kasai et al. (2022) introduce a dynamic QA platform named REALTIMEQA. This platform is unique in that it announces questions and evaluates systems on a regular basis, specifically weekly. The questions posed by REALTIMEQA pertain to current events or novel information, challenging the static nature of traditional open domain QA datasets. The platform aims to address instantaneous information needs, pushing QA systems to provide answers about recent events or developments. Their preliminary findings indicate that while GPT-3 can often update its generation results based on newly-retrieved documents, it sometimes returns outdated answers when the retrieved documents lack sufficient information.
Vu et al. (2023) present a dynamic benchmark called FreshQA designed to evaluate up-to-date world knowledge of LLMs. Its questions range from those that are never-changing to those that are fast-changing, as well as questions based on false premises. The aim is to challenge the
static nature of LLMs and test their adaptability to ever-changing knowledge through human evaluations. They develop a reliable evaluation protocol that uses a two-mode system: RELAXED and STRICT for a comprehensive understanding of model performance, ensuring that answers are confident, definitive, and accurate. They also provide a strong baseline named FRESHPROMPT, which seeks to enhance LLM performance by integrating real-time data from search engines. Initial experiments reveal that, outdated training data weakens the performance of LLMs and the FRESHPROMPT method can significantly enhance it. The research underscores the need for LLMs to be refreshed with current information to ensure their relevance and accuracy in a constantly evolving world.
FreshQA (Vu et al., 2023) is a dynamic benchmark designed to evaluate up-to-date world knowledge of LLMs. Its questions range from those that are never-changing to those that are fast-changing, as well as questions based on false premises. The aim is to challenge the static nature of LLMs and test their adaptability to ever-changing knowledge through human evaluations. They develop a reliable evaluation protocol that uses a two-mode system: RELAXED and STRICT for a comprehensive understanding of model performance, ensuring that answers are confident, definitive, and accurate. They also provide a strong baseline named FRESHPROMPT, which seeks to enhance LLM performance by integrating real-time data from search engines. Initial experiments reveal that, outdated training data weakens the performance of LLMs and the FRESHPROMPT method can significantly enhance it. The research underscores the need for LLMs to be refreshed with current information to ensure their relevance and accuracy in a constantly evolving world.
Several benchmarks, such as BigBench (Srivastava et al., 2023) and C-Eval (Huang et al., 2023b), encompass subsets that extend beyond the realm of factual knowledge or factuality. In this work, we specifically emphasize and focus on
those subsets related to factuality.
There are benchmarks primarily designed for Pre-trained Language Models (PLMs) that can also be adapted for LLMs. Some studies use them for evaluating LLM's factuality, but they are not so widely-used, so we have chosen to exclude them from the table for clarity. These benchmarks predominantly encompass knowledge-intensive tasks, as highlighted by Petroni et al. (2021). Those benchmarks include NaturalQuestions (NQ) Kwiatkowski et al. (2019), TriviaQA (TQ) Joshi et al. (2017), OTT-QA Chen et al. (2021), AmbigQA Min et al. (2020) and WebQuestion (WQ) Berant et al. (2013) of Open-domain Question Answering (QA) task, the HotpotQA Yang et al. (2018), 2WikiMultihopQA Ho et al. (2020), IIRC Ferguson et al. (2020), MuSiQue Trivedi et al. (2022) of multi-step QA, the EL15 Fan et al. (2019) of the Long-form QA task, the FEVER Thorne et al. (2018), FM2 Eisenschlos et al. (2021), HOVER Jiang et al. (2020), FEVEROUS Aly et al. (2021), 2FSChecking task, the T-REx Elsahar et al. (2018), zsRE Levy et al. (2017) and LAMA Petroni et al. (2019) to examine factual knowledge contained in pretrained language models, the WikBio Lebret et al. (2016) of biography generation task, the RoSE Liu et al. (2023), WikiAsp Hayashi et al. (2021) of summarization task, the KILT Petroni et al. (2021) of comprehensive knowledge intensive tasks, the MassiveText Rae et al. (2022b), Curation Corpus (Curation, 2020), Wikitext103 Merity et al. (2016), Lambda Paperno et al. (2016), C4 Raffel et al. (2020b), Pile Gao et al. (2020) of language modeling task, the WoW Dinan et al. (2019), DSTC7 track2 Galley et al. (2019), DSTC11 track5 Zhao et al. (2023a) of dialogue task, the RealToxicityPrompts Gehman et al. (2020) of toxicity reduction, the CommaQA Khot et al. (2022), StrategyQA Geva et al. (2021), TempQuestions Jia et al. (2018), IN-FOTABS Gupta et al. (2020) of diverse reasoning tasks.
Some studies Manakul et al. (2023); Min et al. (2023) also provide a small dataset, but they mainly concentrate on the evaluation metrics or methods for factuality, we choose to discuss them in the next subsection.
### _Factuality Evaluation Studies_
In this section, we delve into studies that evaluate factuality in LLMs without introducing a specific benchmark, focusing primarily on those whose main contribution lies in the evaluation methodology. We spotlight works that have pioneered evaluation techniques, metrics, or have offered distinctive insights into the factuality evaluation of LLMs.
Manakul et al. (2023) use an evaluation process that encompasses several key steps. Initially, synthetic Wikipedia articles are generated using GPT-3, focusing on individuals from the Wikibio dataset. Subsequently, manual annotation is performed at the sentence level, classifying sentences as "Major Inaccurate", "Minor Inaccurate", or "Accurate", with "Major Inaccurate" denoting sentences unrelated to the topic. Passage-level scores are derived by averaging sentence-level labels, and identifying cases of total inaccuracies through score distribution analysis. Inter-annotator agreement is assessed using Cohen's \(\kappa\) Cohen (1960). Evaluation metrics primarily employ precision-recall curves (PR-Curves), distinguishing between "Non-Factual Sentences" (a specific subset), and "Factual Sentences." These PR-Curves elucidate the trade-off between precision and recall for different detection methods.
Wang et al. (2023) ask several LLMs, including ChatGPT, GPT-4 OpenAI (2023), BingChat Microsoft (2023) to answer open questions from NaturalQuestions Kwiatkowski et al. (2019) and TriviaQA Joshi et al. (2017). They manually estimate the accuracy of those LLMs on open question answering, and find that though LLMs can achieve nice performance but still far away from perfect.
Pezeshkpour (2023) propose a new metric for measuring whether a certain type of knowledge is present in LLM. This metric is based on information theory and measures knowledge by analyzing the probability distribution of predictions made by LLM before and after injecting target knowledge. The accuracy of GPT-3.5 in the Knowledge Probing task is tested on the T-REx Elsahar et al. (2018) and LAMA Petroni et al. (2019) datasets.
Varshney et al. (2023) point out a common real-world occurrence where users often ask questions based on false premises. These questions are challenging for state-of-the-art models. This necessitated the creation of a new evaluation dataset. To this end, the authors have conducted a case study and compiled a set of 50 such adversarial questions, all of which the GPT-3.5 model answered incorrectly. The aim is to create a challenging experimental setup to assess the performance of models faced with such questions. In order to enhance the evaluation, corresponding true premise questions were created for each of the false premise questions. This allow for a holistic evaluation of the model's performances, taking into consideration both correct and incorrect premises. The authors make sure to evaluate the complete answers given by the model for correct and incorrect questions - in this context, it's not enough for an answer to be partially correct, the entire answer needs to be accurate to be marked as correct. For example: For the false premise question "Why does Helium have an atomic number of 1?", the corresponding true premise question is "Why does Hydrogen have an atomic number of 1?".
FACTOOL Chern et al. (2023) is a tool designed to function as a factuality detector, with the primary purpose of auditing generative chatbots and assessing the reliability of their outputs. This tool is employed to evaluate several contemporary chatbots, including GPT-4, ChatGPT, Claude, Bard Google (2023), and Vicuna Chiang et al. (2023). Notably, FACTOOL itself leverages the capabilities of GPT-4. For the evaluation process, the researchers have curated a diverse set of prompts: 30 from knowledge-based question answering (KB-QA), 10 each from code, math, and scientific domains. The KB-QA prompts were sourced from a prior study, code prompts were taken from HumanEval, math prompts from another distinct study, while the scientific prompts were crafted by the authors themselves. The evaluation metrics included both claim-level and response-level accuracies for each chatbot. To offer a more comprehensive and equitable evaluation, a weighted claim-level accuracy is used. The weighting is determined based on the proportion of prompts from each category. The findings are illuminating. GPT-4 emerge as the top performer in terms of both weighted claim-level factual accuracy and response-level accuracy among all the chatbots assessed. Another intrigu
ing observation is that chatbots that underwent supervised fine-tuning, such as Vicuna-13B, exhibited commendable performance in standard scenarios like KB-QA. However, their performance dip in more intricate scenarios, including those involving math, code, and scientific queries.
Kadavath et al. (2022) investigate whether language models can evaluate the accuracy of their own assertions and predict which questions they can answer correctly. It is found that larger models are well-calibrated on diverse multiple-choice and true/false questions if given in the appropriate format. The approach to self-evaluation on open-ended tasks is to ask the models to initially suggest answers, and then evaluate the probability (P[True]) that their answers are correct. This resulted in compelling performance, calibration, and scaling on a diverse range of tasks. Furthermore, self-evaluation performance improved when the models are allowed to consider many of their own suggestions before predicting the validity of a specific one.
Yu et al. (2023) explore whether the internal knowledge of LLMs can replace the retrieved documents on knowledge intensive tasks. They ask LLMs, such as InstructGPT (OpenAI, 2022a), to directly generate contexts given a question rather than retrieving from database. They find the generated documents contain the golden answers more often than the top retrieved documents. Then they feed the generated docs and retrieved docs to the Fusion-in-Decoder model (Izacard and Grave, 2021b) for knowledge-intensive tasks such as Open-domain QA (Kwiatkowski et al., 2019) and find the generated docs are more effective than the retrieved docs, suggesting that the LLMs contain enough knowledge for knowledge-intensive tasks.
Menick et al. (2022b) propose a task named _Self-supported QA_ to evaluate LLMs' ability in also producing citations when generating answers. Authors ask humans to evaluate whether the responses of their proposed model GopherCite are plausible and whether they are supported by the accompanying quote evidence on datasets such as NQ, ELIS, TruthfulQA.
Chen et al. (2023c) propose CONNER, a framework that evaluates LLMs as generators of knowledge. It focuses on six areas: Factuality, Relevance, Coherence, Informativeness, Helpfulness, and Validity. It evaluates whether the generated information can be backed by external proof (Factuality), is relevant to the user's query (Relevance), and is logically consistent (Coherence). It also checks if the knowledge provided is novel or surprising (Informativeness). The Extrinsic evaluation measures whether the knowledge enhances downstream tasks (Helpfulness) and its results are factually accurate (Validity).
In the realm of factuality evaluation, the Model Editing task holds a unique position, focusing on refining the internal knowledge of models. This task comes with its own set of specialized evaluation metrics. Predominantly, the Zero-Shot Relation Extraction (zsRE) Levy et al. (2017) and CounterFact Meng et al. (2022) serve as the primary benchmarks for assessing Model Editing techniques. When evaluating these methods, several key criteria emerge: Reliability: Post-editing, the model should consistently generate the intended output. Generalization: The model should be adept at producing the target output even when presented with paraphrased inputs. Locality: Edits should be localized, ensuring that facts not related to the specific edit remain intact. However, given the intricate web of interconnected facts, recent studies Cohen et al. (2023a); Yao et al. (2023b); Zhong et al. (2023b) have advocated for a more holistic evaluation approach. They introduce broader criteria for fact updates, encompassing aspects like portability, logical generalization, among others.
Some work's main contributions lie at the methods to improve the factuality of LLMs, and their evaluation part may also be informative and important when people reach the related study. So we choose to list evaluation parts in the Table VI but not to discuss their evaluation in detail.
### _Evaluating Domain-specific Factuality_
To assess the performance and factuality of these specialized LLMs, a plethora of datasets and benchmarks have been proposed across various domains. These resources not only serve as critical tools for evaluating the capabilities of LLMs but also facilitate advancements in specialized applications. We summarize them in Table VII. The distinction between this subsection and the previous two lies in its focus. This subsection delves deeper into factuality evaluation tailored to specific domains, while Sec 3.2 and 3.3 primarily concentrate on general factuality evaluation, with only a portion of their content dedicated to datasets evaluating factuality within specific domains.
**Finance:** Xie et al. (2023b) designed a financial natural language understanding and prediction evaluation benchmark dubbed FLARE, based on their collected financial instruction tuning dataset FIT. This benchmark is used to evaluate their FinMA model. It randomly selects validation sets from FIT to choose the best model checkpoint and utilizes distinct test sets for evaluation. FLARE is a broader variant compared to the existing FLUE benchmark Shah et al. (2022) as it also encapsulates financial prediction tasks like stock movement prediction in addition to standard NLP tasks. The FLARE dataset includes several subtasks, such as sentiment analysis (FPB, FiQA-SA), news headline classification (Headline), named entity recognition (NER), question answering (FinQA, ConvFinQA), and stock movement prediction (BigData22, ACL18, CIKM18). Performance is gauged via a variety of metrics for each task, such as the accuracy and weighted F1 Score for sentiment analysis, entity-level F1 score for named entity recognition, and accuracy and Matthews correlation coefficient for stock movement prediction. Several methods, including their own FIT-fine-tuned FinMA and other LLMs (BloombergGPT, GPT-4, ChatGPT, BLOOM, GPT-NeoX, OPT-66B, Vicuna-13B) are dedicated to their comparison. BloombergGPT's performance is assessed in various shot scenarios, while the zero-shot performance is reported for the remaining results. Some of the baselines depend on human evaluations given that LLMs without fine-tuning fail to generate instruction-defined answers. Conversely, FinMA's results are conducted on a zero-shot basis and can be evaluated automatically. To enable direct comparison between the performance of FinMA and BloombergGPT, despite the former not releasing their test datasets, test datasets were constructed with the same data distribution.
Li et al. (2023e) propose EcomInstruct benchmark, which experiment investigates the performance of their EcomGPT
language model in comparison to baseline models such as BLOOM and BLOOMZ. Categories of these baseline models include pre-trained large models with decoder-only architecture like BLOOM, and instruction-following language models like BLOOMZ and ChatGPT. The evaluation metric of the EcomInstruct involves converting all tasks to generative paradigms and using text generation evaluation metrics like ROUGE-L. Classification tasks were evaluated with precision, recall, and F1 scores. The EcomInstruct dataset, comprising 12 tasks across four major categories, is divided into training and testing sections. The EcomGPT is trained on 85,746 instances of E-commerce data. The performance of the model is assessed based on its capacity to generalize unseen tasks or datasets, with emphasis on cross-language and cross-task paradigm settings.
**Medicine:** Wang et al. (2023d) propose a localized medical benchmark called CMB, or the Comprehensive Medical Benchmark in Chinese. CMB is rooted entirely in the native Chinese linguistic and cultural framework. While traditional Chinese medicine is a significant part of this evaluation, it does not make up the entire benchmark. Both prominent LLMs, such as ChatGPT and GPT-4, and localized Chinese LLMs, including those specializing in the health domain, are evaluated using CMB. However, the benchmark is not designed as a leaderboard competition, but rather as a tool for self-assessment and understanding the progression of models in this field. CMB embodies a comprehensive, multi-layered medical benchmark in Chinese, comprised of hundreds of thousands of multiple-choice questions and complex case consultation questions. This wide range of queries covers all clinical medical specialties and various professional levels, seeking to evaluate a model's medical knowledge and clinical consultation capabilities comprehensively.
Li et al. (2023b) introduce Huatuo-26M dataset, the largest Chinese medical Question and Answer (QA) dataset to date, including over 26 million high-quality medical QA pairs. It covers a wide range of topics, such as diseases, symptoms, treatments, and drug information. The dataset is a valuable resource for anyone looking to improve AI applications in the medical field, such as chatbots and intelligent diagnostic systems. The Huatuo-26M dataset is gathered and integrated from various sources, including online medical encyclopedias, online medical knowledge bases, and online medical consultation records. Each QA pair in the dataset contains a problem description and a corresponding answer from a doctor or expert. Although a significant proportion of the Huatuo-26M dataset is constituted by the online medical consultation records, the text format data from these records is not publicly available, for unspecified reasons. This dataset is expected to be instrumental for multiple types of research and AI applications
\begin{table}
\begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Reference & Domain & Task & Datasets & Metrics & Evaluated LLMs \\ \hline \multirow{4}{*}{Xie et al. (2023b)} & \multirow{4}{*}{Finance} & Sentiment analysis, News headline classification Named entity recognition Question answering Stock movement prediction & & F1, Acc, Avg F1, Entity F1, EM, MCC & GPT-4, BloombergGPT, FinMA-7B, 30B, 7B-full, Vicuna-7B \\ \hline \multirow{2}{*}{Li et al. (2023e)} & \multirow{2}{*}{Finance} & \multirow{2}{*}{134 E-com tasks} & \multirow{2}{*}{EcomInstruct} & Micro-F1, Macro-F1, ROUGE & BLOOM, BLOOMZ, ChatGPT, EconGPT \\ \cline{4-5} & & & & & GPT-4, ChatGTM2-6B, ChatGPT, DoctorGLM, Baiduan-13B-chat, Huatuo-GTP, MedicalGPT, ChatGTM-Oscult, ChatGLM-Med, BentSao, BianQue-2 \\ \hline \multirow{2}{*}{Li et al. (2023b)} & \multirow{2}{*}{Medicine} & \multirow{2}{*}{Multi-Choice QA} & \multirow{2}{*}{CMB} & \multirow{2}{*}{Acc} & GPT-4, ChatGTM2-6B, ChatGPT, DoctorGLM, Baiduan-13B-chat, Huatuo-GTP, MedicalGPT, ChatGTM-Oscult, ChatGLM-Med, BentSao, BianQue-2 \\ \hline \multirow{2}{*}{Li et al. (2023b)} & \multirow{2}{*}{Medicine} & \multirow{2}{*}{Generative-QA} & \multirow{2}{*}{Huatuo-26M} & BLEU, ROUGE, GLEU & \multirow{2}{*}{T5, GPT2} \\ \cline{4-5} & & & & GLEU, ROUGE, GLEU & \\ \hline \multirow{4}{*}{Jin et al. (2023)} & \multirow{2}{*}{Medicine} & Nomenclature, Genomic location, Functional analysis, Sequence alignment & \multirow{2}{*}{GeneTuring} & \multirow{2}{*}{Acc} & GPT-2, BioGPT, BioMedLM, GPT-3, ChatGPT, New Bing \\ \hline \multirow{4}{*}{Guha et al. (2023)} & \multirow{4}{*}{Law} & Issue-spotting, Rule-recall, Rule-application, Rule-conclusion, Interpretation, & \multirow{2}{*}{LegalBench} & \multirow{2}{*}{Acc, EM} & GPT-4, GPT-3.5, Claude-1, Incite, OPT Falcon, LLaMA-2, FLAN-T5... \\ \cline{4-5} & & & & & F1, Acc, ROUGE-L, Normalized log-distance... & \multirow{2}{*}{ChatGPT, InterLM-Chat, StableBuluga2...} \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Benchmarks for domain-specific factuality evaluation. The table presents the domain, tasks, datasets, and the LLMs evaluated in the respective study.
in the medical field. These areas of application extend to Natural Language Processing tasks like developing QA systems, text classification, sentiment analysis, and Machine Learning model training tasks like disease prediction and personalized treatment recommendation. Consequently, the dataset is favorable for developing AI applications in the medical field, from intelligent diagnosis systems to medical consultation chatbots.
Jin et al. (2023) use nine tasks related to NCBI resources to evaluate the proposed GeneGPT model. The dataset used is GeneTuring benchmark Hou and Ji (2023), which contains 12 tasks with 50 question-answer pairs each. The tasks are divided into four modules - Nomenclature (Gene alias, Gene name conversion), Genomic location(Gene SNP association, Gene location, SNP location), Functional analysis(Gene disease association, Protein-coding genes), Sequence alignment (DNA to human genome, DNA to multiple species). Two settings of GeneGPT are assessed: a full setting where all prompt components are used, and a slim setting using only two components. The performance of GeneGPT is compared against various baselines including general-domain GPT-based LLMs like GPT-2, GPT-3, and ChatGPT. Additionally, GPT-2-sized biomedical domain-specific LLMs such as BioGPT and BioMedLM are evaluated. The new Bing, a retrieval-augmented LLM with access to relevant web pages, is also assessed. The result evaluation of the compared methods is based on the results reported in the original benchmark and are manually evaluated. However, the evaluation of the proposed GeneGPT method is determined through automatic evaluations.
**Law:** LegalBench Guha et al. (2023) is a benchmark for legal reasoning introduced due to the increasing use of LLMs in the legal field. It consists of 162 tasks covering six types of legal reasoning and was collaboratively constructed through an interdisciplinary process with significant contributions from legal professionals. The tasks designed aim to either demonstrate practical legal reasoning capabilities or measure reasoning skills that are of interest to lawyers. To facilitate discussions between different fields relating to LLMs in law, LegalBench tasks correspond to popular legal frameworks for describing legal reasoning, thus creating a shared language between lawyers and LLM developers. The paper not only describes LegalBench, but also presents an evaluation of 20 different open-source and commercial LLMs and highlights the types of research opportunities that LegalBench can provide.
LawBench Fei et al. (2023) is an evaluation framework dedicated to assessing the capabilities of LLMs in relation to legal tasks. The context-specificity and high-stakes nature of the law field make it crucial to have a clear grasp of LLMs' legal knowledge and their ability to execute legal tasks. LawBench dives deep into three cognitive evaluations of LLMs: the capability to memorize crucial legal details, understanding legal texts, and applying legal knowledge to resolve complicated legal problems. A total of 20 diverse tasks have been put together, covering five main task categories: single-label classification, multi-label classification, regression, extraction, and generation. Throughout the evaluation process, 51 LLMs were extensively tested under LawBench, compiling a spectrum of language models including 20 multilingual LLMs, 22 Chinese-oriented LLMs, and 9 law-specific LLMs. The results concluded that GPT-4 ranked as the superior LLM in the law domain, significantly surpassing its competitors. Despite the noted improvements when LLMs were fine-tuned on law-specific texts, the study acknowledged that there is still a long road ahead in achieving highly reliable LLMs for legal tasks.
## 4 Analysis of Factuality
In the previous Section 3, we provide quantitative statistics related to evaluating factuality. In this section, we delve deeper, exploring the underlying mechanisms that influence factuality in large language models.
### _Analysis of Factuality_
This subsection delves into intriguing analyses concerning the factuality of LLMs, focusing on aspects that aren't directly tied to evaluation, or enhancement. Specifically, we explore the mechanisms through which LLMs process, interpret, and produce factual content. The subsequent sections offer an in-depth examination of different dimensions of factuality in LLMs, ranging from their knowledge storage, and awareness to their approach to managing conflicting data. 2
Footnote 2: While some research Neeman et al. (2023); Wang et al. (2021) applies some similar analyses to Pretrained Language Models. This review excludes them as they are not typically considered Large Language Models.
#### 4.1.1 Knowledge Storage
The language model serves as a repository of knowledge, storing a multitude of information about the world within its parameters Petroni et al. (2019). However, the organization of this knowledge within LLMs remains largely mysterious. Meng et al. (2022) introduced a methodology called causal tracing to measure the indirect impact of hidden states or activations. This technique was employed to illustrate that factual knowledge is primarily stored in the early-layer feed-forward networks (FFNs) of such models. Similarly, Geva et al. (2021) also suggests that a substantial portion of factual information is encoded within the FFN layers. They conceptualize the input of the FFN as a query, the first layer as keys, and the second layer as values. Consequently, the intermediate hidden dimension of the FFN can be interpreted as the number of memories within the layer, and the intermediate hidden state represents a vector comprising activation values for each memory. As a result, the final output of the FFN can be understood as the weighted sum of activated values. The authors further demonstrate that the value vectors often encapsulate human-interpretable concepts and knowledge Geva et al. (2023, 2022). In addition, Chen et al. (2023) made an intriguing finding that the language model contains language-independent neurons that express multilingual knowledge and degenerate neurons that convey redundant information by applying the integrated gradients method Lundstrom et al. (2022). Nevertheless, it is important to note that the aforementioned studies primarily focus on the representation of individual facts, and the comprehensive understanding of how factual knowledge is precisely organized
and interconnected within these models remains an ongoing challenge.
#### 4.1.2 Knowledge Completeness and Awareness
This subsubsection delves into the intriguing realm of LLMs' self-awareness, their capacity to discern their knowledge gaps, and the balance between their internally generated knowledge and externally retrieved information. We delve into the dichotomy between parametric knowledge and retrieved knowledge, exploring the promises and challenges these models bring to the forefront of knowledge-intensive tasks.
**Knowledge Awareness:** Several studies have investigated the knowledge awareness of Large Language Models, specifically assessing whether LLMs can accurately estimate the correctness of their own responses. Most of these studies treat LLMs as "black boxes," prompting the models to report their confidence levels or calculating the perplexity of the model's output as an indicator of response likelihood. Gou et al. (2023) explore the model's ability to validate and iteratively refine its outputs, akin to how humans interact with tools. The authors find that solely relying on self-correction without external feedback can lead to marginal improvements or even diminished performance. Ren et al. (2023) experiment with settings either augmented or not with external document retrieval to determine whether models recognize their own knowledge boundaries. Their findings indicate that LLMs possess an inaccurate perception of their factual knowledge boundaries and tend to be overly confident about their responses. LLMs often fail to fully harness the knowledge they possess; however, retrieval enhancement can somewhat compensate for this shortcoming. Yin et al. (2023) introduce a dataset named "SelfAware" to test if models recognize what they don't know, encompassing both answerable and unanswerable questions. The experiment suggests that models do possess some capacity to discern their own knowledge gaps, but they are still far from human levels. GPT-4 outperforms other models, instructions and InContext-Learning Dong et al. (2023) can enhance a model's discriminatory ability. Kadavath et al. (2022) focus on LLM self-assessment based on Language Model calibration using multiple-choice questions. Their findings revealed that the "none of the above" option decreased accuracy, larger models showed better calibration, and RLHF hindered model calibration levels. However, simply adjusting the temperature parameter can rectify this issue. Azaria and Mitchell (2023) assess the truthfulness of statements generated by LLMs, by using the model's internal state and hidden layer activations. The authors, employing a feedforward neural network, can classify if the model is misleading by utilizing the hidden output states.
**Parametric Knowledge vs Retrieved Knowledge:** Yu et al. (2023) explore whether the internal knowledge of LLMs can replace the retrieved documents on knowledge-intensive tasks. They ask LLMs, such as InstructGPT, to directly generate contexts given a question rather than retrieving them from the database. They find the generated documents contain the golden answers more often than the top retrieved documents. Then they feed the generated docs and retrieved docs to the Fusion-in-Decoder model (Izacard and Grave, 2021) for knowledge-intensive tasks such as Open-domain QA (Kwiatkowski et al., 2019) and find the generated docs are more effective than the retrieved docs, suggesting that the LLMs contain enough knowledge for knowledge-intensive tasks.
On the contrary, these observations have been contested in subsequent investigations. Kandpal et al. (2023) underscore the dependency of LLMs on the number of associated documents seen during pre-training. They argue that the success in answering fact-based questions is highly linked to the number of documents containing the topic of the question that were encountered in pre-training. The study further posited the necessity of scaling models extensively to achieve competitive performance for questions with minimum representation in the training data. Adding to these concerns, Sun et al. (2023) critically evaluate the factual knowledge base of LLMs, using a specifically designed Head-to-Tail benchmark comprised of 18K question-answer pairs. The results show that the understanding of factual knowledge, particularly related to torso-to-tail entities, by currently available LLMs is suboptimal.
In summary, while LLMs show promise in handling knowledge-intensive tasks, their dependency on pre-training information and limitations in factual accuracy remain significant hurdles. It underscores the need for further advancements in the field and the importance of incorporating complementary methods, such as retrieval augmentation, to enhance the learning of long-tail knowledge in LLMs.
#### 4.1.3 Contextual Influence and Knowledge Conflict
This sub-subsection examines the interplay between an LLM's inherent parametric knowledge and the provided contextual knowledge, exploring both the model's capacity to utilize context and its behavior when confronted with conflicting information.
**Contextual Influence on Generation:** Some works explore the model's capacity to utilize context, for example, Li et al. (2023) observe that larger models tend to rely on their parametric knowledge, even when faced with counterfactual contexts. This suggests that as models increase in size, they might grow more confident in their internal knowledge, potentially sidelining external context. However, the introduction of irrelevant contexts can still influence their outputs. The balance between controllability (relying on relevant context) and robustness (resisting irrelevant context) emerges as a challenge in LLM training. The study indicates that reducing context noise improves controllability, but the effect on robustness remains to be seen. In contrast, Zhou et al. (2023) propose prompt templates to guide LLMs towards more faithful text generation. Among these, opinion-based prompts prove most effective, indicating that when LLMs are queried about opinions, they adhere more closely to the context. Interestingly, the study finds that using counterfactual context enhances the model's faithfulness, while original context, sourced from platforms like Wikipedia, might induce a simplicity bias, leading LLMs to answer questions without heavily relying on the context. Chen et al. (2023) conduct a comprehensive evaluation of LLMs' ability to effectively utilize retrieved information. The study reveals that while retrieved documents can boost
LLM performance, the presence of noise in these documents can hinder it. Yue et al. (2023) investigate the nature of LLM-generated content in relation to provided references. They categorize the generated content as attributable, contradictory, or extrapolatory to the reference. Both fine-tuned models and instruction-based LLMs struggle to accurately evaluate the alignment between generated content and references, underscoring the challenge of ensuring that LLMs produce content consistent with the provided context.
**Handling Knowledge Conflicts:** A series of studies are interested in LLMs' behavior when confronted with conflicting information. Longpre et al. (2021) introduce the concept of knowledge conflicts, where the provided context contradicts the model's learned information. Their findings suggest that such conflicts lead to increased prediction uncertainty, especially for in-domain examples. Observations across models, ranging from T5-60M to 11B, indicate that larger models tend to default to their parametric knowledge. Moreover, there's an inverse relationship between retrieval quality and the tendency to rely on internal knowledge: the more irrelevant the evidence, the more the model defaults to its parametric knowledge. Chen et al. (2022) conduct experiments on typical ODQA models, including FiD and RAG. Their results show that FiD models rarely resort to memorization (less than 3.6% for NQ) compared to RAG models. Instead, FiD primarily grounds its answers in the provided evidence. Interestingly, when confronted with conflicting retrieved passages, models tend to fall back on their parametric knowledge. Xie et al. (2023) explore the behavior of recent LLMs, including ChatGPT and GPT-4. Contrary to findings on smaller LMs, they discover that LLMs can be highly receptive to external evidence, even if it contradicts their parametric memory, provided the external evidence is coherent and convincing. Additionally, LLMs exhibit a strong confirmation bias, especially when presented with evidence that aligns with their parametric memory. This bias becomes even more pronounced for widely accepted knowledge. In scenarios where no relevant evidence is provided, LLMs tend to express uncertainty. However, when presented with both relevant and irrelevant evidence, they demonstrate an ability to filter out irrelevant information.
In conclusion, while studies like Li et al. (2023) and Zhou et al. (2023) emphasize the challenges and potential solutions in making LLMs more context-aware, others like Yue et al. (2023) and Xie et al. (2023) highlight the inherent biases and limitations of LLMs. The overarching theme is the need for a balanced approach, where LLMs effectively leverage both their internal knowledge and external context to produce accurate and coherent outputs.
### _Causes of Factual Errors_
Understanding the root causes of these factual inaccuracies is crucial for refining these models and ensuring their reliable application in real-world scenarios. In this subsection, we delve into the multifaceted origins of these errors, categorizing them based on the stages of model operation: Model Level, Retrieval Level, Generation Level, and other miscellaneous causes. Table I shows examples of factuality errors caused by different factors.
#### 4.2.1 Model-level Causes
This subsection delves into the intrinsic factors within large language models that contribute to factual errors, originating from their inherent knowledge and capabilities.
**Domain Knowledge Deficit:** The model may lack comprehensive expertise in specific domains, leading to inaccuracies. Every LLM has its limitations based on the data it was trained on. If an LLM hasn't been exposed to comprehensive data in a specific domain during its training, it's likely to produce inaccurate or generalized outputs when queried about that domain. For instance, while an LLM might be adept at answering general science questions, it might falter when asked about niche scientific subfields (Lu et al., 2022).
**Outdated Information:** The model's dependence on older datasets can make it unaware of recent developments or changes. LLMs are trained on datasets that, at some point, become outdated. This means that any events, discoveries, or changes post-dating the last training update won't be known to the model. For example, ChatGPT and GPT-4 are both trained on data up to 2021.09 might not be aware of events or advancements after then.
**Immemorization:** The model does not always retain knowledge from its training corpus. While it's a misconception that LLMs "memorize" data, they do form representations of knowledge based on their training. However, they might not always recall specific, less-emphasized details from their training parameters, especially if such details were rare or not reinforced through multiple examples. For example, ChatGPT has pretrained with Wikipedia, but it still fail in answering some questions from NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), which are constructed from Wikipedia (Wang et al., 2023).
**Forgetting:** The model might not retain knowledge from its training phase or could forget prior knowledge as it undergoes further training. As models are further fine-tuned or trained on new data, there's a risk of "catastrophic forgetting" (Chen et al., 2020; Goodfellow et al., 2015; Wang et al., 2022; Zhai et al., 2023) where they might lose certain knowledge they were previously know. This is a well-known challenge in neural network training, where networks forget previously learned information when exposed to new data, which also happens in large language models (Luo et al., 2023).
**Reasoning Failure:** While the model might possess relevant knowledge, it can sometimes fail to reason with it effectively to answer queries. Even if an LLM has the requisite knowledge to answer a question, it might fail to connect the dots or reason logically. For instance, ambiguity in the input (Liu et al., 2023) can potentially lead to a failure in understanding by LLMs, consequently resulting in reasoning errors. In addition, Berglund et al. (2023) find the LLM traps in the reversal curse, for example, it knows that A is B's mother but fail to answer who is B's son. This is especially evident in complex multi-step reasoning tasks or when the model needs to infer based on a combination of facts (Kotha et al., 2023; Tan et al., 2023).
#### 4.2.2 Retrieval-level Causes
The retrieval process plays a pivotal role in determining the accuracy of LLMs' responses, especially in retrieval
augmented settings. Several factors at this level can lead to factual errors:
**Insufficient Information:** If the retrieved data doesn't provide enough context or details, the LLM might struggle to generate a factual response. This can result in generic or even incorrect outputs due to the lack of comprehensive evidence.
**Misinformation Not Recognized by LLMs:** LLMs can sometimes accept and propagate misinformation present in the retrieved data. This is especially concerning when the model encounters knowledge conflicts, where the retrieved information contradicts its pre-trained knowledge, or multiple retrieved documents contradict each other Li et al. (2015). For instance, Longpre et al. (2021) observed that the more irrelevant the evidence, the more likely the model is to rely on its intrinsic knowledge. Recent studies, such as Pan et al. (2023c), have also shown that LLMs are susceptible to misinformation attacks within the retrieval process.
**Distracting Information:** LLMs can be misled by irrelevant or distracting information in the retrieved data. For example, if the evidence mentions a "Russian movie" and "Directors", the LLM might incorrectly infer that "The director is Russian". Luo et al. (2023a) highlighted this vulnerability, noting that LLMs can be significantly impacted by distracting retrieval results. They further proposed instruction tuning as a potential solution to enhance the model's ability to sit through and leverage retrieval results more effectively.
Additionally, when dealing with long retrieval inputs, the models tend to show their best performance when processing information given at the beginning or end of the input context, according to Liu et al. (2023c). In contrast, the models are likely to experience a significant decrease in performance when they are required to extract relevant data from the middle of these extensive contexts.
**Misinterpretation of Related Information:** Even when the retrieved information is closely related to the query, LLMs can sometimes misunderstand or misinterpret it. While this might be less frequent when the retrieval process is optimized, it remains a potential source of error. For instance, in the ReAct study (Yao et al., 2023a), the rate of errors dropped significantly when the retrieval process was improved.
#### 4.2.3 Inference-level Causes
**Snowballing:** During the generation process, a minor error or deviation at the beginning can compound as the model continues to generate content. For instance, if an LLM misinterprets a prompt or starts with an inaccurate premise, the subsequent content can veer further from the truth (Varshney et al., 2023; Zhang et al., 2023b).
**Erroneous Decoding:** The decoding phase is crucial for translating the model's internal representations into human-readable content (Chuang et al., 2023; Massarelli et al., 2020). Mistakes during this phase, whether due to issues like beam search errors or suboptimal sampling strategies, can lead to outputs that misrepresent the model's actual "knowledge" or intention. This can manifest as inaccuracies, contradictions, or even nonsensical statements.
**Exposure Bias:** LLMs are a product of their training data. If they've been exposed more frequently to certain types of content or phrasing, they might have a bias toward generating similar content, even when it's not the most factual or relevant. This bias can be especially pronounced if the training data has imbalances or if certain factual scenarios are underrepresented (Felkner et al., 2023; Gallegos et al., 2023). The model's outputs, in such cases, reflect its training exposure rather than objective factuality. For example, research by Hossain et al. (2023) suggests that LLMs can correctly identify the gender of individuals who conform to the binary gender system, but they perform poorly when determining non-binary or neutral genders.
## 5 Enhancement
This section discusses methods to enhance factuality in LLMs across different phases, including LLM generation, retrieval-augmented generation, inference-phase enhancements, and domain-specific factuality improvements, outlined in Figure 2.
Table VIII provides a summary of enhancement methods and their respective improvements over baseline LLMs. It's essential to recognize that various research papers may employ distinct experimental settings, such as zero-shot, few-shot, or full settings. Consequently, when examining this table, it's important to note that performance metrics for different methods, even when evaluating the same metric on the same dataset, may not be directly comparable.
### _On Standalone LLM Generation_
When focusing on standalone LLM generation, enhancement strategies can be broadly grouped into three main categories:
_(1) Improving Factual Knowledge from Unsupervised Corpora_ (Sec 5.1.1): This involves refining the training data during pretraining, such as through deduplication and emphasizing informative words (Lee et al., 2022a). Techniques like TOPICPREFIX (Lee et al., 2022b) and sentence completion loss are also explored to enhance this approach.
_(2) Enhancing Factual Knowledge from Supervised Data_ (Sec 5.1.2): Examples in this category include supervised fine-tuning strategies (Chung et al., 2022; Zhou et al., 2023a) focus on finetuning with labelled data or integrating structured knowledge from such as knowledge graphs (KGs) or making precise adjustments to model parameters (Li et al., 2023d).
_(3) Optimally Eliciting Factual Knowledge from the Model_ (Sec 5.1.3, 5.1.4, 5.1.5): This category encompasses methods like Multi-agent collaboration (Du et al., 2023) and innovative prompts (Yu et al., 2023). Additionally, novel decoding methodologies, such as factual-nucleus sampling, are introduced to further improve factuality (Chuang et al., 2023; Lee et al., 2022b).
Some work (Liu et al., 2023b; Sun et al., 2023d) aim for improve the factuality of large multi-modal models, we choose to not emphasize them.
#### 5.1.1 Pretraining-based
Pretraining plays a pivotal role in equipping the model with the factual knowledge derived from the corpus. By emphasizing strategies during this phase, the model's inherent
factuality can be significantly enhanced. This approach is particularly crucial for addressing challenges like immemorization and forgetting.
**Initial Pretraining:** Methods employed during the foundational pretraining of the model.
Lee et al. (2022) develop two distinct tools for dedplicating training data, addressing the issue of redundant texts and long repeated substrings present in the training sets of current LLMs. These tools effectively reduce the recall of memorized texts in model outputs. Remarkably, they achieve similar or even superior accuracy with fewer training steps.
Sadeq et al. (2023) introduce a modification to the Masked Language Model (MLM) training objective used in the pretraining of LLMs. They discover that high-frequency words do not consistently contribute to the model's ability to learn factual knowledge. To address this, they devise a strategy that encourages the language model to prioritize informative words during unsupervised training. This is achieved by masking tokens more frequently based on their informative relevance. To quantify this relevance, they utilize Pointwise Mutual Information (PMI) (Fano and Hawkins, 1961), positing that words with elevated PMI values, in relation to their adjacent words, are likely to be more informatively pertinent. Experimental results indicate that this innovative approach significantly bolsters the efficacy of pretrained language models across various tasks, including factual recall, question answering, sentiment analysis, and natural language inference, in a closed-book setting.
**Continual Pretraining:** Iterative pretraining processes that allow the model to progressively refine and update its knowledge base.
Lee et al. (2022) introduce TOPICpreFIX as a pre-processing method and the sentence completion loss as training objective. Some factual sentences may be unclear when the LM training corpus is chunked, especially when these sentences contain pronouns (e.g., she, he, it). So they prepend TOPICpreFIX (e.g., Wikipedia document name) to sentences in the factual documents to transform each sentence into an independent factual statement. They also introduce the sentence completion loss, with the aim of enabling the model to capture facts from entire sentences rather than just focusing on the associations between sub-words. For implementation, they establish a pivot \(t\) for each sentence and require zero-masking for all token prediction losses before \(t\). This pivot is only necessary during the training phase. Experiments show that such methods can further reduce the factual errors than standard factual-domain adaptive training.
#### 5.1.2 Supervised Finetuning
Supervised fine-tuning leverages labeled datasets to refine the model's performance. This approach serves a dual purpose: it imparts specific task or knowledge base-oriented information to the model and addresses challenges like immemorization and forgetting. Several studies, such as Chung et al. (2022); Zhou et al. (2023), emphasize the pivotal role of supervised fine-tuning in eliciting the inherent knowledge of the base model, subsequently enhancing its reasoning capabilities.
**Continual SFT:** A cyclic fine-tuning approach, where the model undergoes consistent refinement using sequential sets of labeled data.
Moiseev et al. (2022) investigate how to inject structured knowledge from a knowledge graph into LLMs. The approach involves directly training T5 using triplets containing relationship knowledge. Previous methods often describe the triplets using prompts and then train LLMs with masked language model task, but some triplets are not easy to describe. They compare three knowledge-enhancement fine-tuning methods, which are MLM training on the C4 corpus (Raffel et al., 2020), masking the subject or object in KG triplets, and masking the subject or object in the KELM corpus (Agarwal et al., 2021). Experiments show that effectiveness of the latter two methods achieved better exact match scores in closed-book QA tasks (Jiang et al., 2019; Joshi et al., 2017; Kwiatkowski et al., 2019; Welbl et al., 2018). This demonstrates that training directly based on KG triplets is an effective way to inject knowledge into the model
Fig. 2: An overview of methods to enhance factuality in large language models. This encompasses three primary areas: enhancement techniques for pure LLMs, strategies for retrieval-augmented LLMs, and methods employed by domain-specific LLMs to boost their factual accuracy within their respective domains.
Sun et al. (2023c) introduce negative samples and contrastive learning to supervised finetuning process with MLE loss. The sources of these negative samples are either from the Knowledge Graph or are generated by large language models. While traditional contrastive learning can only function at the token or sentence level, the intrinsic value of span information cannot be overlooked, they employ Named Entity Recognition to extract this critical span data. The training process incorporates a blend of MLE loss, standard contrastive learning loss, and span-based contrastive learning loss, with the parameters finely tuned to optimize results. Experiments indicate that the method delivers performance comparable with SOTA KB-based methods but offers significant benefits in efficiency and scalability.
Yang et al. (2023b) presents a development framework for Knowledge Graph-enhanced Large Language Models (KGLLMs), drawing from established technologies. They detail various enhancement strategies, including a before-training enhancement that refines input quality by integrating factual data; a during-training enhancement that synergizes textual and structural knowledge, utilizing tools like graph neural networks and attention mechanisms; multi-task learning which focuses on knowledge-guided pre-training tasks to bolster the factual knowledge acquisition of LLMs; and a post-training enhancement that fine-tunes LLMs for domain-specific tasks using knowledge-rich data. Additionally, the significance of prompt learning in LLMs is highlighted, emphasizing the importance of choosing appropriate prompt templates. They also suggest knowledge graphs as a valuable resource for crafting these templates to harness domain-specific knowledge.
**Model Editing:** Instead of directly finetuning the model, model edit is a more precise approach to enhance the model's factuality. By editing specific areas that are related to the fact, the model can correctly express that fact without compromising other unrelated knowledge. Current editing methods (Yao et al., 2023b) can be categorized into weight-preserved and weight-modified paradigms. When
\begin{table}
\begin{tabular}{l c c c||c c c} \hline \hline Reference & Dataset & Metrics & Baselines \(\rightarrow\) Theirs & Dataset & Metrics & Baselines \(\rightarrow\) Theirs \\ \hline Li et al. (2022c) & NQ & EM & 34.5 \(\rightarrow\) 44.35 (T5 11B) & GSM8K & ACC & 77.0 \(\rightarrow\) 85.0 (ChatGPT) \\ \hline Yu et al. (2023) & NQ & EM & 20.9 \(\rightarrow\) 28.0 (InstructGPT) & TriviaQA & EM & 57.5 \(\rightarrow\) 59.0 (InstructGPT) \\ & & & WebQA & EM & 18.6 \(\rightarrow\) 24.6 (InstructGPT) \\ \hline Chuang et al. (2023) & FACTOR News & ACC & 58.3 \(\rightarrow\) 62.0 (LLMa-7B) & FACTOR News & ACC & 61.1 \(\rightarrow\) 62.5 (LLMa-13B) \\ & FACTOR News & ACC & 63.8 \(\rightarrow\) 65.4 (LLMa-33B) & FACTOR News & ACC & 63.6 \(\rightarrow\) 66.2 (LLMa-65B) \\ & FACTOR Wiki & ACC & 58.6 \(\rightarrow\) 62.2 (LLMa-7B) & FACTOR Wiki & ACC & 62.6 \(\rightarrow\) 66.2 (LLMa-13B) \\ & FACTOR Wiki & ACC & 69.5 \(\rightarrow\) 70.3 (LLMa-33B) & FACTOR Wiki & ACC & 72.2 \(\rightarrow\) 72.4 (LLMa-65B) \\ & TruthfulQA & \%Truth * Info & 32.4 \(\rightarrow\) 44.6 (LLMa-13B) & TruthfulQA & \%Truth * Info & 34.8 \(\rightarrow\) 49.2 (LLMa-65B) \\ \hline Li et al. (2022b) & TruthfulQA & \%Truth * Info & 32.4 \(\rightarrow\) 44.4 (LLMa-13B) & TruthfulQA & \%Truth * Info & 31.7 \(\rightarrow\) 36.7 (LLMa-33B) \\ & TruthfulQA & \%Truth * Info & 34.8 \(\rightarrow\) 43.4 (LLMa-65B) & & & \\ \hline Li et al. (2023d) & NQ & ACC & 46.6 \(\rightarrow\) 51.3 (LLMa-7B) & TriviaQA & ACC & 89.6 \(\rightarrow\) 91.1 (LLMa-7B) \\ & MMLU & ACC & 35.7 \(\rightarrow\) 40.1 (LLMa-7B) & TruthfulQA & \%Truth * Info & 32.5 \(\rightarrow\) 65.1 (Alpaca) \\ & TruthfulQA & \%Truth * Info & 26.9 \(\rightarrow\) 43.5 (LLMa-7B) & TruthfulQA & \%Truth * Info & 51.5 \(\rightarrow\) 74.0 (Vicuna) \\ \hline Cohen et al. (2023b) & LAMA & F1 & 50.7 \(\rightarrow\) 80.8 (ChatGPT) & TriviaQA & F1 & 56.2 \(\rightarrow\) 82.6 (ChatGPT) \\ & NQ & F1 & 60.6 \(\rightarrow\) 79.1 (ChatGPT) & PopQA & F1 & 65.2 \(\rightarrow\) 85.4 (ChatGPT) \\ & LAMA & F1 & 42.5 \(\rightarrow\) 79.3 (GPT-3) & TriviaQA & F1 & 46.7 \(\rightarrow\) 77.2 (GPT-3) \\ & NQ & F1 & 52.0 \(\rightarrow\) 78.0 (GPT-3) & PopQA & F1 & 43.7 \(\rightarrow\) 77.4 (GPT-3) \\ \hline Weller et al. (2023) & TriviaQA & QUIP & 31.6 \(\rightarrow\) 33.6 (ChatGPT) & NQ & QUIP & 32.8 \(\rightarrow\) 34.3 (ChatGPT) \\ & HotpotQA & QUIP & 28.3 \(\rightarrow\) 29.2 (ChatGPT) & ELI5 & QUIP & 24.1 \(\rightarrow\) 26.5 (ChatGPT) \\ & TriviaQA & EM & 77.8 \(\rightarrow\) 78.8 (ChatGPT) & NQ & EM & 32.9 \(\rightarrow\) 34.8 (ChatGPT) \\ & HotpotQA & F1 & 35.7 \(\rightarrow\) 36.6 (ChatGPT) & ELI5 & R-L & 22.7 \(\rightarrow\) 21.7 (ChatGPT) \\ \hline Dhuliawala et al. (2023) & MultiSpanQA & F1 & 39.0 \(\rightarrow\) 48.0 (LLMa 65B) & - & FactScore & 55.9 \(\rightarrow\) 71.4 (LLMa 65B) \\ & & & & - & Avg. \# facts & 16.6\(\rightarrow\) 12.3 (LLMa 65B) \\ \hline Yao et al. (2023a) & HotpotQA & EM & 28.7 \(\rightarrow\) 35.1 (PaLM-540B) & FEVER & ACC & 57.1 \(\rightarrow\) 62.0 (PaLM-540B) \\ \hline Jiang et al. (2023b) & 2WikiMultihopQA & EM & 28.2 \(\rightarrow\) 51 (ChatGPT) & StrategyQA & EM & 72.9 \(\rightarrow\) 77.3 (ChatGPT) \\ & WikiAsp & UniEval & 47.1 \(\rightarrow\) 53.4 (ChatGPT) & ASQA & EM & 33.8 \(\rightarrow\) 41.3 (ChatGPT) \\ & & ASQA-hint & EM & 40.1 \(\rightarrow\) 46.2 (ChatGPT) \\ \hline Izacard et al. (2022) & MMLU & ACC & 42.4 \(\rightarrow\) 56.3 (T5-770M) & MMLU & ACC & 50.4 \(\rightarrow\) 59.9 (T5-3B) \\ & & & & MMLU & ACC & 54 \(\rightarrow\) 65.8 (T5-11B) \\ \hline Shi et al. (2023) & MMLU & ACC & 68.3 \(\rightarrow\) 73.2 (Codex) & NQ & EM & 40.6 \(\rightarrow\) 45.5(Codex) \\ & & & & TriviaQA & EM & 73.6 \(\rightarrow\) 77.3 (Codex) \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: Performance of Select Factuality Enhancement Methods. The table displays performance metrics on various datasets for both baseline models and their enhanced counterparts, denoted with a \(\rightarrow\). Due to space constraints, only a subset of datasets, metrics, and models from each work is presented.
modifying the model's weight, KN (Dai et al., 2022) and ROME (Meng et al., 2022) first analyze representations to locate those underlying factual errors and then directly update the relevant weights. Meanwhile, KE (De Cao et al., 2021) and MEND (Mitchell et al., 2022) employ a hypernetwork to learn the necessary weight changes. While effective, the robustness and generalization of directly updating weights remains an open question.
Li et al. (2023) present a technique known as Inference-Time Intervention (ITI) designed to boost the factual accuracy of large language models. ITI adjusts model activations during inference to enhance the truthfulness of responses. This method, categorized under activation editing, is both adjustable and less intrusive, setting it apart from weight editing techniques. Drawing inspiration from prior research, they utilize steering vectors, which are proven effective for style transfer, to guide model activations. They detail a multi-step model selection process involving the calibration of intervention strength hyperparameters, pinpointing truth-telling related heads, and determining their truth-telling directions. The TruthfulQA dataset serves as the foundation for training and validation, with a rigorous 2-fold cross-validation employed to prevent data leakage. Experiments on the TruthfulQA benchmark demonstrate that ITI improves Alpaca's (Taori et al., 2023) truthfulness from 32.5% to 65.1%.
#### 5.1.3 Multi-Agent
Engaging multiple models in a collaborative or competitive manner, enhancing factuality through their collective rowness, helps in the immemorization and reasoning failure problems.
Du et al. (2023) propose an approach to enhance the performance of language models by treating different LLMs as intelligent agents engaged in multi-agent debates. In this method, multiple instances of language models present and debate their respective answers and reasoning processes, ultimately reaching a consensus on the final answer after multiple rounds of debate. If the debate's answers fail to converge, prompts are modified to reduce the stubbornness of the two agents. This approach has been demonstrated to significantly improve mathematical and reasoning abilities across various tasks while enhancing the factual accuracy of generated content. Moreover, this method can be directly applied to existing black-box models, making it applicable to all research tasks using the same prompts.
Cohen et al. (2023) develop a fact-checking mechanism. Drawing parallels to a scenario where a witness is interrogated for the veracity of their claims, they utilize a LLM to gather statements from a QA dataset, which are either factually correct or incorrect. During the statement generation, the model is provided with a golden answer, prompting it to produce both accurate and inaccurate statements, inherently labeling each statement. For every QA pair, another LLM, acting as an interrogator, generates a series of questions. A separate LLM, playing the role of the respondent, answers these queries. This iterative questioning and answering continues until the interrogator is satisfied, culminating in a conclusion. The authors conduct experiments on datasets like LAMA (Petroni et al., 2019), PopQA (Mallen et al., 2023), NaturalQA (Kwiatkowski et al., 2019), and TriviaQA (Joshi et al., 2017). The precision is the portion of incorrect claims, out of the claims rejected by the examiner and the recall is the portion of incorrect claims rejected by the examiner, out of all the incorrect claims. Measured by the F1 score, the LM vs LM method consistently outperforms the baseline by a significant margin, ranging from ten to over twenty points across all datasets.
#### 5.1.4 Novel Prompt
Introducing innovative or tailored prompts to extract more factual and precise responses from the LLM, can better assist the model elicit the knowledge in its parameters and improve the reasoning ability.
Yu et al. (2023) introduce a novel approach called Generate-then-Read (GENREAD). Document retrievers are replaced by LLM generators. In this article, the LLM is prompted to generate multiple contextual documents on a given question. The authors clustered these document embeddings, and sampled documents from different clusters to ensure diversity of contextual documents. With these generated in-context demonstrations, LLMs achieved better results on knowledge-intensive tasks than retrieving from external corpus such as Wikipedia.
Weller et al. (2023) introduce a metric called QUIP-Score to measure the dependency on pre-trained data. They index Wikipedia to swiftly determine the dependency of an LLM's response. By using specific prompts, like "Based on evidence from Wikipedia:" they aim to evoke the LLM's recall of content from its training dataset. In addition to grounding prompts, they also introduce anti-grounding prompts to encourage the LLM to respond without referencing its training data, for instance, "Respond without using any information from Wikipedia." The motivation behind this approach is the belief that guiding the LLM to reference more of the knowledge it acquires during pre-training can reduce the generation of incorrect information. To quantify this grounding, they propose the QUIP-Score metric to gauge the similarity between the model's generated content and the most relevant content in Wikipedia. In their experiments conducted on datasets like TQ, NQ, HotpotQA, and ELI5, the results show that while adding the said prompt doesn't significantly improve traditional QA metrics, it notably boosts the scores on the QUIP metric.
Khot et al. (2023) present Decomposed Prompting. Complex tasks are broken down into multiple simpler tasks via prompting and then it can be addressed by task-specific LLMs. For instance, for tasks involving extremely long input sequences, this technique systematically decomposes the input into shorter sequences for individual processing. Notably, the authors observed that when combined with a retrieval module, this approach significantly enhances performance on open domain multi-hop QA tasks.
Dhuliawala et al. (2023) present Chain-of-Verification (CoVe) to reduce factual errors.The CoVe strategy involves the model initially crafting a response, subsequently formulating verification queries to assess its initial draft, independently responding to these queries to maintain unbiased answers, and ultimately producing a validated reply. The CoVe method encompasses four pivotal stages: 1) Drafting an initial reply based on a query using the LLM, 2) Generating verification questions from the query and initial answer
to pinpoint potential errors, 3) Responding to each verification question and comparing these answers with the initial response to detect discrepancies, and 4) If discrepancies are found, producing a revised answer that integrates the verification outcomes. The entire procedure is executed by prompting the same LLM differently to achieve the intended results. Experiments show that Chain-of-Verification can reduce errors in diverse tasks, including list-based questions from Wikidata (Vrandecic and Krotzsch, 2014), closed book MultiSpanQA (Li et al., 2022) and longform text generation (Min et al., 2023).
#### 5.1.5 Decoding
Decoding methodologies, such as beam search and nucleus sampling, play a crucial role in directing the model to produce outputs that are both factual and coherent. By refining the decoding process, challenges like snowballing errors or erroneous decoding, as detailed in Sec 4.2.3, can be effectively addressed.
Lee et al. (2022) propose a new decoding sampling algorithm called factual-nucleus sampling that achieves a better trade-off between generation quality and factuality when compared to prevailing decoding algorithms. They postulate that the randomness of sampling is more harmful to factuality when applied to the generation of the latter portion of a sentence as opposed to its initial segment. So the factual-nucleus sampling algorithm, an adaptation of nucleus sampling, can dynamically adjust the 'nucleus' probability throughout the generation of each sentence, progressively reducing randomness with each successive generation step. The \(\omega\)-bound parameter is provided to prevent the p-value from becoming too small and hurting diversity. Experimental results show that a factual-nucleus sampling algorithm can improve the factuality of generation while maintaining generation quality, e.g., diversity and repetition.
Chuang et al. (2023) propose "Decoding by Contrasting Layers" to mitigate these hallucinations. This approach leverages the differences in logits obtained from projecting the later layers versus earlier layers to the vocabulary space, taking advantage of the known localization of factual knowledge in LLMs. The results of this study demonstrate that DoLa consistently enhances the truthfulness of LLM-generated content across various tasks, such as multiple-choice and open-ended generation tasks, showcasing its potential to significantly improve the reliability of LLMs in generating accurate and truthful facts.
### _On Retrieval-Augmented Generation_
Retrieval-Augmented Generation (RAG) has emerged as a widely adopted approach to address certain limitations inherent to standalone LLMs, such as outdated information and the inability to memorize (Chase, 2022; Liu, 2022). These challenges are elaborated upon in Sec 4.2.1. Yet, while RAG offers solutions to some issues, it introduces its own set of challenges, including the potential for insufficient information and the misinterpretation of related data, as detailed in Sec 4.2.2. This subsection delves into various strategies devised to mitigate these challenges. Within the realm of retrieval-augmented generation, enhancement techniques can be broadly categorized into several pivotal areas:
_(1) The Normal Setting of Utilizing Retrieved Text for Generations_ (Sec 5.2.1):
_(2) Interactive Retrieval and Generation_ (Sec 5.2.2): Examples here include the integration of Chain-of-Thoughts steps into query retrieval (He et al., 2022) and the use of an LLM-based agent framework that taps into external knowledge APIs (Yao et al., 2023).
_(3) Adapting LLMs to the RAG Setting_ (Sec 5.2.3): This involves methods like the one proposed by Peng et al. (2023), which combines a fixed LLM with a plug-and-play retrieval module. Another notable approach is REPLUG (Shi et al., 2023), a retrieval-augmented framework that treats the LLM as a black box and fine-tunes retrieval models using language modeling scores.
_(4) Retrieving from Additional Knowledge Bases_ (Sec 5.2.5 and Sec 5.2.4): This category includes methods that retrieve from external parametric memories (Chen et al., 2023) or knowledge graphs (Zhang et al., 2023) to enhance the model's knowledge base.
#### 5.2.1 Normal RAG Setting
**Workflow of Normal RAG Setting:** A normal RAG setting works by retrieving external data and passing it to a LLM during the generation phase. We follow the framework proposed by LlamaIndex (Liu, 2022) and LangChain (Chase, 2022) to decouple the process into the following modules and steps:
_(1) Document leaders_ are used to load documents from various sources. These loaders can fetch different types of documents (HTML, PDF, code) from various locations (private s3 buckets, public websites).
_(2) Document transformers_ are employed to extract relevant parts of the documents. This may involve splitting or chunking large documents into smaller chunks. Different algorithms for this task are employed, optimized for specific document types like code or markdown.
_(3) Text embedding models_ are employed to capture the semantic meaning of the text.
_(4) Vector stores_ are employed to efficiently store and search the embeddings.
_(5) Retrievers_, such as the Parent Document Retriever, Self Query Retriever, and Ensemble Retriever, are used to retrieve the data from the database.
Here, the Parent Document Retriever allows for the creation of multiple embeddings per parent document to retrieve smaller chunks while maintaining a larger context. The Self Query Retriever separates the semantic part of a query from other metadata filters, allowing for more accurate retrieval. The Ensemble Retriever enables the retrieval of documents from multiple sources or using different algorithms.
Borgeaud et al. (2022) suggest scaling the size of the text database for retrieval as a complementary path to scaling language models. There is a pre-collected text database with a total of over 5 trillion tokens. These chunks are stored in the form of key-value pairs, with each chunk as a unit, and similarity retrieval is performed on \(k\)-nearest neighbours from key-value database using the \(L_{2}\) distance on Bert embeddings. The input sequence is splited into chunks, Retrieval Transformer (Retro) model retrieves text similar to the previous chunk to improve the predictions in the
current chunk. The model calculate cross-attention between the input text and the retrieved text chunks to generate better answers. With only 25x fewer parameters of GPT-3, its performance on Pile is quite comparable.
Lazaridou et al. (2022) present a method that capitalizes on the few-shot capabilities of large-scale language models to enhance their grounding in factual and current information. Drawing from semi-parametric language models, the approach conditions LMs using few-shot prompts based on data sourced from Google Search. For any given query, the method retrieves pertinent documents from the web, extracts the top 20 URLs, and processes them to obtain clear text. These documents are segmented into paragraphs, and the most relevant ones are chosen using TF-IDF based on their similarity to the query. The LMs are then conditioned using few-shot prompts that incorporate the retrieved paragraphs. This k-shot prompting technique is augmented with an evidence paragraph, creating a prompt structure that encompasses evidence, query, and response. The method also involves generating multiple answers from the model and reranking them using different probabilistic factorizations. The experimental results indicate that by conditioning on retrieved evidence, the 7B Gopher LM (Rae et al., 2022a) surpassed the performance of the 280B Gopher LM, with relative improvements reaching up to 30% on the NQ (Kwiatkowski et al., 2019) dataset.
#### 5.2.2 Interactive Retrieval
While retrieval systems are designed to source relevant information, they may occasionally fail to retrieve accurate or comprehensive data. Additionally, LLMs might struggle to recognize, or even be misled by, the retrieved content, as detailed in Sec 4.2.2. Implementing an interactive retrieval mechanism can address these challenges, aiding in sourcing more appropriate information and guiding the LLM towards improved content generation. In this subsubsection, we explore methods that employ the Chain-of-Thoughts and Agents mechanisms to achieve effective interactive retrieval.
**CoT-based Retrieval:** In recent studies, there is a growing interest in integrating Chain-of-Thoughts (Wei et al., 2022) steps into query retrieval. He et al. (2022) introduce a method that generates multiple reasoning paths and their corresponding predictions for each query. This process involves retrieving relevant knowledge from external sources like Wikidata (Vrandecic and Krotzsch, 2014), WordNet (Miller, 1992), and ConceptNet (Speer et al., 2017). The faithfulness of each reasoning path determines based on entailment scores, contradiction scores, and MPNet similarities (Song et al., 2020) with the retrieved knowledge. The prediction with the highest faithfulness score is chosen as the final result. This method demonstrates superior performance in tasks such as commonsense reasoning (Geva et al., 2021b), temporal reasoning (Jia et al., 2018), and tabular reasoning (Gupta et al., 2020b), outperforming the baseline CoT reasoning and self-consistency methods (Wang et al., 2023e). On a related note, Trivedi et al. (2023) propose IRCoT, an innovative retrieval technique that interweaves the CoT process. In this approach, each generated sentence during the CoT combines with the question to form a retrieval query. The subsequent reasoning step then produces by the Language Model using both the retrieval results and the prior reasoning. This interleaving method finds to enhance the performance of both retrieval and CoT in Open-domain QA. Experiments proves that this is beneficial for models of different sizes, including GPT-3(175B) (Brown et al., 2020) and Flan-T5 (Chung et al., 2022) families.
FLARE (Jiang et al., 2023b) is a dynamic solution to address the limitations of previous RAG works which either retrieve only once at the onset of generation or do so based on fixed intervals. Single retrievals are insufficient for long-form generation due to the evolving information needs during the process, and fixed intervals for previous token queries can be inappropriate, FLARE determines "when and what to retrieve" dynamically. The decision of "when" is based on whether the current sentence contains a token with a generation probability below a set threshold. If it doesn't, the sentence is accepted and moves to the next generation step; otherwise, retrieval augmented generation occurs. For the "what", the current sentence is used as a query. To address the challenge of low-probability tokens affecting retrieval accuracy, two solutions are proposed: masking low-probability tokens, and using LLM for generation of these tokens as queries. Testing on tasks like Multihop QA (Ho et al., 2020), Commonsense Reasoning (Geva et al., 2021a), Long-form QA (Stelmakh et al., 2022), and Open-domain Summarization (Hayashi et al., 2021) using GPT3.5, results show FLARE outperforms baselines, with both query generation methods showing comparable performance.
**Agent-based Retrieval:** Using an LLM-based agent framework that leverages external knowledge APIs as tools or requesting such APIs as actions.
Yao et al. (2023a) present a new framework named ReAct that integrates Chain-of-Thoughts reasoning with action. Through in-context learning, the LLM's CoT output is transformed into descriptions of reasoning processes and action behaviors. Subsequently, these action descriptions are standardized and executed, with the results being incorporated into the next prompt. In terms of results, for tasks like Fact checking and QA, while CoT has 14% of its correct answers containing incorrect reasoning steps or facts, ReAct only has 6%. In the incorrect answers, 56% of CoT's errors were due to errors in reasoning steps or facts, whereas ReAct has no factual errors. However, ReAct have 23% of its errors resulting from search errors.
Shinn et al. (2023) propose a prompt engineering framework named Reflextion to enable LLMs to reflect on and correct previous errors. They use linguistic feedback to strengthen an agent's actions instead of adjusting model weights. Specifically, the Reflextion Agent, an LLM, first interacts with its environment by generating an "Action" via ReAct (Yao et al., 2023a) or Chain-og-Thoughts (Wei et al., 2022) in few-shot scenarios, which results in an "Observation". This "Observation", whether a reward, error message, or natural language feedback, provides insights on the Agent's current "Action". When the Agent receives a failure signal, it triggers a self-reflextion mechanism, utilizing the LLM, to summarize the reasons for the failure into its Memory module, creating a "long-term memory". On subsequent generations, the Agent can review all past reflextion memories to prevent mistakes. Experimental findings indicate that Reflextion achieves a 10-20% performance
increase over the baseline methods ReACT and CoT on datasets like AlfWorld [16], HotPotQA [23], and HumanEval [17].
Varshney et al. (2023) introduce a comprehensive framework aimed at reducing factual inaccuracies. They use models to recognize entities and generate questions, we recognize these models as tools for the LLM-based agent. During the generation process, pivotal concepts encompassing names, geographical locales, and temporal references are ascertained within the contextual sentence employing entity extraction, keyword distillation, or directives to the LLM. The logit output values corresponding to these discerned concepts act as surrogates for confidence estimates. Should these values fall beneath a predetermined threshold, the mechanism then fetches a pertinent document to corroborate the generated information. The query methodology employed for such a retrieval hinges on posing a binary (Yes/No) query to the LLM. In scenarios where the validation is unsuccessful, the framework directs the model to rectify the erroneous output, either by omission or by substitution, drawing upon the knowledge from the consulted document. Empirical evaluations, specifically in the domain of article generation, underscore the efficacy of the delineated approach. Notably, factual error rates exhibited by GPT-3 witnessed a substantial decline, from 47.5% to a mere 14.5%, when subjected to this methodology. The diagnostic facet of their approach manifests an 80% recall, and the rectification mechanism adeptly rectifies 57.6% of the factually incorrect outputs that were accurately pinpointed.
#### 5.2.3 Retrieval Adaptation
Recent research [14, 23] has highlighted that merely using the retrieved information in LLMs doesn't always enhance their ability to answer factual questions. This underscores the importance of enabling LLMs to better adapt to the retrieved data to produce more accurate content. In this section, we delve into various strategies that facilitate this adaptation. Specifically, we explore three methodological approaches: prompt-based methods, SFT-based methods, and RLHF-based methods.
**Prompt-based:** Leveraging prompts to navigate the retrieval process, ensuring the extraction of pertinent and factual data.
Peng et al. (2023) introduce the LLM-Augmenter system, a system using a fixed LLM combined with a plug-and-play retrieval module to help the LLM perform better in tasks that are particularly sensitive to factual errors. This system enhances its performance by enabling the LLM to use a series of modules (e.g. allowing the LLM to interact with external knowledge) to assist the LLM in generating results grounded in evidence. And they use automated feedback generated by utility functions (e.g., the factuality score of a LLM-generated response) to modify LLM's candidate response options. The author evaluates the system's performance on information-seeking dialog [15] and Wiki QA [17] and experiments show that the system can significantly reduce ChatGPT's errors without sacrificing the fluency and informativeness of the generated content.
**SFT-based:** Optimizing the LLM or retrieval system through training to enhance the alignment between generation tasks and the retrieved content.
Izacard et al. (2022) introduce a comprehensive architecture named ATLAS which is composed of the Contriever [13] retriever of the dual-encoder architecture and the T5 [15] language model with Fusion-in-Decoder [13]. The training objectives for the retriever consist of four components: Attention Distillation [13], where the retriever is trained on the average attention scores from the language model for each article; End-to-end training of Multi-Document Reader and Retriever (EMDR2) [20], which involves using the query and the top-K retrieved articles from the current retriever as input and loss computation against the standard answers to train the retriever; Perplexity Distillation (PDist), where the retriever is trained to predict how much the perplexity of standard answers would improve for each document; Leave-one-out Perplexity Distillation (LOOP), which trains the retriever to predict how much worse the prediction of the language model gets when removing a document from the top-K results. The LM's training objectives consist of three parts: prefix language modeling, masked language modeling, and title to section generation. Besides, they optimize and accelerate retriever training using techniques such as full index update, Re-ranking, and Query-side fine-tuning. Atlas achieves notable accuracy on Natural Questions using only 64 examples, outperforming a 540B model with 50x fewer parameters.
Shi et al. (2023) introduce REPLUG, a retrieval-augmented framework that considers the LLM as a black box, freezes its parameters and tunes retrieval models with supervision signals using language modeling scores. In this framework, the input context and the document are encoded through the dual encoder architecture, cosine similarity is then calculated to retrieve related documents. The likelihood for each retrieved document and the language model scores are computed. Then they can update the retrieval model parameters by minimizing the KL divergence between retrieved document likelihood and the language model"s score distribution. Ablation experiments demonstrate that this method significantly improves the performance of the original language models and the improvements are not coming from ensembling random documents.
Luo et al. (2023) focus on utilizing instruction-tuning to denoise the retrieval results. They gather retrieval outcomes from various search APIs and domains, leading to the creation of a new search-grounded dataset. This dataset encompasses instructions, grounding information, and responses. Notably, it includes both pertinent results and those that are unrelated or disputed. The model needs to learn to ground on useful search results. After fine-tuning the LLMa-7B model on this dataset, the resulting model, named SAIL-7B, exhibits superior performance in transparency-sensitive tasks such as open-ended QA and fact-checking.
**RLHF-based:**Menick et al. (2022) use reinforcement learning from human preferences (RLHP) to train a 280 billion parameter model named GopherCite that generate answers along with high quality supporting evidence. They firstly collect data from existing models and have it rated
by humans. The data is used for fine-tuning and reward model training. A supervised fine-tuning model is trained to produce accurate quotes with proper syntax. A reward model is created to rank model outputs based on overall quality. Finally, a reinforcement learning policy is optimized to align model behavior with human preferences, improving quoting performance. The model may decline to answer when the reward model score is too low. According to human evaluation, the model achieves better supported and plausible rating on the subset of Natural Questions dataset [16] than the previous SOTA (FiD-DPR) [15].
#### 5.2.4 Retrieval on External Memory
Currently, most LLMs enhance their factuality by retrieving knowledge in the form of text snippets from external storage and incorporating them into the context. Some researchers are exploring the storage of knowledge in non-textual forms and integrating this knowledge into models through specialized methods.
Li et al. (2022c) store knowledge in the form of key-value pairs in memory. The key is obtained by encoding knowledge using a Doc Retrieval Embedder, while the value is encoded using a Transformer encoder. Similar to traditional retrieval-based LLMs, the model encodes the input using a Query retrieval embedder and retrieves knowledge from memory. The retrieved value is then integrated into the model's multi-head attention layer through cross-attention to enhance factuality.
G-MAP [17] does not explicitly store knowledge in storage but uses a general domain PLM as external memory. To mitigate catastrophic forgetting during adaptive pretraining, G-MAP introduces a frozen-parameter general domain PLM (PLM-G) during the fine-tuning of the domain-specific PLM (PLM-D). During fine-tuning, the input is provided to both PLM-G and PLM-D. The hidden states from each layer of PLM-G are stored in a cache, and a Memory-Augmented Strategy is used to extract hidden states from certain layers, which are then concatenated and integrated into PLM-D's Memory-Augmented Layer. The study also compared four Memory-Augmented Strategies, with Chunk-based Gated Memory Transfer performing the best.
Fine-tuning methods and some model editing methods store new knowledge in new model parameters through continual pretraining. The difference is that fine-tuning methods store many pieces of knowledge in a matrix parameter, while model editing establishes a new neuron for each piece of knowledge.
Houlsby et al. (2019) propose to add Adapter modules to fine-tune pre-trained deep learning models on a new task with minimal changes to the original model by inserting small, trainable modules between existing layers. During fine-tuning, the main body of the pre-trained model is frozen, and the Adapter module learns knowledge specific to downstream tasks. The Adapter method reduces the computational requirements for model fine-tuning while enhancing the model's factuality for specific domains. However, the addition of the Adapter module also increases the overall parameter count of the model, somewhat reducing the model's inference performance.
To enhance the model's understanding of entities and thereby improve factuality, KALA [14], EaE [15], and Mention Memory [16] store encoded entities in external memory. During generation, the retrieved entity embeddings are integrated into the model layers.
Kang et al. (2022) introduced KALA to reduce overhead and catastrophic forgetting during Adaptive Training. KALA not only establishes a memory for entities and their encodings but also uses a KG to store relationships between these entities. For a given mention in the input, the corresponding entity is first determined. Based on the KG, the encoding of this entity and its neighboring entities is retrieved from memory. Through GNN weighted aggregation, the encoding for this mention's corresponding entity is obtained. Finally, the Knowledge-conditioned Feature Modulation (KFM) is introduced in the model layer, integrating the encoding result into the representation of all tokens involved in the mention.
Fevry et al. (2020) introduce mention detection, entity linking, and MLM during model training. The model queries the top 100 entity embeddings from the entity storage that is closest to the current mention and integrates them using attention.
de Jong et al. (2022)'s TOME model is an improvement over the EaE model. Instead of storing entity embeddings in memory, TOME stores the embeddings of entity mentions. For marked entity mentions in the input, TOME retrieves all related entity mention embeddings from memory and integrates them into the model through a memory attention layer.
Similar to the three methods mentioned above, knowledge plugin [18] also introduce entity-related knowledge. However, instead of integrating the knowledge directly into the model layers, they utilize a pre-trained mapping network. This network maps the entity embeddings to the token embedding space of the Pre-trained Language Model (PLM). Ultimately, the mapped entity embeddings are injected at the input embedding level, facilitating the knowledge insertion process.
To further tackle the factual correction task [19], Gao et al. (2023a) explore the integration of LLMs, such as GPT-3, with search engines to improve their precision and memory. The goal is to use search engines to search for evidence and correct sentences generated by LLMs. The proposed method RAR is that, for each input sentence, a set of questions is generated, and web pages are searched to verify the consistency of information with the input sentence. The paper evaluates the modifications based on attribution and preservation criteria, with both manual and automatic verification methods. The primary evaluation metric is F1, considering both attribution and preservation aspects to assess the effectiveness of the approach in enhancing LLM-generated sentences.
Chen et al. (2023a) is a follow-up work to [10] and [18]. Similar to EFEC, this paper fine-tunes a T5 model to serve as an editor, but it introduces negative samples during fine-tuning. The model PURR is trained to take user questions, perform Google searches to retrieve the top 5 web page summaries (used as positive samples), and generate noise by replacing some
content in these positive samples using the language model. A sequence-to-sequence model is then trained to correct the noisy sentences back to their correct versions. This approach differs from EFEC, which used a mask-and-fill approach. PURR represents an improvement over EFEC, focusing on directly training a language model to edit incorrect sentences into correct ones using Google search for generating positive samples, ultimately leading to increased F1 scores.
#### 5.2.5 Retrieval on Structured Knowledge Source
We discuss studies that retrieves on structured repositories, such as knowledge graphs and databse, to source factual data during generation in this subsubsection.
Zhang et al. (2023) utilize knowledge graphs (KG) for retrieval to tackle factual errors. They observe that there can be inconsistencies between a user's request and the content in the KG. For instance, when a user mentions a full name, the KG might only have its abbreviation, leading to imperfect retrieval results. To rectify this, they propose a method to rephrase the user's request. Their approach involves generating an SQL query based on the user's input and the database metadata using a LLM. And then they query the database and ask the LLM to identify which entity in the sentence corresponds to an entity in the database, thereby creating a mapping. Using the entity names from the database, the LLM is prompted to reformulate the question. If a database query results in multiple rows for a selected column and item, a new question is generated using a greedy approach, prompting the user for more specific details until a conclusive answer is reached. Experiments show that this method exhibits notable enhancements in comparison to contemporary state-of-the-art techniques in mitigating language model inaccuracies.
StructGPT (Jiang et al., 2023) is a general prompt framework to support LLMs reasoning on structured data (e.g., KG, Table, and Database). In general, the core of this framework is that they construct the specialized interfaces to collect relevant evidence from structured data (i.e., reading), and let LLMs concentrate on the reasoning task based on the collected information (i.e., reasoning). Specially, they propose an invoking-linearization-generation procedure to support LLMs in reasoning on the structured data with the help of the interfaces. By iterating this procedure with provided interfaces, our approach can gradually approach the target answers to a given query. Experiments conducted on three types of structured data, including KGQA, TableQA, and Text-to-SQL, show that StructGPT greatly improves the performance of LLMs, under the few-shot or zero-shot settings.
Baek et al. (2023) propose to inject the factual knowledge from knowledge graphs into (large) language models (up to GPT-3.5), by retrieving the relevant facts from knowledge graphs based on their textual similarities with the input question and then injecting them as the prompt of language models. This approach improves the performance of language models on knowledge graph question answering tasks (Sen et al., 2022) by up to 48% on average, compared to baselines without knowledge graphs.
### _Domain Factuality Enhanced LLMs_
Domain Knowledge Deficit is not only an important reason for limiting the application of LLM in specific fields, but also a subject of great concern to both academia and industry. In this subsection, we discuss how those Domain-Specific LLMs enhance their domain factuality.
Table IX lists the domain-factuality enhanced LLMs. Here, we include several domains, including healthcare/medicine (H), finance (F), law/legal (L), geoscience/environment (G), education (E), food testing (FT), and home renovation (HR).
Based on the actual scenarios of Domain-Specific LLMs and our previous categorization of enhancement methods, we have summarized several commonly used enhancement techniques for Domain-Specific LLMs:
_(1) Continual Pretraining:_ A method that involves continuously updating and fine-tuning a pre-trained language model using domain-specific data. This process ensures that the model stays up-to-date and relevant within a specific domain or field. It starts with an initial pre-trained model, often a general-purpose language model, and then fine-tunes it using domain-specific text or data. As new information becomes available, the model can be further fine-tuned to adapt to the evolving knowledge landscape. Continual pretraining is a powerful approach for maintaining the accuracy and relevance of AI models in rapidly changing domains, such as technology or medicine (Yang et al., 2023; Zhang et al., 2023).
_(2) Continual SFT:_ Another strategy for enhancing the factuality of AI models. In this approach, the model is fine-tuned using labeled or annotated data specific to the domain of interest. This fine-tuning process allows the model to learn and adapt to the nuances and specifics of the domain, improving its ability to provide accurate and contextually relevant information. It can be particularly useful in applications where access to domain-specific labeled data is available over time, such as in the case of legal databases, medical records, or financial reports (Bao et al., 2023; Li et al., 2023).
_(3) Train From Scratch:_ It involves starting the learning process with minimal prior knowledge or pretraining. This approach can be likened to teaching a machine learning model with a blank slate. While it may not have the advantage of leveraging pre-existing knowledge, training from scratch can be advantageous when dealing with completely new domains or tasks where there is limited relevant data available. It allows the model to build its understanding from the ground up, although it may require substantial computational resources and time (Ross et al., 2022; Venigalla et al., 2022).
_(4) External knowledge:_ involves augmenting a language model's internal knowledge with information from external sources. This method allows the model to access databases, websites, or other structured data repositories to verify facts or gather additional information when responding to user queries. By integrating external knowledge, the model can enhance its fact-checking capabilities and provide more accurate and contextually relevant answers, especially when dealing with dynamic or rapidly changing information. Below, we introduce these methods (Fan et al., 2023; Wang et al., 2023).
For each Domain-specific LLM, we list its respective enhancement methods, which are presented in Table IX.
#### 5.3.1 Healthcare domain-enhanced LLMs
These LLMs have emerged as powerful tools in the medical field, offering a diverse range of capabilities. These models, such as CohortGPT(Guan et al., 2023), ChatDoctor(Li et al., 2023), DeID-GPTLiu et al., 2023, BioMedLMVenigalla et al., 2022, DoctorGLMXiong et al., 2023, MedChatZHTan et al., 2023, BioGPT(Luo et al., 2022), GeneGPT(Jin et al., 2023), Almanac(Zakka et al., 2023), and MolXPT(Liu et al., 2023), harness the potential of LLMs to revolutionize healthcare. They are equipped with features like classifying unstructured medical text into disease labels, improving performance with knowledge graphs and sample selection strategies, fine-tuning on large datasets of patient-doctor dialogues, enabling automatic medical text de-identification, excelling in medical question-answering tasks, handling traditional Chinese medical question-answering, and outperforming in various biomedical NLP tasks. Some models interact with web APIs for genomics questions, while others specialize in clinical guidelines and treatment recommendations. These LLMs not only demonstrate state-of-the-art performance on healthcare domain but also emphasize
the importance of domain-specific training and evaluation, showcasing their potential in transforming healthcare and clinical decision-making.
Zhang et al. (2023a) present HuatuoGPT, a medical language model that uses data from ChatGPT and doctors, resulting in state-of-the-art performance in medical consultations. It is based on Baichuan-7B and Ziya-LLaMA-13B-Pretrain-v1, continually pre-trained on both distilled data (from ChatGPT) and real-world data (from Doctors).
Likewise, Yang et al. (2023c) introduce Zhongjing, the first Chinese medical language model based on LLaMA, which utilizes a comprehensive training pipeline and a multi-turn medical dialogue dataset. Specifically, it is enhanced with a multi-turn medical dialogue dataset called CMtMedQA, consisting of 70,000 authentic doctor-patient dialogues, enabling complex dialogue and proactive inquiry. The backbone model used is Ziya-LLaMA-13B-v1, and the evaluation dataset is CMtMedQA and huatuo-26M (Li et al., 2023b).
Wang et al. (2023f) unveil a system, LLM-AMT, that improves large-scale language models like GPT-3.5-Turbo and LLaMA-2-13B with medical textbooks, notably enhancing open-domain medical question-answering tasks. At the same time, the external knowledge source is a Hybrid Textbook Retriever comprising 51 textbooks from the MedQA dataset and Wikipedia.
Bao et al. (2023) present DISC-MedLLM, a solution that uses LLMs to provide accurate medical responses in conversational healthcare services, utilizing strategies like medical knowledge graphs, real-world dialogue reconstruction, and human-guided preference rephrasing to create high-quality SFT datasets, applied on Baichuan-13B-Base. The paper uses various datasets for fine-tuning, including Re-constructed AI Doctor-Patient Dialogue, MedDialog, cMedQA, Knowledge Graph QA pairs (CMeKG), Behavioral Preference Dataset (Manual selection), MedMCQA, MOS6, and Alpaca-GPT.
Similarly, Guan et al. (2023) introduce CohortGPT, a model that uses LLMs for participant recruitment in clinical research by classifying complex medical text into disease labels. CohortGPT enhances ChatGPT performance with the use of a knowledge graph as auxiliary information and a CoT sample selection strategy. The tasks involve IU-RR (Preparing a collection of radiology examinations for distribution and retrieval) and MIMIC-CXR (Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports). Li et al. (2023f) introduce ChatDoctor, a refined LLaMA-based. It is fine-tuned using a dataset of 100,000 patient-doctor dialogues, and equipped with a self-directed information retrieval mechanism.
Liu et al. (2023f) introduce DeID-GPT, a framework leveraging GPT-4 for automatic medical text de-identification. Additionally, the paper mentions the use of HIPAA identifiers as an extra knowledge source to enhance the de-identification process.
Venigalla et al. (2022) present BioMedLM, a domain-specific LLM trained on PubMed data, for medical QA tasks. Xiong et al. (2023b) introduce DoctorGLM, a Chinese-focused language model fine-tuned for healthcare-specific tasks. Both Tan et al. (2023a) and Luo et al. (2022) introduce dialogue and generative Transformer language models for traditional Chinese medical question-answering and biomedical NLP tasks, respectively.
Jin et al. (2023) present GeneGPT, a method for teaching LLMs to answer genomics questions using NCBI Web APIs. Zakka et al. (2023) introduce Almanac, a LLM with retrieval capabilities for medical guideline recommendations. Lastly, Liu et al. (2023e) introduce MolXPT, a unified language model adept in molecular property prediction and molecular generation.
#### 5.3.2 Legal domain enhanced LLMs
These LLMs, such as LawGPT (Nguyen, 2023), and ChatLaw (Cui et al., 2023b), have been fine-tuned to provide comprehensive legal assistance, from answering intricate legal queries and generating legal documents to offering expert legal advice. Leveraging extensive corpora of legal text, these models ensure context-aware and accurate responses. Moreover, their continual development involves injecting domain knowledge, designing supervised fine-tuning tasks, and incorporating retrieval modules to address issues like hallucination and ensure high-quality legal assistance. These innovations not only pave the way for more accessible and reliable legal services but also open up new avenues for research and exploration within the legal domain.
Nguyen (2023) introduce LawGPT 1.0, a fine-tuned GPT-3 language model for the legal domain to provide conversational legal assistance, including answering legal questions, generating legal documents, and offering legal advices. The paper mentions the use of a large corpus of legal text for fine-tuning the model to adapt it to the legal domain.
Savelka et al. (2023) evaluate the performance of GPT-4 in generating explanations of legal terms in legislation, comparing a baseline approach to an augmented approach that uses a legal information retrieval module to provide context from case law, revealing improvements in quality and addressing issues of factual accuracy and hallucination.
Huang et al. (2023a) address the challenge of enhancing LLMs like LLaMA for domain-specific tasks, particularly in the legal domain, by injecting domain knowledge during continual training, designing appropriate supervised fine-tune tasks and incorporating a retrieval module to improve factuality during text generation. They release their data and model for further research in Chinese legal tasks.
Cui et al. (2023a) introduce ChatLaw, an open-source legal LLM, designed for the Chinese legal domain. The paper introduces a method to improve model factuality during data screening, and a self-attention method for error handling. The paper uses various datasets for fine-tuning ChatLaw, including a collection of original legal data, data constructed based on legal regulations and judicial interpretations, and crawled real legal consultation data. The primary model used in this paper is Ziya-LLaMA-13B, which serves as the backbone for ChatLaw, tailored for the Chinese legal domain and optimized to handle legal questions and tasks. Additionally, the paper uses of a vector database retrieval method, keyword retrieval, and a self-attention method to enhance the model's performance in the legal domain.
#### 5.3.3 Finance Domain-enhanced LLMs
These LLMs combine sophisticated language models designed specifically for commercial and financial tasks to deliver robust processing capabilities. They focus on creating tailor-made solutions optimized for both financial text analysis and e-commerce settings, trained on datasets containing myriad business-related tasks and copious financial tokens. They are designed to perform a plethora of functions, ranging from understanding and generating instructions for various E-commerce assignments to identifying sentiment, recognizing named entities, and answering questions in financial contexts. The models are further fine-tuned for zero-shot generalization on diverse tasks and benchmarks.
Li et al. (2023) introduce EcomGPT, a language model tailored for E-commerce scenarios, trained on the newly created EcomInstruct dataset, which consists of 2.5 million instruction data spanning various E-commerce tasks and data types. The dataset covers product information, user reviews, and more. It defines atomic tasks and Chain-of-Task tasks to enable comprehensive training for E-commerce scenarios. The backbone model used is BLOOMZ, which is fine-tuned with the EcomInstruct dataset. The evaluation dataset includes 12 tasks, encompassing classification, generation, extraction, and other E-commerce-related tasks.
Wu et al. (2023) introduce BloombergGPT, a specialized 50 billion-parameter language model for the financial domain, trained on a massive 363 billion token dataset, which combines Bloomberg's extensive financial data sources with general-purpose datasets. The dataset used in this paper is an extensive 363 billion token dataset, which includes a significant portion of financial data from Bloomberg's sources (51.27% of the training data). BloombergGPT is based on a decoder-only causal language model architecture known as BLOOM. The evaluation includes various financial NLP tasks such as sentiment analysis, named entity recognition, binary classification, and question answering.
#### 5.3.4 Other Domain-Enhanced LLMs
**Geoscience and Environment domain-enhanced LLMs:** are expertly designed, leveraging vast corpora to provide precise and robust results pertaining to geoscience and renewable energy. K2, a trailblazer in geoscience LLM, was trained on a massive geoscience text corpus and further refined using the GeoSignal dataset. Meanwhile, the HouYi model, another pioneering LLM focusing on renewable energy, harnessed the Renewable Energy Academic Paper dataset, containing over a million academic literature sources. These LLMs are fine-tuned to deliver adept performance in their respective fields, showing substantial capabilities in aligning their responses with user queries and renewable energy academic literature.
Deng et al. (2023) introduce K2, the first LLM designed specifically for geoscience, which is a LLaMA-7B continuously trained on a 5.5 billion token geoscience text corpus and fine-tuned using the GeoSignal dataset. The paper also presents resources like GeoSignal, a geoscience instruction tuning dataset, and GeoBench, the first geoscience benchmark for evaluating LLMs in the context of geoscience.
Bai et al. (2023) present the development of the HouYi model, the first LLM specifically designed for renewable energy, utilizing the newly created Renewable Energy Academic Paper (REAP) dataset, which contains over 1.1 million academic literature sources related to renewable energy, and the HouYi model is fine-tuned based on general LLMs such as ChatGLM-6B.
**Education domain-enhanced LLMs:** are used for assisting education scenarios. An example is GrammarGPT (Fan et al., 2023), which provides an innovative approach to language learning, particularly focusing on error correction in Chinese grammar. It is an open-source LLM designed for native Chinese grammatical error correction, which leverages a hybrid dataset of ChatGPT-generated and human-annotated data, along with heuristic methods to guide the model in generating ungrammatical sentences. The backbone model used is phoenix-inst-chat-7b.
**Food domain-enhanced LLMs:** are language models specifically designed to meet the distinct requirements of food testing protocols. For example, Qi et al. (2023) introduce FoodGPT, a LLM for food testing that incorporates structured knowledge and scanned documents using an incremental pre-training approach, with a focus on addressing machine hallucination by constructing a knowledge graph as an external knowledge base, utilizing the Chinese-LLaMA2-13B as the backbone model and collecting food-related data for training.
**Home renovation domain-enhanced LLMs:** are domain-specific language models tailored for home renovation tasks. For example, Wen et al. (2023) introduce ChatH-ome, which uses a dual-pronged methodology involving domain-adaptive pretraining and instruction-tuning on an extensive dataset comprising professional articles, standard documents, and web content relevant to home renovation. The backbone model is Baichuan-13B, and the evaluation datasets include C-Eval, CMMLU, and the newly created "EvalHome" domain dataset, while the fine-tuning data sources encompass National Standards, Domain Books, Domain Websites, and WuDaoCorpora.
## 6 Conclusion
Throughout this survey, we have systematically explored the intricate landscape of factuality issues within large language models (LLMs). We began by defining the concept of factuality (Sec 2.2) and proceeded to discuss its broader implications (Sec 2.3). Our journey took us through the multifaceted realm of factuality evaluation, encompassing benchmarks (Sec 3.2), metrics (Sec 3.1), specific evaluation studies (Sec 3.3), and domain-specific evaluations (Sec 3.4). We then delved deeper, probing the intrinsic mechanisms that underpin factuality in LLMs (Sec 4). Our exploration culminated in the discussion of enhancement techniques, both for pure LLMs (Sec 5.1) and retrieval-augmented LLMs (Sec 5.2), with a special focus on domain-specific LLM enhancements (Sec 5.3).
Despite the advancements detailed in this survey, several challenges loom large. The evaluation of factuality remains an intricate puzzle, complicated by the inherent variability and nuances of natural languages. The core processes governing how LLMs store, update, and produce facts are yet not fully revealed. And while certain techniques, like continual training and retrieval, show promise, they are
not without limitations. Looking ahead, the quest for fully factual LLMs presents both challenges and opportunities. Future research might delve deeper into understanding the neural architectures of LLMs, develop more robust evaluation metrics, and innovate on enhancement techniques. As LLMs become increasingly integrated into our digital ecosystem, ensuring their factual reliability will remain paramount, with implications that impact across the AI community and beyond.
|
2305.09962 | Mesoscopic fluctuations in entanglement dynamics | Understanding fluctuation phenomena plays a dominant role in the development
of many-body physics. The time evolution of entanglement is essential to a
broad range of subjects in many-body physics, ranging from exotic quantum
matter to quantum thermalization. Stemming from various dynamical processes of
information, fluctuations in entanglement evolution differ conceptually from
out-of-equilibrium fluctuations of traditional physical quantities. Their
studies remain elusive. Here we uncover an emergent random structure in the
evolution of the many-body wavefunction in two classes of integrable -- either
interacting or noninteracting -- lattice models. It gives rise to
out-of-equilibrium entanglement fluctuations which fall into the paradigm of
mesoscopic fluctuations of wave interference origin. Specifically, the
entanglement entropy variance obeys a universal scaling law, in each class, and
the full distribution displays a sub-Gaussian upper and a sub-Gamma lower tail.
These statistics are independent of both the system's microscopic details and
the choice of entanglement probes, and broaden the class of mesoscopic
universalities. They have practical implications for controlling entanglement
in mesoscopic devices. | Lih-King Lim, Cunzhong Lou, Chushun Tian | 2023-05-17T05:43:40Z | http://arxiv.org/abs/2305.09962v2 | # Mesoscopic fluctuations in entanglement dynamics
###### Abstract
Understanding fluctuation phenomena plays a dominant role in the development of many-body physics. The time evolution of entanglement is essential to a broad range of subjects in many-body physics, ranging from exotic quantum matter to quantum thermalization. Stemming from various dynamical processes of information, fluctuations in entanglement evolution differ conceptually from out-of-equilibrium fluctuations of traditional physical quantities. Their studies remain elusive. Here we uncover an emergent random structure in the evolution of the wavefunction in a class of integrable models. It gives rise to out-of-equilibrium entanglement fluctuations which, strikingly, fall into the paradigm of mesoscopic fluctuations of wave interference origin. Specifically, the entanglement entropy variance obeys a universal scaling law, and the distribution displays a sub-Gaussian upper and a sub-Gamma lower tail. These statistics are independent of both the system's microscopic details and the choice of entanglement probes, and broaden the class of mesoscopic universalities. They have practical implications for controlling entanglement in mesoscopic devices.
pacs: 03.67.-a, 03.67.-a, 03.67.-a When an isolated many-body system evolves, entanglement tends to spread. Owing to the diversity of the fate of the wavefunction evolution (e.g., localized or delocalized, thermalized or not thermalized), a wealth of entanglement patterns develop [1; 2; 3; 4; 5; 6; 7]. These patterns are the building blocks of the physics of recently discovered exotic phases of matter [4; 7; 8], and are central to the foundations of statistical mechanics [6; 7]. Understanding the long-time evolution of entanglement, and especially its universal aspects, is indispensable in the study of pattern formation.
To address this issue, one often investigates mesoscopic rather than macroscopic systems. Recent advancements in quantum simulation platforms, ranging from cold atoms, trapped ions to superconducting qubits, have made possible the measurement of information-theoretic observables and the experimental study of entanglement evolution [6; 7; 9]. In these investigations, quantum coherence is maintained across the entire sample, as required also for mesoscopic electronic and photonic devices [10; 11]. At the same time, the relationship between the evolution of entanglement and quantum thermalization in isolated systems is currently under investigations [6; 7]. Since various scenarios for the latter [12; 13; 14; 15; 16; 17] are built upon a basis of wavefunctions with finite spatial extent, emphasis naturally has to be placed on the dynamics of entanglement on the mesoscopic scale.
A prominent feature of mesoscopic systems is the occurrence of unique fluctuation phenomena, when randomness due to quenched disorders [10; 11] or chaos [18; 19] is present. Notably, the conductance - a basic probe of mesoscopic transport - fluctuations have a universal variance, independent of sample size and the strength of randomness [20; 21]. Mesoscopic fluctuations are of wave interference origin and conceptually different from thermodynamic fluctuations. They are related to various entanglement properties [22; 23]. Understanding their universalities is of fundamental importance to mesoscopic physics. Here we uncover a 'random' structure emergent from the dynamical phases in the wavefunction evolution. Treating the information-theoretic observable as an unconventional'mesoscopic' probe, we explore out-of-equilibrium fluctuation phenomena in entanglement evolution, whose origin are similar to that of mesoscopic fluctuations in genuine disordered samples.
In fact, there is a rapid increase of interests in entanglement fluctuations. In particular, understanding out-of-equilibrium entanglement fluctuation properties is a key to the statistical physics of isolated systems [24; 25]. So far studies have focused on the kinematic case [26; 27; 28; 29], where fluctuations arise from random sampling of some pure state ensemble, initiated by Page [26]. Since kinematic theories cannot describe wave effects and dynamical properties of the Schrodinger evolution [30], out-of-equilibrium entanglement fluctuations are beyond the framework of those theories.
Here we develop an analytical theory for fluctuations in long-time dynamics of entanglement in a class of integrable lattice systems, including the Rice-Mele model and the transverse field Ising chain. We find that the wavefunction evolution endows the correlation matrix a random structure, even though the system is neither chaotic nor disordered. Specifically, the time dependence enters through \(N\approx\frac{L}{2}\) dynamical phases \((\omega_{1}t,\ldots,\omega_{N}t)\equiv\mathbf{\omega}t\), with \(L\) being the number of unit cells, so that the instantaneous correlation matrix \(C(t)\) is given by some \(N\)-variable (matrix-valued) function \(\tilde{C}(\mathbf{\varphi})\) for \(\mathbf{\varphi}=\mathbf{\omega}t\); due to the incommensurability of \(\mathbf{\omega}\) an ensemble of random matrices \(\tilde{C}(\mathbf{\varphi})\) then results. Each \(\tilde{C}(\mathbf{\varphi})\) is determined by \(\mathbf{\varphi}\), the virtual disorder realization uniformly distributed over a \(N\)-dimensional torus (Fig. 1). It describes a virtual disordered sample, and determines entanglement properties
of that sample in the same fashion as \(C(t)\) determines system's instantaneous entanglement properties. Consequently, when system's wavefunction evolves, the trajectory \(\mathbf{\varphi}=\mathbf{\omega}t\) sweeps out the entire disorder ensemble, trading the temporal fluctuations of various information-theoretic observables, such as the entanglement entropy and the Renyi entropy, to mesoscopic _sample-to-sample_ fluctuations [20; 21]. In particular, we find that these out-of-equilibrium entanglement fluctuations arise from wave interference, similar to mesoscopic fluctuations. Interestingly, this kind of trajectories play important roles in Chirikov's studies of the relations between mesoscopic physics and quantum chaos [31].
However, there are important differences between ordinary quenched disorders and the randomness emergent from entanglement evolution. As shown below, the latter has a strength \(\sim 1/\sqrt{L}\) and diminishes for \(L\to\infty\). This situation renders canonical mesoscopic theories based on diagrammatical [10; 11] and field-theoretical [32] methods inapplicable, since they require the disorder strength to be independent of the sample size. In addition, because \(C(t)\) is a (block-)Toeplitz matrix and very little [33] is known about the spectral statistics of random Toeplitz matrices, mesoscopic theories based on random matrix methods [34] are inapplicable either. Here we develop a different approach based on the modern nonasymptotic probability theory [35], that relies merely on the statistical independence of the components of \(\mathbf{\varphi}\) and applies to any \(L\). A related approach has recently been used to find novel universalities in mesoscopic transport [36].
Uncovering the random structure, we show that fluctuations in entanglement evolution exhibit intriguing universal behaviors, independent of microscopic details. First, when the variance \(\mathrm{Var}(S)\) of the entanglement entropy \(S\) as well as \(L\) and \(L_{A}\) (the subsystem size) are rescaled by appropriate microscopic quantities, the universal scaling law:
\[\mathrm{Var}(S)=1/L+L_{A}^{3}/L^{2} \tag{1}\]
follows. Second, the statistics of \(S\) is universal and the distribution is asymmetric with respect to its mean \(\langle S\rangle\), displaying a sub-Gaussian upper and a sub-Gamma lower tail. In particular, the probability for large deviation \(\epsilon\) is
\[\mathbf{P}(|S-\langle S\rangle|\geq\epsilon)=\left\{\begin{array}{ll}e^{- \frac{\epsilon^{2}}{2\hbar+}},&\mathrm{for}\,S-\langle S\rangle>0\\ e^{-\frac{\epsilon^{2}}{2\langle\hbar-\epsilon+\epsilon\rangle}},&\mathrm{ for}\,S-\langle S\rangle<0\end{array}\right. \tag{2}\]
where \(\mathfrak{b}_{\pm}\propto\mathrm{Var}(S)\) and \(\mathfrak{c}>0\) depends on the ratio \(L_{A}/L\). Third, Eqs. (1) and (2) hold for other probes, e.g., the
Figure 1: **Emergence of mesoscopic fluctuations in entanglement evolution.****a.** We simulate entanglement entropy evolution of Rice-Mele model up to \(t=10^{4}\) in unit of \(\hbar/J\), \(J\) the amplitude of hopping between two nearest sites (inset, \(L_{A}=25\), \(L=124\)). Its fluctuation statistics (histograms) is shown to be equivalent to the statistics of entanglement entropy fluctuations in an ensemble of virtual disordered samples (dashed line, for \(5\times 10^{5}\) disorder realizations \(\mathbf{\varphi}\)). **b.** These long-time fluctuations differ from growth and damped oscillations appearing in early entanglement evolution (inset). **c.** Simulations show that the nearest-neighbor spacing distribution characterizing spectral fluctuations of the correlation matrix \(C(t)\) (inset) is indistinguishable from that for an ensemble of truly random matrices \(\tilde{C}(\mathbf{\varphi})\), and is semi-Poissonian. **d.** Physically, as system’s wavefunction evolves, the dynamical phases \(\mathbf{\varphi}=(\omega_{1}t,\dots,\omega_{N}t)\) sweeps out an ensemble of ‘mesoscopic samples’ \(\tilde{C}(\mathbf{\varphi})\).
Renyi entropy. These universal fluctuation behaviors are irrespective of the location of \(\langle S\rangle\) in Page's curve [26]. By Eq. (1) at fixed \(L_{A}\) the variance vanishes in the limit \(L\to\infty\) (cf. Fig. 2a), implying the full suppression of temporal fluctuations beyond some critical time, in agreement with a benchmark result of entanglement evolution [1]. In contrast, at fixed \(L\), as \(L_{A}\) increases the variance grows as \(\sim L_{A}^{3}\) eventually (cf. Fig. 2b), which is much faster than \(\sim L_{A}\) displayed by typical extensive quantities. We shall see below this enhanced growth results from quantum interference.
Having summarized the main results, we outline the derivations and present numerical verifications. A complete description is given in Supplemental Information (SI). We focus on the Rice-Mele model subjected to the periodic boundary condition. Generalizations to other models mentioned above are straightforward. Let the system be at the half-filling ground state \(\Psi(0)\). At \(t=0\) we suddenly change parameters of the Hamiltonian. So the pre-quench state \(\Psi(0)\) evolves unitarily under the new Hamiltonian to state \(\Psi(t)\) at later time \(t\). Because \(\Psi(0)\) is a Gaussian state and the system is fermionic, the instantaneous entanglement entropy can be expressed as
\[S(t)=\int d\lambda\,e(\lambda)\operatorname{Tr}_{A}\delta(\lambda-C(t)) \tag{3}\]
using the method in [37; 38; 39; 40]. Here \(e(\lambda)=-\lambda\ln\lambda-(1-\lambda)\ln(1-\lambda)\) is the binary entropy function. \(\operatorname{Tr}_{A}\delta(...)\) gives the spectral density of the correlation matrix \(C(t)\) defined in the unit cell and sublattice sector, labelled by \(i\) and \(\sigma\), respectively; its element \(C_{i\sigma,i^{\prime}\sigma^{\prime}}(t)=\langle\Psi(t)[c_{i\sigma}^{\dagger} c_{i^{\prime}\sigma^{\prime}}|\Psi(t)\rangle\), with \(c_{i\sigma}\) (\(c_{i\sigma}^{\dagger}\)) being the fermion annihilation (creation) operator. The trace is restricted to the subsystem A. When replacing \(e(\lambda)\) by an appropriate function of \(\lambda\), we obtain other entanglement probes such as the Renyi entropy. This kind of expressions indicate that the evolving spectral density underlies out-of-equilibrium behaviors of different entanglement probes. They are analogous to the expressions for probes of mesoscopic transport. Indeed, if we replace \(C(t)\) by the product of transmission matrix and its hermitian conjugate, we transform Eq. (3) to the Landauer formula for conductance with \(e(\lambda)\) changed to \(\lambda\), and to formulates for other transport probes with \(e(\lambda)\) changed to appropriate functions of \(\lambda\)[34].
Because the eigenenergy spectrum displays a reflection and a particle-hole symmetry, when particle eigenenergies \(\frac{\omega_{m}}{2}\) (Planck's constant set to unity) at Bloch momenta \(k_{m}=\frac{2\pi(m-1)}{L}\), \(m=1,...,N=[\frac{L}{2}]+1\), are given, all other particle and all hole eigenenergies are known. Due to the translational invariance of the system, the time parameter enters the correlation matrix through the dynamical phases \(\mathbf{\omega}t\) associated with \(\mathbf{\omega}\equiv(\omega_{1},...,\omega_{N})\). Specifically, we can define a function \(\tilde{C}(\mathbf{\varphi})=C_{0}+C_{1}(\mathbf{\varphi})\) on the \(N\)-dimensional torus. Leaving its detailed form for SI, here we only expose its key properties. First, \(C_{0,1}\) are block-Toeplitz matrices, with elements \((C_{0,1})_{ii^{\prime}}\) being \(2\times 2\) blocks defined in the sublattice sector and depending on the unit cell indexes \(i,i^{\prime}\) via \((i-i^{\prime})\), i.e., \((C_{0,1})_{ii^{\prime}}\equiv(C_{0,1})_{i-i^{\prime}}\). Second, \(C_{0}\) is \(\mathbf{\varphi}\)-independent, whereas \(C_{1}\) is not and its elements take the form of
\[(C_{1})_{l}\equiv\frac{1}{L}\sum_{m=1}^{N}\big{(}R_{l}(k_{m})\cos\varphi_{m}+ I_{l}(k_{m})\sin\varphi_{m}\big{)}, \tag{4}\]
where the elements of blocks, \(R\)'s, \(I\)'s, are complex and depend on \(k_{m}\) (as well as post-quench Hamiltonian parameters). Then \(C(t)\) is given by \(\tilde{C}(\mathbf{\varphi})\) at \(\mathbf{\varphi}=\mathbf{\omega}t\). Similarly, with the introduction of \(S(\mathbf{\varphi})\equiv\int d\lambda\,e(\lambda)\operatorname{Tr}_{A}\delta( \lambda-\tilde{C}(\mathbf{\varphi}))\) in parallel to Eq. (3) (for notational simplicity we use the same symbol \(S\) despite differences in the arguments.), \(S(t)\) is given by \(S(\mathbf{\varphi})\) at \(\mathbf{\varphi}=\mathbf{\omega}t\). This implies that, like \(C(t)\), an evolving entanglement probe depends on \(t\) through the dynamical phases \(\mathbf{\omega}t\). Such dependence has an immediate consequence. That is, because in general the components of \(\mathbf{\omega}\) are incommensurate, after initial growth [1] and damped oscillations [41] due to the traversal of quasiparticle pairs or the incomplete revival of wavefunction (Fig. 1b), an entanglement probe displays quasiperiodic oscillations (Fig. 1a, inset), which are reproducible under the same initial conditions.
To understand fluctuation properties of quasiperiodic oscillations we note that the trajectory \(\mathbf{\varphi}=\mathbf{\omega}t\) generates an ensemble of random matrices \(\tilde{C}(\mathbf{\varphi})\), each of which is determined by the 'disorder realization', \(\mathbf{\varphi}\), and thus is
Figure 2: **Entanglement entropy distribution.** We perform statistical analysis of the temporal fluctuations in the simulated entanglement entropy evolution. **a.** Variation of the distribution with increasing \(L\) at fixed \(L_{A}\). **b.** Same as a., but with increasing \(L_{A}\) at fixed \(L\). **c.** The large deviation probability \(\mathbf{P}(|S-\langle S\rangle|\geq\epsilon)\), with upper and lower tail respectively, is well fitted by Eq. (10) (dashed lines), implying that the upper (squares) tail distribution is sub-Gaussian and the lower (circles) is sub-Gamma. The ratio \(L_{A}/L\) is \(0.1\) (yellow), \(0.2\) (green) and \(0.5\) (blue).
separated into two parts: nonrandom \(C_{0}\) and random \(C_{1}(\mathbf{\varphi})\). The probability measure of this ensemble is induced by the uniform distribution of \(\mathbf{\varphi}\) via Eq. (4). This ensemble has some prominent features (see SI for detailed discussions): First, since \(\varphi_{m}\)'s are statistically independent, Eq. (4) implies that each element randomly fluctuates around its mean, with a magnitude \(\sim 1/\sqrt{L}\). Thus for fixed ratio \(L_{A}/L\) the randomness diminishes in the limit of large matrix size. Second, the elements of two distinct blocks are statistically independent. Third, the average elements decay rapidly with their distance to the main diagonal. These features lead to a semi-Poissonian nearest-neighbor spacing distribution [42],
\[P_{0}(s)=4se^{-2s}, \tag{5}\]
as shown in simulations (Fig. 1c). Strikingly, despite that the Rice-Mele model is integrable and has no extrinsic randomness, the evolving correlation matrix can exhibit level repulsion: \(P_{0}(s\to 0)\!\sim\!s\), which is a distinctive property of quantum chaos [18; 19]. We can demonstrate that the statistical equivalence of the ensemble of \(\tilde{C}(\mathbf{\varphi})\) and the time series \(C(t)\) (Fig. 1c) hinges only on the incommensurability of \(\mathbf{\omega}\) (see SI when this condition is not met). Furthermore, much like that a transmission matrix determines transport properties of a mesoscopic sample, a matrix \(\tilde{C}(\mathbf{\varphi})\) determines \(S(\mathbf{\varphi})\) and other entanglement probes of a virtual mesoscopic sample at the disorder realization \(\mathbf{\varphi}\); consequently, the statistical equivalence between \(C(t)\) and \(\tilde{C}(\mathbf{\varphi})\) leads to the statistical equivalence between out-of-equilibrium and sample-to-sample fluctuations of various entanglement probes, in agreement with simulation results (Fig. 1a).
Exploiting this equivalence, we proceed to study the statistics of entanglement entropy fluctuations. To overcome the difficulties discussed in the introduction with the unusual disorder structure, below we combine the continuity properties of the \(N\)-variable function \(S(\mathbf{\varphi})\) with the nonasymptotic probabilistic method, so-called concentration inequality [35]. This allows us to work out a statistical theory for mesoscopic sample-to-sample fluctuations of \(S(\mathbf{\varphi})\) at total system size \(L\), which is _finite_ so that the disorder strength does not vanish.
In order to study the distribution of \(S(\mathbf{\varphi})\), we introduce the logarithmic moment-generating function \(G(u)\equiv\ln\langle e^{u(S-\langle S\rangle)}\rangle\), with \(u\) being real and \(\langle\cdot\rangle\) denoting the average over \(\mathbf{\varphi}\). Consider the downward fluctuations (i.e., \(S-\langle S\rangle<0\)) first. Because the \(N\) components of \(\mathbf{\varphi}\) are statistically independent, we can apply the so-called modified logarithmic Sobolev inequality [35] to obtain
\[\frac{d}{du}\frac{G}{u}\leq\frac{1}{u^{2}}\frac{\left\langle\left[\sum_{m=1}^ {N}e^{u(S-\langle S\rangle)}\,\phi(-u(S-S_{m}^{-}))\right]\right\rangle}{ \left\langle e^{u(S-\langle S\rangle)}\right\rangle} \tag{6}\]
with \(\phi(x)=e^{x}-x-1\) and \(u\leq 0\). Here \(S_{m}^{-}\) is the maximal values of \(S(\mathbf{\varphi})\), when \(\varphi_{m}\) varies and other arguments are fixed. Observing that the leading \(u\)-expansion of the right-hand side is \(\frac{b_{-}}{2}\), with
\[b_{-}\equiv\sum_{m=1}^{N}\left\langle(S-S_{m}^{-})^{2}\right\rangle, \tag{7}\]
we separate the right-hand side of the inequality into two terms, \(\frac{b_{-}}{2}\) and the remainder. Then, we show that the latter is bounded by \(c_{-}\frac{dG}{du}\) with \(c_{-}\) being a negative constant. So we cast the inequality (6) to
\[\frac{d}{du}\frac{(1+|c_{-}|u)G}{u}\leq\frac{b_{-}}{2}, \tag{8}\]
which can be readily integrated to give \(G\!\!\leq\!\frac{b_{-}}{2}\frac{u^{2}}{1+|c_{-}|u}\). Such bound holds also for Gamma random variables. It generalizes the tail behaviors of the Gamma distribution, giving the so-called sub-Gamma tail [35]. Specifically, following standard procedures, we can use Markov's inequality to turn this bound for \(G\) into a bound for the probability of downward fluctuations. The result is
\[\mathbf{P}(S<\langle S\rangle-\epsilon)\leq e^{-\epsilon^{2}/2(b_{-}+|c_{-}| \epsilon)} \tag{9}\]
for any \(\epsilon\!\!>\!0\). This gives a sub-Gamma lower tail, which crosses over from a Gaussian to an exponential form at \(\epsilon\sim b_{-}/|c_{-}|\).
Similarly, we can study the upward fluctuations (i.e., \(S-\langle S\rangle>0\)). We replace \(S_{m}^{-}\) in the inequality (6) by \(S_{m}^{+}\), which is the minimal \(S(\mathbf{\varphi})\) when \(\varphi_{m}\) varies and other arguments are fixed, and consider \(u>0\). Upon separating \(\frac{b_{+}}{2}\), with \(b_{+}\equiv\sum_{m=1}^{N}\langle(S-S_{m}^{+})^{2}\rangle\), from the right-hand side of the inequality, the remainder is negative. As a result, \(c_{-}\) is replaced by \(0\) and \(G\leq\frac{b_{+}u^{2}}{2}\), giving
\[\mathbf{P}(S>\langle S\rangle+\epsilon)\leq e^{-\epsilon^{2}/(2b_{+})} \tag{10}\]
for any \(\epsilon>0\), which is a sub-Gaussian upper tail.
The inequalities (9) and (10) show that \(S(\mathbf{\varphi})\) concentrates around \(\langle S\rangle\) albeit with different bounds for upward and downward fluctuations. Simulations further show that the exact deviation probability for large downward (upward) fluctuations agrees with the form given by the right-hand side of the corresponding concentration inequality, with \(b_{\pm}\) and \(c_{-}\) as fitting parameters (Fig. 2c). Therefore, for large deviation, the upper (lower) tail distribution has the universal form given by the first (second) line in Eq. (2), and the parameters \(\mathfrak{b}_{\pm}\) and \(\mathfrak{c}\) in Eq. (2) are proportional to \(b_{\pm}\) and \(c_{-}\), respectively. So for large \(\epsilon\) the upper tail is always Gaussian \(e^{-\epsilon^{2}/(2b_{+})}\) while the lower is always exponential \(e^{-\epsilon/(2\mathfrak{c})}\), different from the distribution tails of thermodynamic fluctuations which are symmetric and Gaussian.
With Eq. (2) we find that the variance \(\mathrm{Var}(S)\) is given by \(b_{\pm}\). To calculate the latter, note that by the mean value theorem there exists \(\bar{\varphi}_{m}\) between \(\varphi_{m}\) and \(\varphi_{m}^{\pm}\) (at which \(S_{\pm}^{\pm}\) is reached), so that \((S-S_{m}^{\pm})^{2}\) is given by \((\varphi_{m}-\varphi_{m}^{\mp})^{2}(\partial_{\bar{\varphi}_{m}}S)^{2}\). Then, for large \(L\) the Fourier series of \(\partial_{\varphi_{m}}S\) with respect to \(\varphi_{m}\) are truncated at the
second harmonics, giving \((\partial_{\bar{\varphi}_{m}}S)^{2}{\sim}{\int}\frac{d\varphi_{m}}{2\pi}( \partial_{\varphi_{m}}S)^{2}\). Applying these analyses to the definitions of \(b_{\pm}\), we obtain
\[\text{Var}(S)\propto\left<|\partial_{\mathbf{\varphi}}S|^{2}\right>. \tag{11}\]
This relation is confirmed numerically (Fig. 3a), and the proportionality coefficient is found to be \(\approx 1/8\). Equation (11) uncovers a relation between entanglement entropy fluctuations and continuity properties of the \(N\)-variable function \(S(\mathbf{\varphi})\). It resembles the so-called concentration-of-measure phenomenon, a modern perspective of probability theory [43; 44], where fluctuations of an observable are controlled by its _Lipschitz continuity_. This continuity is a key ingredient of universal wave-to-wave fluctuations in mesoscopic transport [36].
By definition of \(S(\mathbf{\varphi})\), we have \(\partial_{\varphi_{m}}S=\text{Tr}_{A}(\ln(\tilde{C}^{-1}-\mathbb{I})\partial _{\varphi_{m}}C_{1})\). Because of \(C_{1}=\mathcal{O}(1/\sqrt{L})\), we expand the logarithm in \(C_{1}\) up to the first order. Taking into account that \(C_{0}\) is short-ranged, we obtain
\[\partial_{\varphi_{m}}S=-\text{Tr}_{A}\left[(H_{0}+(\partial_{C_{0}}H_{0})C_{ 1})\partial_{\varphi_{m}}C_{1}\right], \tag{12}\]
where \(H_{0}=\ln(C_{0}^{-1}-\mathbb{I})\) is the entanglement Hamiltonian in the absence of disorder. Substituting Eq. (12) into Eq. (11), we find that the two terms in Eq. (12) contribute to the variance separately. The contribution by the first term is \(a/L\) and that by the second is \(bL_{A}^{3}/L^{2}\), and the former (latter) is found to be a subsystem's edge (bulk) effect. Here the coefficient \(a\) is proportional to the square of the size of subsystem's edge, and both \(a\) and \(b\) have no dependence on \(L\), \(L_{A}\). Upon rescaling: \(L\), \(L_{A}\) by \(\sqrt{a/b}\) and \(\text{Var}(S)\) by \(\sqrt{ab}\), we obtain the scaling law (1), which is confirmed by simulations (Fig. 3b-d). By Eq. (1), one enters the regime \(\text{Var}(S)=L^{-1}\) for \(L_{A}\ll L^{1/3}\) (b) and the regime \(\text{Var}(S)=L_{A}^{3}/L^{2}\) for \(L_{A}\gg L^{1/3}\) (c).
Let us consider other entanglement probes such as the second-order Renyi entropy \(S_{2}\). As said above, in this case we have an expression similar to Eq. (3), with \(e(\lambda)\) changed (see SI). Repeating the analysis above, we find for \(S_{2}\) the same relation as (11). Furthermore, we can calculate \(\langle|\partial_{\mathbf{\varphi}}S_{2}|^{2}\rangle\) in the same way as \(\langle|\partial_{\mathbf{\varphi}}S|^{2}\rangle\). As a result, we find that \(\text{Var}(S_{2})\) obeys the same scaling law as Eq. (1). These statistics of \(S_{2}\) are confirmed numerically (Fig. 3). In SI we further show that Eqs. (1), (2) and (11) hold for more general probes.
To understand physically the scaling behavior we use the concept of coherent entangled quasiparticle pair [1]. Consider a quasiparticle inside the subsystem A. When pairing with another outside, it contributes to the bipartite entanglement. Due to the Heisen
Figure 3: **Universal scaling behaviors of the variance.** We perform simulations for both Rice-Mele (RM) model and transverse field Ising chain (TFIC) with different sizes and different quench protocols (I-III) to study the variance of two entanglement probes \(O=S\), \(S_{2}\). **a.** For both \(O\) the data confirm the relation (12). **b-c.** They also confirm the limiting scaling behavior described by the first (second) term of Eq. (1) for sufficiently small (large) \(L_{A}^{3}/L\). **d.** After rescaling \(\text{Var}(O)\), \(L\), \(L_{A}\) all data collapse to the universal curve described by Eq. (1). All theoretical predictions are presented by dashed lines.
berg uncertainty, this particle's position fluctuates with time, leading to the temporal fluctuation \(\Phi(t)\) of the pairing amplitude. In the simplest case, the particle hops virtually from a site \(i\) to \(j\) (in A as well) and back to \(i\). Since the entangled pair is a correlation effect, \(\Phi(t){\sim}\sum_{ij}(C_{1}(\mathbf{\omega}t))_{ij}(C_{1}(\mathbf{\omega}t))_{ji}\) and thus by Eq. (4) \(\Phi(t){\sim}\frac{1}{L^{2}}\sum_{ij}\sum_{mn}e^{i(k_{m}-k_{n})(i-j)}e^{i(\omega _{m}+\omega_{n})t/2}\), where \(k_{m}\), \(\frac{\omega_{n}}{2}\) are respectively the Bloch momentum and the particle eigenenergy associated with the hopping \(i{\rightarrow}j\), and \(k_{n}\), \(\frac{\omega_{n}}{2}\) with \(j{\rightarrow}i\). The variance of a generic entanglement probe is given by
\[\int dt|\Phi(t)|^{2}\sim\frac{1}{L^{4}}\sum_{ij^{\prime}i^{\prime} j^{\prime}}\sum_{mm^{\prime}n^{\prime}}\delta_{\omega_{m}+\omega_{n},\omega_{m^ {\prime}}+\omega_{m^{\prime}}}\] \[\times e^{i((k_{m}-k_{n})(i-j)-(k_{m^{\prime}}-k_{n^{\prime}})(i^ {\prime}-j^{\prime}))}, \tag{13}\]
where \((k_{m}-k_{n})(i-j)\) and \((k_{m^{\prime}}-k_{n^{\prime}})(i^{\prime}-j^{\prime})\) are the phases of the paths: \(i{\rightarrow}j{\rightarrow}i\) and \(i^{\prime}{\rightarrow}j^{\prime}{\rightarrow}i^{\prime}\), respectively. Because \(\omega\)'s are incommensurate, we obtain \((m,n){=}(m^{\prime},n^{\prime})\) or \((n^{\prime},m^{\prime})\). So the first sum is dominated by those terms with two phases being identical. As a result, \(\int dt|\Phi(t)|^{2}\sim L_{A}^{3}/L^{2}\), with the numerator (denominator) given by the first (second) sum: This is the second term in Eq. (1). We see that it arises from the constructive interference between the two hopping paths (Fig. 4).
Our theory essentially hinges upon the relation between the wavefunction evolution and the trajectory \(\mathbf{\varphi}=\mathbf{\omega}t\) on a high-dimensional torus, and the information-theoretic observable as a function on that torus. So it can be extended to more general contexts. First, it applies to truly disordered systems and to other characteristics of entanglement, e.g., the multipartite entanglement entropy and the largest eigenvalue of the entanglement Hamiltonian. Second, when the initial state is not a Slater determinant state or has spontaneous symmetry breaking, or when the interaction is present, the evolving state is not Gaussian. In this case, Eq. (3) does not apply. However, it is conceivable that using the reduced density matrix one can still establish the statistical equivalence between the fluctuations with time and with disordered samples, and study the fluctuation statistics by the same token. Finally, because each virtual disordered sample corresponds to a pure state, our work suggests a simple way of producing a random pure-state ensemble to which great experimental efforts [45] are made. That is, we evolve an initial pure state by a single Hamiltonian and collect states at distinct sufficiently long time.
We thank Italo Guarneri and Xin Wan for discussions, and Jean-Claude Garreau and Azriel Z. Genack for comments on the manuscript. This work is supported by NSFC projects no. 11925507, 12047503 and 11974308.
|
2302.03556 | Is the horizon of an eternal black hole really smooth? | We point out that in many eternal black holes, including a Schwarzschild
eternal black hole and an eternal black hole in $AdS_5$, instant folded strings
are created in the past wedge and render the region just outside the horizon
singular. We also make a conjecture regarding instant folded D-branes and
discuss their possible implications for eternal black holes. In particular, we
argue that the bulk modes responsible for Poincare recurrence, when it occurs
in the dual quantum field theory, are either instant folded strings or instant
folded D-branes. | Nissan Itzhaki | 2023-02-07T16:07:34Z | http://arxiv.org/abs/2302.03556v2 | ###### Abstract
###### Abstract
We point out that in many eternal black holes, including a Schwarzschild eternal black hole and an eternal black hole in \(AdS_{5}\), instant folded strings are created in the past wedge and render the region just outside the horizon singular. We also make a conjecture regarding instant folded D-branes and discuss their possible implications for eternal black holes. In particular, we argue that the bulk modes responsible for Poincare recurrence, when it occurs in the dual quantum field theory, are either instant folded strings or instant folded D-branes.
**Is the horizon of an eternal black hole really smooth?**
_Nissan Itzhaki_
School of Physics and Astronomy, Tel Aviv University
Ramat Aviv 69978, Israel
_and_
School of Natural Sciences, Institute for Advanced Study
1 Einstein Drive, Princeton, NJ 08540 USA
_E-mail:[email protected]
## 1 Introduction
The fact that the horizon of a large Eternal Black Hole (EBH) appears to be regular at the quantum level [1] plays a key role also in modern debates about the BH information puzzle. In particular, in the context of the AdS/CFT correspondence, it was pointed out in [2] that the late time behavior of the two-point function provides a neat realization of the BH information puzzle. On the one hand, the smoothness of the horizon associated with an EBH in \(AdS_{5}\) suggests that the two point function decays forever. On the other hand, the discrete spectrum of the CFT (when considering on \(S^{3}\times R_{t}\)) implies that the two point function cannot decay forever and that Poincare recurrence takes place.
It was proposed in [2] that a subleading saddle on the gravity side could resolve this puzzle. This proposal was challenged in [3] where it was shown that subleading saddles are not sufficient to explain the expected time dependence of the Poincare recurrences - a stringy structure just outside the horizon of the EBH in \(AdS_{5}\) is required. So far no evidence of such a structure has been found. Here we attempt to fill this gap and describe the elusive stringy structure foreseen by the authors of [3].
The basic idea
The near horizon region of a large EBH is well described by
\[ds^{2}=-dudv, \tag{2.1}\]
where, for the time being, we ignore the angular directions and \(v=t+x,\ u=t-x\). This is a two dimensional Minkowski space in which nothing special happens at the horizons, \(u=0\) and \(v=0\). Since, for a large EBH, corrections to (2.1) are small it is natural to suspect that they cannot affect much the near horizon physics. Nonetheless, we wish to argue, that there are corrections to (2.1), that at first glance seem to be harmless, but in fact render the EBH horizon singular.
The simplest correction of this type is when the dilaton, \(\Phi\), is not constant and near the horizon it takes the form
\[\Phi=\Phi_{0}-\epsilon uv, \tag{2.2}\]
with a positive \(\epsilon\). Such a dilaton profile is quite common in string theory. For example, due to \(\alpha^{\prime}\) corrections this is the case also in eternal Schwarzschild BH [4, 5], and in EBH in \(AdS_{5}\)[6]1 (see also [7]).
Footnote 1: I thank J. Maldacena for reminding me of this paper.
Despite the fact that for a large EBH \(\epsilon\ll 1\) we attempt to claim now that no matter how small \(\epsilon\) is, as long as it is positive, it affects dramatically the horizon. We begin with the following observation. In the past wedge, \(u,v<0\), the dilaton gradient, \(\partial_{\mu}\Phi\), is time-like and points towards the future, and in the future wedge it points towards the past. Similarly, in the right wedge \(\partial_{\mu}\Phi\) is space-like and points to the right, and in the left wedge it points to the left (see Fig. 1). The dilaton gradient is small when \(\epsilon\ll 1\) and it is hard to imagine that this trivial observation can render the horizon singular.
The point is that a small dilaton gradient can trigger effects that simply do not exist in its absence. Since this is the key point we discuss it in detail. We start with the simplest case of a space-like linear dilaton direction with an extra time direction
\[ds^{2}=-dt^{2}+dx^{2},\ \ \ \ \Phi=Qx, \tag{2.3}\]
for which there is an exact CFT description. The dilaton gradient, \(Q\), does not affect the equations of motion, but it does modify the Virasoro constraints in an interesting way - it adds a linear term
\[-(\partial_{\pm}t)^{2}+(\partial_{\pm}x)^{2}-Q\partial_{\pm}^{2}x=0. \tag{2.4}\]
The dilaton gradient term, \(Q\partial_{\pm}^{2}x\), is subleading in the \(\alpha^{{}^{\prime}}\) expansion (we work with \(\alpha^{{}^{\prime}}=1\)) compared to the standard term \((\partial_{\pm}x)^{2}\), but since it is linear in \(x\) it can dominate the constraints and introduce novel features that are simply absent when \(Q=0\).
In the case of (2.3) the new feature is a long folded string [8]
\[t=t_{0}+\tau,\ \ \ \ x=x_{0}-Q\log\left(\frac{1}{2}(\cosh(\tau/Q)+\cosh(\sigma/ Q))\right), \tag{2.5}\]
that does not exist when \(Q=0\).2\(\tau\) and \(\sigma\) are the world sheet coordinates with a range \(-\infty<\tau,\sigma<\infty\). The solution describes a string that is stretched from weak coupling, \(x=-\infty\), to a finite value of \(x\) where it folds (at \(\sigma=0\))
Footnote 2: The 2D ‘yo yo’ solution of [9, 10] does not satisfy the Virasoro constraints at the fold (see [11] for recent discussion). As a result its 2D YM realizations involve new degrees of freedom at the fold. A recent example, in the zig-zag model [12], are the adjoint quarks. For earlier discussion see _e.g._[13].
\[x_{fold}(t)=x_{0}-Q\log\left(\frac{1}{2}(1+\cosh\left(\frac{t-t_{0}}{Q}\right) \right), \tag{2.6}\]
and stretched back to weak coupling. \(x_{fold}(t)\) is following a time-like trajectory, that is approaching null trajectories at \(t\to\pm\infty\). The null trajectory is right (left) moving for \(t<(>)0\). The time it takes \(x_{fold}(t)\) to turn is of the order of \(Q\). It is during this time that the dilaton gradient term dominates the standard term in the Virasoro constraints.
For \(0<Q\ll 1\), that is relevant when \(\epsilon\ll 1\), the target-space energy-momentum tensor associated with the long folded string solution takes a particularly simple form that reveals its properties:
\[T_{uv}=\frac{1}{2\pi}\Theta(v-v_{0})\Theta(u-u_{0}), \tag{2.7}\]
is due to the tension in the bulk of the folded string. Where \(\Theta\) is the step function and \(v_{0}=t_{0}+x_{0},\ u_{0}=t_{0}-x_{0}\). At the fold there is a null flux
\[T_{uu}=\frac{v_{0}-v}{2\pi}\Theta(v_{0}-v)\delta(u-u_{0}),\ \ \ \ T_{vv}=\frac{u-u_{0}}{2\pi}\Theta(u-u_{0})\delta(v_{0}-v), \tag{2.8}\]
which implies that for \(t<t_{0}\) the null momentum at the fold \(P^{v}=(v_{0}-v)/2\pi\) is positive and decreases with time due to the expansion of the folded string until it vanishes at the turning point, \(t=t_{0}\). For \(t>t_{0}\) the null momentum at the fold \(P^{u}=(u-u_{0})/2\pi\) is positive and increases with time as the string shrinks.
The background (2.3) is a good approximation to (2.2), when expanding around any point in the left and right wedges, with \(Q>(<)0\) in the right (left) wedge. Thus in the left (right) wedge strings can fold to the right (left).
Consider a folded string in the right wedge that turns at \(t_{0}=0\) and \(x_{0}>0\) (see figure 2(a)). The time scale associated with the turning of \(x_{fold}(t)\) is short - it scales like \(Q=\epsilon x_{0}\). In particular, it is much shorter than the scale set by the second derivative of the dilaton and the curvature, \(1/\epsilon\). This means that for \(t<-Q\) and for \(t>Q\) a good approximation to \(x_{fold}(t)\) is a null trajectory. For \(t<-Q\) the null trajectory is right moving and for \(t>Q\) it is left moving. Therefore, without knowing the exact folded string solutions in this background, we can tell that in the right wedge there is a folded string solution that is well approximated by figure 2(a). In the left wedge there is a mirror solution.
Even without knowing the extension of the folded string in figure 2(a) to the other wedges it is clear that its energy is at least of the order of \(x_{0}/\pi\). To trust the classical solution we need \(x_{0}\gg 1\) which means that \(E\gg 1\) and that these strings are irrelevant at the IR. In particular, they cannot render the horizon singular.
The dilaton gradient triggers more extreme effects when it is time-like and points to the future. Again, it is instructive to consider the constant dilaton gradient case first
\[ds^{2}=-dt^{2}+dx^{2},\quad\Phi=Qt, \tag{2.9}\]
with \(Q>0\). Now the Virasoro constraint are
\[-(\partial_{\pm}t)^{2}+(\partial_{\pm}x)^{2}-Q\partial_{\pm}^{2}t=0. \tag{2.10}\]
Figure 1: The string fold topology for \(\epsilon>0\). The blue arrows represent the dilaton gradient and the red curved arrow the possible string fold direction. The string folds point in all wedges towards the horizon.
and the dilaton slope term, \(Q\partial_{\pm}^{2}t\), triggers the creation of an Instant Folded String (IFS) that are described by [14]
\[x=x_{0}+\sigma,\quad t=t_{0}+Q\log\left(\frac{1}{2}(\cosh(\tau/Q)+\cosh(\sigma/ Q))\right). \tag{2.11}\]
While technically it appears similar to the folded string solution (2.5) the physical process it describes is quite different. What (2.11) describes is a closed folded string that is created _classically_ at size zero, at \(x=x_{0}\) and \(t=t_{0}\), and is expanding rapidly. The fold, located at \(\tau=0\), is following a space-like trajectory
\[t=t_{0}+Q\log\left(\frac{1}{2}(1+\cosh\left(\frac{x-x_{0}}{Q}\right)\right), \tag{2.12}\]
that very quickly, at time scales of the order of \(Q\), asymptotes to a null trajectory.
IFSs are more extreme than the long folded strings since their energy vanish, hence they can modify the IR physics dramatically. The energy of an IFS must vanish since it did not exist before \(t_{0}\) and since, from a fundamental string point of view, the background (2.9) is time translation invariant. The way the IFS's energy vanishes is interesting. As in the space-like case, the energy density in the bulk of the IFS is positive due to the string tension. This positive energy is canceled against negative energy at the fold [11]. As the IFS expands the energy at the folds decreases in such
Figure 2: A folded string in the right wedge (a) and in the past wedge (b). Both are consistent with the fold topology depicted in figure 1 and in both the folds asymptote to null trajectories in a short time that scales like \(Q\). The null momenta at the folds, marked with black arrows, is pointing to the future (past) in the space (time) like case.
a way that the total energy remains zero. This is reflected in the energy-momentum tensor which, again, in the limit \(Q\ll 1\), takes a particularly simple form
\[T_{uv} = \frac{1}{2\pi}\Theta(v-v_{0})\Theta(u-u_{0}), \tag{2.13}\] \[T_{uu} = \frac{v_{0}-v}{2\pi}\Theta(v-v_{0})\delta(u-u_{0}),\quad T_{vv}= \frac{u_{0}-u}{2\pi}\Theta(u-u_{0})\delta(v-v_{0}).\]
Just like in the space-like case, \(T_{uv}\) describes the positive tension of the folded string. The difference is that now the null momenta at the folds \(P^{v}=(v_{0}-v)/2\pi\) and \(P^{u}=(u_{0}-u)/2\pi\) are negative and decrease with time due to the expansion of the folded string.
The background (2.10) is a good approximation to the background (2.2) when expanding around any point in the past and future wedges, with \(Q>0\) in the past wedge and \(Q<0\) in the future wedge. This means that in the past (future) wedge string can fold to the future (past). Consider an IFS that is created in the past wedge at \(x_{0}=0\) and \(t_{0}<0\). The time it takes \(x_{fold}\) to approach a null trajectory is short - it scales like \(Q=-\epsilon t_{0}\). Hence, just like in the space-like case, without knowing the exact folded string solution we conclude that in the past wedge there is an IFS that looks like in figure 2(b). In the future wedge there is a time-reversal solution - a closed folded strings that shrinks and disappear at \(x_{0}=0\) and \(t_{0}>0\).
To leading order in \(\epsilon\) the energy associated with an IFS that is created in the past wedge vanishes. The energy is not identically zero since unlike (2.3), the background (2.2) is not invariant under time translation. Hence we expect the exact IFS solution to acquire with time a small energy that scales like \(\epsilon\). That is, like in (2.3), the IFS is created at zero size with vanishing energy, only that now its energy grows with time \(E\sim\epsilon(t-t_{0})\).
So far we discussed the possible folded string solutions in each wedge separately. Now we describe solutions that are valid in all wedges. A natural guess is the configuration in figure 3(a) which is the most symmetric configuration consistent with the fold topology. There are, however, simple worldsheet and target space arguments that show that this configuration is inconsistent. The worldsheet argument3 is that it has a topology of a sphere while we work in Lorentzian signature. Any attempt to reconcile these facts will generate a singularity. The target-space problem is that it violates energy-momentum conservation. To see this recall that in the past wedge the null momenta at the folds are negative (they point to the past). When the folds cross the horizons and enter the left and right wedges the string is still growing which means,
by energy conservation, that the null momenta have to become even more negative. In particular, they cannot vanish which, as discussed above, is necessary for the folds to turn sharply and form the symmetric configuration of figure 3(a). Classically its only option is to continue growing as described in figure 3(b).
An IFS is not going to expand indefinitely when interactions are taken into account. To estimate the IFS lifetime we follow the semi-classical approach of [15], which for a large spinning folded string agrees with the exact CFT calculations [16, 15]. In this approach, away from the fold, the folded string is viewed as two open strings on top of each other. For the folded string to split the two open strings should split, with a rate like in [17], a stringy distance from each other. This implies that the lifetime and maximal size of an IFS is of the order of \(1/g\).4
Footnote 4: In the case of the infinite long folded string of [8] the string coupling vanishes exponentially fast at infinity. As a result this estimate implies a bound on how close to the strong coupling region the folded string can get. This could have implications for [18, 19].
The details of the IFS decay should be interesting to explore since, as discussed below, they could provide a microscopic description of Hawking radiation. We, however, are in no position to do so since the starting point of such a study is the exact IFS solution which we do not know. Fortunately, for the main point here, it is sufficient to consider the IFS in its minimal form - an approximate triangle of size of order \(1/g\) (see figure 4(a)). A more detailed description of IFSs, that goes beyond their minimal form, can only increase their effect. Hence the discussion below is a lower bound on
Figure 3: (a) A natural guess for the folded string configuration that is consistent with the fold topology. The null momenta at the folds imply that this guess is not consistent with energy conservation. (b) The consistent configuration in which the null momenta at the folds always point to the past.
the effect IFSs can have on the EBH.
The maximal distance from the horizon an IFS, in its minimal form, can reach is of the order of \(1/g\) (see figure 4(a)). Still since the production rate of IFSs scales like \(Q^{2}\)[20], naively their effect is negligible for a large EBH (with \(\epsilon\ll 1\)). However, the EBH is eternal and even a tiny production rate can, in principle, generate an infinite effect. This is the case with IFSs since they expand basically at the speed of light. Concretely, since the EBH is boost invariant there are infinitely many IFS configurations that are related to the one in figure 4(a) by a boost. Hence for any positive \(\epsilon\), an observer that attempts to cross the EBH horizon will encounter an infinite number of IFSs (see figure 4(b)). This is the sense in which the EBH horizon is singular.
The conclusion that the horizon is singular is robust and is not sensitive to the initial condition. A natural initial condition, in the spirit of the nice-slice argument [21], is that there are no IFS at some invariant distance, \(\rho\gg 1\), from the past singularity. Such an initial condition is natural since it does not break the boost invariance of the EBH, and this slice is nice since both the curvature and string coupling are small for \(\rho\gg 1\). Since the IFSs that are relevant to the discussion above are created close to the
Figure 4: When interaction are taken into account the size and lifetime of an IFS is of order \(1/g\). (a) The IFS (in its minimal form) that penetrates the deepest into the right wedge. (b) Just before crossing the horizon an infalling observer, represented by the red arrow, crosses an infinite amount of IFS that are related to the one in (a) by a boost.
horizon this initial condition does not affect the mechanism discussed above. Moreover, since IFSs are created at the classical level their production rate scales also like \(1/g_{s}^{2}\)[20]. Therefore, this mechanism is in fact classical and interactions among IFSs are strong.
In summary the picture that seems to emerge is that everywhere in the past wedge IFSs are created to form an IFS condensate. IFSs that are created near the horizon manage to penetrate a bit into the left and right wedges. The IFS condensate is hot and radiates what to an observer at infinity looks like Hawking radiation. In \(AdS\) this radiation naturally falls back to the IFS condensate. The analog of the Hartle-Hawking state [22] is such that the condensate inside the future wedge is dominated by strings that look like the time-reversal of IFS - closed folded strings that shrinks and disappear at an instant. See figure 5.
## 3 Some issues
There are some issues with this mechanism that we would like to raise. An obvious issue is the reliance on IFSs. IFSs are non-standard stringy excitations - they are created classically in an instant, and they violate the averaged null energy condition - and therefore arguably not part of string theory.
Figure 5: The proposed Penrose diagram associated with EBH in \(AdS_{5}\). The past and future wedges as well as the region just outside the horizon are replaced by an IFS condensate. In the past (future) wedge the IFS, that are represented by the triangles, fold towards the future (past). The condensate emits radiation that bounces back from the boundary.
For several reasons we think that, as strange as they might appear, IFSs are an integral part of string theory. First, as a classical solution to the worldsheet equation of motion and Virasoro constraints they are as good as any other classical solution. Second, in the simplest setup of time-like linear dilaton they have an exact worldsheet description [20] which can be used to calculate their interactions with standard stringy modes and among themselves. A quantity that was actually calculated in [20], and is used here, is their production rate. Third, due to the large amount of symmetry associated with the \(SL(2)/U(1)\) EBH, the \((1,1)\) operator associated with the exponentially small tail of an IFS far from the horizon was identified in [23, 24] - its profile exactly matches semi-classical expectations.
Other, more subtle, issues:
**1.** The mechanism is based on the fact that a dilaton gradient induces a term in the Virasoro constraints
\[\partial_{\mu}\Phi\partial_{\pm}^{2}x^{\mu}, \tag{3.1}\]
that is linear in \(x^{\mu}\). Therefore, despite being subleading in the \(\alpha^{\prime}\) expansion it can dominate - even if only for a short while - the leading terms in the Virasoro constraints, \(g_{\mu\nu}\partial_{\pm}x^{\mu}\partial_{\pm}x^{\nu}\).
At higher orders in \(\alpha^{\prime}\) other linear terms in the Virasoro constraint can appear, and they can compete with (3.1). The leading curvature terms that are linear in \(x^{\mu}\), \(\partial_{\mu}R\partial_{\pm}^{2}x^{\mu}\) and \(\nabla_{\nu}R_{\mu}^{\nu}\partial_{\pm}^{2}x^{\mu}\), vanish in the Schwarzschild EBH background. The leading term that does not vanish is
\[\nabla_{\mu}R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta}\partial_{ \pm}^{2}x^{\mu}. \tag{3.2}\]
In type II a dilaton gradient is generated only at order \((\alpha^{\prime})^{3}\)[25, 5]. Hence (3.2) can dominate (3.1). This is related to the following issue.
**2.** In the case of EBH in \(AdS_{5}\) it is somewhat surprising that what renders the horizon singular are fundamental strings, that are created only due to a term generated by \(\alpha^{\prime}\) corrections. Since \(AdS_{5}\) is made of D3-branes it is more natural for D3-branes to play the crucial role. In section 5 we argue, subject to a conjecture made in the next section, that, in fact, this is the case.
**3.** It also seems strange that the effect depends so dramatically on the sign of \(\epsilon\). Clearly, there are EBHs in string theory with \(\epsilon<0\). The effect of IFSs on such EBHs is much less dramatic since they are created only in the future wedge, and an infalling observer will encounter only a finite number of them and only after crossing the horizon.5 Moreover, IFSs that are created classically in the future wedge cannot induce
Poincare recurrence since they cannot render the spectrum of fluctuations outside the BH discrete.
In the next section we consider a setup designed to address point 1 above. This setup also suggests a conjecture which, if correct, addresses points 2 and 3.
## 4 A variant of [2]
In this section we consider a variant of [2]: the thermofield double state associated with two large \(N\) two dimensional SYM theories with 16 super-charges. To have a discrete spectrum and Poincare recurrence we compactify the special direction, \(x\sim x+2\pi R\), with a large \(R\) (compared to the scales discussed below). We start with a review of the conjectured phases of this theory [26], before discussing aspects of the thermofield double state.
### A review of large \(N\) SYM in 2D
The theory is super-renormalizable as is reflected by the fact that the 't Hooft coupling, \(\lambda=g_{YM}^{2}N\), has dimension two. Consequently, the theory is free at the UV, and the effective, dimensionless coupling constant \(\lambda_{eff}=\frac{\lambda}{E^{2}}\), is of order 1 at energies of the order of \(\sqrt{\lambda}\).
For energies much smaller than \(\sqrt{\lambda}\) the system is best described by string theory in the near horizon geometry of \(N\) D1-branes
\[ds^{2} = \frac{U^{3}}{\sqrt{\lambda}}(-dt^{2}+dx^{2})+\frac{\sqrt{\lambda} }{U^{3}}dU^{2}+\frac{\sqrt{\lambda}}{U}d\Omega_{6}^{2}, \tag{4.1}\] \[e^{\Phi} = \frac{1}{N}\frac{\lambda^{3/2}}{U^{3}},\]
where as in [27], \(U=r/\alpha^{\prime}\), is the energy scale associated with the radial direction \(r\), and we neglected factors of order 1. As usual [28] the string coupling scales like \(1/N\) and is small in the large \(N\) limit. The curvature (in string units) scales like \(R\sim\frac{U}{\sqrt{\lambda}}\sim\frac{1}{\lambda_{eff}^{1/4}}\) and so when the perturbative description breaks down the SUGRA description takes over.
As we go further to the IR the string coupling constant becomes large and for \(U<\sqrt{\lambda}/N^{1/3}\) the system is described via the S-dual background associated with the near horizon limit of \(N\) fundamental string
\[ds^{2} = N\left(\frac{U^{6}}{\lambda^{2}}(-dt^{2}+dx^{2})+\frac{1}{ \lambda}dU^{2}+\frac{U^{2}}{\lambda}d\Omega_{6}^{2}\right), \tag{4.2}\] \[e^{-\Phi} = \frac{1}{N}\frac{\lambda^{3/2}}{U^{3}}.\]
The curvature associated with this background scales like \(\frac{\lambda}{NU^{2}}\) and so eventually in the deep IR, \(U<\sqrt{\lambda}/N^{1/2}=g_{YM}\) this description breaks down and the system is best described via the orbifold \((R^{8})^{N}/S_{N}\) conformal field theory [29, 30], also known as matrix strings theory, associated with the motion of the \(N\) fundamental strings in the transverse space. The various effective descriptions are summarized in the figure 6.
### The thermofield double state
Consider the thermofield double state with an inverse temperature, \(\beta\), that entangles two 2D SYM theories
\[|TFD\rangle(\beta)=\frac{1}{\sqrt{Z(\beta)}}\sum_{n}e^{-\beta E_{n}/2}|n \rangle_{L}\times|n\rangle_{R}, \tag{4.3}\]
where as usual \(|n\rangle_{L,R}\) are the energy eigenstates of the individual theories.
The discussion above implies that the best description of \(|TFD\rangle(\beta)\) depends on \(\beta\). In the UV, \(\beta\ll 1/\sqrt{\lambda}\), the natural description is in terms of the perturbative degrees of freedom of SYM. In the IR, \(\beta\gg\sqrt{N/\lambda}\), the natural description is in terms of the perturbative degrees of freedom of the matrix strings theory [29, 30] - the diagonal elements of the 8 \(SU(N)\) matrices, that correspond to the motion of the \(N\) strings, and their super partners. Since \(x\) is compactified in both cases the spectrum is discrete, the two point function does not decay forever, and Poincare recurrence takes place.
The question is what happens in the intermediate region
\[1\gg\beta\sqrt{\lambda}\gg\sqrt{N}, \tag{4.4}\]
Figure 6: The conjectured phases of large N SYM in 2D.
where, at least to leading approximation, the thermofield double state is described in terms of an EBH [31, 2]?
In the range
\[N^{1/3}\gg\beta\sqrt{\lambda}\gg\sqrt{N}, \tag{4.5}\]
the relevant EBH is the one associated with \(N\) near extremal fundamental strings (the relevant Penrose diagram is depicted in figure 7(a)) and in the range
\[1\gg\beta\sqrt{\lambda}\gg N^{1/3}, \tag{4.6}\]
it is the one associated with \(N\) near extremal D1-branes (the relevant Penrose diagram is depicted in figure 8(a)). At the SUGRA level both lead to the standard problem that since the horizon is smooth the spectrum of excitations in its vicinity is continuous [32], which implies that the two point function decays forever.
In string theory the situation is more interesting. The dilaton in the near horizon region of the EBH associated with \(N\) near extremal fundamental strings is described by (2.2) with a small and positive \(\epsilon\). The discussion in section 2 suggests that the horizon in this case is not smooth but filled with IFSs, which implies that the spectrum of fluctuations in its vicinity is discrete. This fits neatly with [3] that argue that a stringy structure just outside the horizon is needed to explain the expected time dependence of the Poincare recurrences. It appears that, at least in the case of the EBH associated with near extremal fundamental strings, the stringy structure anticipated in [3] is the IFS condensate.
Note that, unlike EBH in \(AdS_{5}\), now \(\epsilon\) is non vanishing at the SUGRA level. Therefore, the IFS production trigger, \(\partial_{\mu}\Phi\), is the leading linear term in the Virasoro constraints and other possible linear terms, discussed in section 3, that can appear at higher orders in \(\alpha^{\prime}\), are negligible. This also addresses the second issue in section 3: now the background is made of fundamental strings and it is natural that IFSs are the ones that render the horizon singular.
What happens in the D1-branes range (4.6)? In this case \(\epsilon<0\) and IFS are created only in the future wedge. Such IFSs can modify the BH interior considerably, but they cannot turn the spectrum of fluctuations just outside the horizon discrete.
A concrete example with \(\epsilon<0\) that illustrates this is the EBH associated with \(k\) near extremal NS5-branes [33]. This background has a coset CFT description [34, 35, 36, 37] which was used to calculate the exact reflection coefficient, including all perturbative and non-perturbative \(\alpha^{\prime}\) corrections, on the sphere [38] (see also [39]). Since the production rate of IFSs scales like \(1/g^{2}\)[20] they are expected to leave their mark on this calculation. In fact, the screening operator used in [39] to perform this
calculation is the operator that describes the IFS in this background [23, 24]. The exact reflection coefficient differs from the SUGRA result only by a phase which implies that IFSs affect very little the region outside the horizon. In particular, the spectrum of fluctuations outside the horizon remains continuous, as is clear from the fact the reflection coefficient decays exponentially fast for energies larger than \(1/\sqrt{k}\). The dependence of this phase on the energy is highly non-trivial [40], and it was argued in [41, 42] that this implies that the region beyond the horizon is not smooth. This fits well with the fact that IFSs are classically created only behind the horizon [14].
Even if we put aside field theory considerations, from a pure bulk perspective it seems highly peculiar that in a certain temperature range the spectrum of fluctuations near the horizon is continuous and in a nearby temperature range it is discrete. A way to evade this peculiarity is to argue that since the EBH associated with \(N\) near extremal D1-branes is S-dual to the EBH associated with \(N\) near extremal fundamental strings its horizon is not smooth either, this time due to creation in the past wedge of Instant Folded D1-branes (IFD1-branes) - the naive S-dual of IFSs. Since IFSs are not BPS it is not clear how they transform under S-duality and the existence of an
Figure 7: (a) The standard Penrose diagram associated with the eternal near extremal fundamental strings. The purple regions have a large curvature and are best described by perturbative SYM. The dashed lines represent the EBH singularities. The green regions are described by the near extremal D1-branes background and the yellow by the near extremal F1 background. Both the curvature and string coupling are small at the horizons. (b) Since \(\epsilon\) is positive the region just outside the horizon as well as the past and future wedges are replaced by an IFS condensate (marked in red). The white triangles indicate the dominant shape of the IFSs that form the IFS condensate.
IFD1-branes is a conjecture.6 The precise conjecture is:
Footnote 6: We hope that this conjecture is provable. Linear dilaton CFT admits several D-branes that are absent when the dilaton slope vanishes [43, 44, 45]. Not all of them are easily described by the DBI action. The space like version of this conjecture is that on top of these in the background (2.3) there are long folded D1-branes - the S-dual of (2.5) with a shape roughly described by (2.5), with \(Q\rightarrow-Q\) and \(x\rightarrow-x\).
_A time-like dilaton gradient that points to the past triggers the creation of IFD1-branes with a shape roughly described by (2.11), with \(Q\rightarrow-Q\) and \(t\rightarrow-t\)._
The expected lifetime of an IFD1-brane is of the order of the string scale. The reason is that away from the fold an IFD1-brane looks like a D1-brane on top of an anti D1-brane - a system which admits an open string tachyon with \(m^{2}\sim-1\)[46]. We do not know the production rate of IFD1-branes but it is reasonable to suspect that it is finite, in which case the eternity of the eternal near extremal D1-branes background suggests that they should render the near horizon spectrum discrete.
Figure 8: (a) The standard Penrose diagram associated with eternal near extremal D1-branes. The purple regions have a large curvature and are best described by perturbative SYM. The dashed lines represent the EBH singularities. The green regions are described by the near extremal D1-branes background. Both the curvature and string coupling are small at the horizons. (b) As \(\epsilon\) is negative we conjectured that the region just outside the horizon as well as the past and future wedges are replaced by an IFD1-brane condensate (marked in red). The white triangles indicate the dominant shape of the IFD1-branes that form the IFD1-brane condensate.
Back to Schwarzschild and \(AdS_{5}\)
In the previous section we conjectured that a time-like dilaton gradient that points to the past triggers the creation of IFD1-branes. In this section we assume the conjecture is correct and apply T-duality to find triggers for the creation of other instant folded D-branes. We discuss possible implications to the Schwarzschild EBH and the EBH in \(AdS_{5}\).
Consider a setup in which \(\partial_{t}\Phi(t)<0\) and one of the directions is compactified with a radius that depends on \(t\), \(y\sim y+2\pi R(t)\). Since the creation of an IFD1-brane, that is extended in \(t\) and some other direction \(x\), is a local process it is not expected to be sensitive to the fact that \(y\) is compactified. The IFD1-branes evolution, however, is sensitive to \(R(t)\), especially when \(R<1\). To see this we recall that for \(R<1\) there are tachyons, on top of the one discussed in the previous section, due to open strings that are stretched between the IFD1-brane and its images in the covering space. As we decrease \(R\) the number of these tachyonic modes grows, which implies that the smaller \(R\) is the shorter the lifetime of the IFD1-brane is.
When \(R(t)\ll 1\) it is natural to apply a time-dependent T-duality [47] which takes \(R(t)\rightarrow\tilde{R}(t)=1/R(t)\gg 1\) and the IFD1-brane to an IFD2-brane that wraps \(\tilde{y}\). T-duality also changes the string coupling \(g_{s}=\tilde{g}_{s}/\tilde{R}\)[48, 49], which means that the trigger for the creation of the IFD2-brane is a time-like
\[\partial_{\mu}(\tilde{r}-\tilde{\Phi}), \tag{5.1}\]
that points to the future where \(\tilde{r}\) is the radion field, \(\tilde{r}=\log(\tilde{R})\).
This seems to be relevant for an eternal Schwarzschild BH in type IIA string theory. In the past wedge the \(S^{2}\) is growing with time and so there is a \(\tilde{r}\) such that \(\partial_{\mu}\tilde{r}\) is time-like and points to the future. To show this explicitly we write the background in the familiar form
\[ds^{2}=-\left(1-\frac{2M}{r}\right)dt^{2}+\frac{dr^{2}}{(1-\frac{2M}{r})}+r^{ 2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}),\ \ \ \ \Phi=\Phi_{0}. \tag{5.2}\]
The radion associated with the \(\phi\) direction, \(\tilde{r}=\log(r\sin\theta)\), is time-like and points to the future in the past wedge when
\[r<2M\sin^{2}\theta. \tag{5.3}\]
Therefore, it triggers the creation of IFD2-branes which wrap \(\phi\) and have a similar shape to the IFS in figure 4(a) in the \(u,v\) plane. Note that the classical background (5.2), with no \(\alpha^{\prime}\) corrections, admits (5.1) that is time-like and points to the future.
The discussion above suggests that the larger the \(S^{1}\) is the shorter the lifetime of the IFD2-brane is. This combined with (5.3) implies that the lifetime of the IFD2-branes that dominate the near horizon dynamics is short. Again, the eternity of the eternal Schwarzschild BH guarantees that as long as this lifetime is finite the horizon is singular, in the sense discussed in section 2.
In case there is another \(S^{1}\) we can apply T-duality once more to find that the trigger for the creation of IFD3-branes (which wrap the two cycles) is a time-like
\[\partial_{\mu}(\tilde{r}_{1}+\tilde{r}_{2}-\tilde{\Phi}), \tag{5.4}\]
that points to the future where \(\tilde{r}_{i}\) are the radion fields, \(\tilde{r}_{i}=\log(\tilde{R}_{i}),\ i=1,2\).
This appears to be relevant for EBH in \(AdS_{5}\). To end up with a discrete spectrum on the field theory side we can consider the theory on \(S^{3}\times R_{t}\) or on \(T^{3}\times R_{t}\). In both cases the dual EBH involves two cycles that are growing with time in the past wedge, and IFD3-branes that wrap these two cycles will render the horizon singular. Again the classical background of an EBH in \(AdS_{5}\), with no \(\alpha^{\prime}\) corrections, admits (5.4) that is time-like and points to the future. This means that the red region in figure 4 is more likely to represent an IFD3-branes condensate than IFS condensate.
More generally, the conjecture implies that IFSs are only the tip of the iceberg and that there are many objects in string theory that are created in an instant in time-dependent situations. These instant objects could play an important role also in cosmology, where like IFS [50], they are expected to induce negative pressure at no energy cost - this time when scalars other than the dilaton vary with time. They may even play a role in extreme situations in astrophysics.
## Acknowledgments
I thank A. Hashimoto and E. Witten for helpful discussions. I also thank Y. Zigdon for pointing out a typo in (2.6) and (2.12). Work supported in part by the ISF (grant number 256/22), BSF (grant number 2018068) and by the Adler Family Fund.
|
2301.02318 | Effects of Spatiotemporal Upscaling on Predictions of Reactive Transport
in Porous Media | The typical temporal resolution used in modern simulations significantly
exceeds characteristic time scales at which the system is driven. This is
especially so when systems are simulated over time-scales that are much longer
than the typical temporal scales of forcing factors. We investigate the impact
of space-time upscaling on reactive transport in porous media driven by
time-dependent boundary conditions whose characteristic time scale is much
smaller than that at which transport is studied or observed at the macroscopic
level. The focus is on transport of a reactive solute undergoing diffusion,
advection and heterogeneous reaction on the solid grain boundaries. We first
introduce the concept of spatiotemporal upscaling in the context of
homogenization by multiple-scale expansions, and demonstrate the impact of
time-dependent forcings and boundary conditions on macroscopic reactive
transport. We then derive the macroscopic equation as well as the corresponding
applicability conditions based on the order of magnitude of the P\'{e}clet and
Damk\"{o}hler dimensionless numbers. Finally, we demonstrate that the dynamics
at the continuum scale is strongly influenced by the interplay between signal
frequency at the boundary and transport processes at the pore level. | Farzaneh Rajabi | 2023-01-05T22:26:54Z | http://arxiv.org/abs/2301.02318v1 | # Effects of Spatiotemporal Upscaling on Predictions of Reactive Transport in Porous Media
###### Abstract
In this paper, we investigate the effect of spatiotemporal Upscaling on the temporal Upscaling of a network of
###### Abstract
The typical temporal resolution used in modern simulations significantly exceeds characteristic time scales at which the system is driven. This is especially so when systems are simulated over time-scales that are much longer than the typical temporal scales of forcing factors. We investigate the impact of space-time upscaling on reactive transport in porous media driven by time-dependent boundary conditions whose characteristic time scale is much smaller than that at which transport is studied or observed at the macroscopic level. The focus is on transport of a reactive solute undergoing diffusion, advection and heterogeneous reaction on the solid grains boundaries. We first introduce a concept of spatiotemporal upscaling in the context of homogenization by multiple-scale expansions, and demonstrate the impact of time-dependent forcings and boundary conditions on macroscopic reactive transport. We then derive the macroscopic equation as well as the corresponding applicability conditions based on the order of magnitude of the Peclet and Damkohler dimensionless numbers. Finally, we demonstrate that the dynamics at the continuum scale is strongly influenced by the interplay between signal frequency at the boundary and transport processes at the pore level.
## 1 Introduction
The choice of an appropriate level of hydrogeologic model complexity continues to be a challenge. That is because subsurface flow and transport take place in complex highly hierarchical heterogeneous environments, and exhibit nonlinear dynamics and often lack spatiotemporal scale separation [_Tartakovsky_, 2013]. The constant tension between fundamental understanding and predictive science on the one hand, and the need to provide science-informed engineering-based solutions to practitioners, on the other, is part of an ongoing debate on the role of hydrologic models [e.g. _Miller et al._, 2013]. A physics-based model development follows a bottom-up approach which, through rigorous upscaling techniques, allows one to construct effective medium representations of fine-scale processes with different degrees of coupling and complexity [e.g. _Wood and Valdes-Parada_, 2013; _Helming et al._, 2013]. Yet, current model deployment is generally based on established engineering practices and often relies on'simpler' classical local continuum descriptions with limited predictive capabilities.
The development of multiscale, multiphysics models aims at filling this scale gap and at addressing the limited applicability of classical local macroscopic models [_Auriault_, 1991; _Auriault and Adler_, 1995; _Mikelic et al._, 2006]. Originated in the physics literature, multiscale methods were developed to couple particle to continuum solvers [_Wadsworth and Erwin_, 1990; _Hadjiconstantinou and Patera_, 1997; _Abraham et al._, 1998; _Tiwari and Klar_, 1998; _Shenoy et al._, 1999; _Flekoy et al._, 2000; _Alexander et al._, 2002, 2005]. Multiphysics domain-decomposition approaches [_Peszynska et al._, 2002; _Arbogast et al._, 2007; _Ganis et al._, 2014], combined with multiscale concepts, led to the development of multiphysics, multiscale capabilities to address the multiscale nature of transport in the subsurface [_Tartakovsky et al._, 2008; _Mehmani et al._, 2012; _Roubinet and Tartakovsky_, 2013; _Bogers et al._, 2013; _Mehmani and Balhoff_, 2014; _Yousefzadeh_, 2020; _Taverniers and Tartakovsky_, 2017]. The proposed methods predominantly focus on tackling partial or total lack of scale separation due to spatial heterogeneity, and are often based on spatial upscaling to construct coupling conditions between representations at different scales.
Upscaling methods enable one to formally establish a link between fine-scale (e.g. pore-scale) and observation-scale/macroscopic processes. Spatial upscaling methods include volume averaging [e.g., _Wood et al._, 2003; _Wood_, 2009; _Whitaker_, 1999; _Wood and Valdes-Parada_, 2013] and thermodynamically constrained averaging theory [_Gray and Miller_, 2005, 2014], the method of moments [_Taylor_, 1953; _Brenner_, 1980; _Shapiro and Brenner_, 1988], homogenization via multiple-scale expansions [_Bensoussan et al._, 1978; _Hornung et al._, 1994; _Allaire et al._, 2010; _Hornung_, 2012, e.g.,], and pore network models [_Acharya
_et al._, 2005). _Cushman et al._ (2002) provides a review of different upscaling methods. Comparative studies discuss differences and similarities of various upscaling techniques (e.g., _David et al._, 2013). Other upscaling approaches are described in (_Brenner_, 1987).
Yet, the need for computationally efficient predictions of subsurface system response to unsteady, and potentially highly fluctuating, forcing factors calls for the formulation of spatiotemporally-upscaled models. The practical need of averaging in time (as well as in space) originates from the disparity in temporal scales between the frequency at which the system is driven and the temporal horizon in which predictions are made, e.g., local micro-climate (precipitation, etc.) and the temporal scale relevant for climate studies, or local human activity and CO\({}_{2}\) sequestration scenarios, which, will be referred to as 'long times' in the following. In an attempt to curb computational burden, this problem is often tackled by adopting larger time-stepping and by temporally averaging time-dependent boundary conditions or driving forces (_Beese and Wierenga_, 1980; _Wang et al._, 2009; _Yin et al._, 2015).
While standard in the theory of turbulence (_Taylor_, 1959; _Pope_, 2000), time-averaging of fine-scale models of flow in porous media and geologic formations has attracted less attention (_He and Sykes_, 1996; _Pavliotis and Kramer_, 2002; _Rajabi and Battiato_, 2015, 2017; _Rajabi_, 2021). Yet, the implications of temporally unresolved boundary conditions and driving forces in nonlinear subsurface systems appear to be dire: for example, _Wang et al._ (_Wang et al._, 2009) showed that predictions of nonlinear transport in the vadose zone are greatly affected by the time resolution of forcing factors (e.g. annual versus hourly meteorological data). In partially saturated flows, _Bresler and Dagan_ (1982) and _Russo et al._ (1989) found breakthrough curves under time-varying and time-averaged boundary conditions to be very different, with contaminant travelling faster and further in the former case. Similar highly dynamical conditions can be found in the subsurface interaction zone (SIZ) of riverine systems where environmental transitions often result in biogeochemical hotspots and moments that drive microbial activity and control organic carbon cycling (_Stegen et al._, 2016). To the best of our knowledge, with a few exceptions (_Beese and Wierenga_, 1980; _He and Sykes_, 1996; _Pavliotis and Kramer_, 2002; _Wang et al._, 2009), the effects of temporal averaging on macroscopic transport have not been thoroughly investigated. On the contrary, the impact of temporally fluctuating flows, boundary conditions and forcings in the context of upscaled transport in porous media has been the object of a number of studies. The seminal work by _Smith_ (1981, 1982) investigated the impact of dispersion in oscillatory flows and derived a spatially upscaled delay-diffusion equation which accounts for memory effects. The effect of periodic oscillations leads to dynamic effective dispersion and time-dependent closure problems as analyzed by a number of authors (e.g. _Moyne_, 1997; _Valdes-Parada and Alvarez Ramirez_, 2011; _Davit and Quintard_, 2012; _Valdes-Parada and Alvarez Ramirez_, 2012; _Dentz and Carrera_, 2003; _Pool et al._, 2014, 2015, 2016; _Nissan et al._, 2017). Other studies focused on spatial upscaling of transport in porous media with changing pore-scale geometry due, e.g., to precipitation/dissolution processes (_van Noorden et al._, 2010; _Kumar et al._, 2011, 2014; _Bringedal et al._, 2016). In the context of atmospheric and oceanic pollutant transport where large-scale mean flow interacts non-linearly with small-scale fluctuations, Pavliotis _et al._ (_Pavliotis_, 2002; _Pavliotis and Kramer_, 2002; _Pavliotis and Stuart_, 2008) use higher-order homogenization to derive a rigorous homogenized equation and screen the temporal distribution of macroscopic quantities over long times by selecting appropriate spatial-temporally invariant volumes of the domain over which space-time volume averaging is applied. _Fish and Chen_ (2004) presents a model for wave propagation in heterogeneous media by introducing multiple space-time scales with higher order homogenization theory to resolve stability and consistency issues.
Here, we are primarily interested in studying the effect of space-time averaging on the final form of the upscaled equations for long times (rather than early and/or pre-asymptotyc times (_Valdes-Parada and Alvarez Ramirez_, 2012)), i.e. when the influence on the initial condition has been forgotten, and their corresponding regimes of validity. This knowledge is important to assess the accuracy of, e.g., numerical models wherein the temporal numerical
resolution significantly exceeds characteristic scales at which the system is driven. Specifically, we focus on reactive transport in undeformable porous media driven by time-varying boundary conditions, whose frequency is much larger than the characteristic time scale at which transport is studied or observed at the macroscopic scale. Some of the questions we are interested in addressing are: under which conditions (e.g. signal frequency) the instantaneous macroscopic response of the system can be decoupled from temporally fluctuating forcing factors (e.g. temporally dependent injection rates at the boundary)? How to properly account for time-averaged boundary conditions in upscaled models? We propose to address these questions by introducing the concept of spatiotemporal upscaling in the context of asymptotic multiple scale expansions. The main contribution of the paper is to explicitly address the question of whether or not, and how, space-time upscaling affects reactive transport modeling, and more importantly, if/what conditions of applicability of upscaled equations need to be satisfied for the macroscopic models to be accurate. This problem becomes of increasing importance as hydrologic modeling (and its relation to climate models) expands the time-window (from months to years to decades and more) used for forward predictions, while the time resolution in our simulations remains constrained by computational costs.
The manuscript is organized as follows. In Section 2, we present the pore-scale model describing advective and diffusive transport of a solute subject to time-dependent Dirichlet conditions at the macroscale boundary and undergoing a heterogenous chemical reaction with the solid matrix. In Section 3, we introduce the concept of spatiotemporal upscaling in the context of homogenization by multiple-scale expansions, and demonstrate the impact of time-dependent forcings and boundary conditions on macroscopic reactive transport. We first classify the macroscopic dynamics in three regimes (slowly, moderately and highly fluctuating regimes) and then derive a set of frequency-dependent conditions under which scales are separable. Section 4 provides a physical interpretation of the key analytical results of Section 3. In Section 5, we discuss different transport regimes in terms of relevant dimensionless numbers. Conclusions and outlook are given in Section 6.
## 2 Problem Formulation
### Domain and governing equations
Let \(\hat{\mathbf{\Omega}}\) be a domain in \(\mathcal{R}^{n}\) (\(n\geq 2\)), bounded by \(\partial\hat{\mathbf{\Omega}}\), of characteristic length \(L\) such that \(\hat{\mathbf{\Omega}}=\hat{\mathbf{\Omega}}_{s}\cup\hat{\mathbf{\Omega}}_{p}\), where \(\hat{\mathbf{\Omega}}_{s}\) and \(\hat{\mathbf{\Omega}}_{p}\) are the solid and pore phases in \(\hat{\mathbf{\Omega}}\), respectively, and \(\hat{\mathbf{\Omega}}_{p}\) is fully saturated with a viscous fluid. The boundary between the solid and the pore space domains is \(\hat{\Gamma}\). The domain \(\hat{\mathbf{\Omega}}\) is composed of repeating unit cells \(\hat{\mathbf{Y}}=\hat{\mathbf{\mathcal{B}}}\cup\hat{\mathbf{\mathcal{G}}}\) of characteristic size \(l\) with \(l\ll L\), where \(\hat{\mathbf{\mathcal{G}}}\) and \(\hat{\mathbf{\mathcal{B}}}\) are the solid and pore phases in \(\hat{\mathbf{Y}}\), respectively. The geometric scaling parameter
\[\varepsilon:=\frac{l}{L}\ll 1 \tag{1}\]
relates the size of the pore-scale unit cell to the corresponding macroscale (or observation spatial scale).
The laminar incompressible flow of a viscous fluid through the pore space \(\hat{\mathbf{\Omega}}_{p}\) satisfies Stokes law and the continuity equation
\[\mu\hat{\mathbf{\nabla}}^{2}\hat{\mathbf{v}}_{\varepsilon}-\hat{\mathbf{ \nabla}}\hat{\rho}_{\varepsilon}=0,\quad\hat{\mathbf{\kappa}}\in\hat{\mathbf{\Omega}} _{p}^{\varepsilon}, \tag{2a}\] \[\hat{\mathbf{\nabla}}\cdot\hat{\mathbf{v}}_{\varepsilon}=0,\quad\hat{\bm {\kappa}}\in\hat{\mathbf{\Omega}}_{p}^{\varepsilon}, \tag{2b}\]
subject to
\[\hat{\mathbf{v}}_{\varepsilon}=0,\quad\mathbf{\kappa}\in\hat{\Gamma}^{\varepsilon}, \tag{3}\]
and appropriate boundary conditions on \(\mathbf{v}_{\varepsilon}\) and \(\hat{p}_{\varepsilon}\) on the domain boundary \(\partial\hat{\mathbf{\Omega}}\). In (2) and (3), \(\hat{\mathbf{v}}_{\varepsilon}\) [LT\({}^{-1}\)], \(\hat{p}_{\varepsilon}\) and \(\mu\) are the fluid velocity, dynamic pressure and dynamic viscosity,
respectively. The transport of a reactive solute \(\mathcal{M}\), dissolved in the fluid, with molar concentration \(\hat{c}_{\varepsilon}(\hat{\mathbf{x}},\hat{t})\) [molL\({}^{-3}\)] at \(\hat{\mathbf{x}}\in\hat{\Omega}_{p}^{\varepsilon}\) and time \(\hat{t}>0\), is governed by
\[\frac{\partial\hat{c}_{\varepsilon}}{\partial\hat{t}}+\hat{\mathbf{v}}_{ \varepsilon}\cdot\hat{\nabla}\hat{c}_{\varepsilon}=\hat{\nabla}\cdot(\hat{ \mathbf{D}}\hat{\nabla}\hat{c}_{\varepsilon}),\quad\hat{\mathbf{x}}\in\hat{ \Omega}_{p}^{\varepsilon},\quad\hat{t}>0 \tag{4}\]
where \(\hat{\mathbf{D}}\) [L\({}^{2}\)T\({}^{-1}\)] is the molecular diffusion tensor, \(\left[\mathbf{D}\nabla c_{\varepsilon}\right]_{i}=D_{ij}\partial_{x_{j}}c_{\varepsilon}\) is a matrix-vector multiplication, and '\(\cdot\)' represents a scalar product, e.g. \(\hat{\mathbf{v}}_{\varepsilon}\cdot\hat{\nabla}\hat{c}_{\varepsilon}=\hat{v}_ {\varepsilon,i}\partial_{x_{i}}c_{\varepsilon}\), where summation is implied over a repeated index. The nonlinear heterogenous precipitation/dissolution reaction of solute \(\mathcal{M}\) at the solid grains boundary can be modelled through the following boundary condition on \(\Gamma\)
\[-\mathbf{n}\cdot\hat{\mathbf{D}}\hat{\nabla}\hat{c}_{\varepsilon}=\hat{k}( \hat{c}_{\varepsilon}^{a}-\overline{c}^{a})\qquad\hat{\mathbf{x}}\in\Gamma^{ \varepsilon} \tag{5}\]
which represents a mass balance across the solid-liquid interface. Equation (4) is subject to initial conditions
\[\hat{c}_{\varepsilon}(\hat{\mathbf{x}},0)=\hat{c}_{u}(\hat{\mathbf{x}}), \quad\hat{\mathbf{x}}\in\hat{\Omega}_{p} \tag{6}\]
and boundary conditions on \(\partial\hat{\Omega}=\partial\hat{\Omega}_{D}\cup\partial\hat{\Omega}_{N}\cup \partial\hat{\Omega}_{R}\), where \(\partial\hat{\Omega}_{i}\), \(i=\{D,N,R\}\) represent a portion of the boundary subject to Dirichlet, Neumann or Robin boundary conditions, respectively. Without loss of generality, we assume \(\partial\hat{\Omega}_{D}\) is subject to time-varying boundary conditions, i.e.
\[\hat{c}_{\varepsilon}(\hat{\mathbf{x}}_{D},\hat{t})=\hat{c}_{D}(\hat{t}). \tag{7}\]
The previous boundary condition models a spatially localized seasonal release of reacting solute (e.g. contaminant or nutrient), associated to, e.g., respiration processes of bacteria, hydrologic cycles that create local chemical hotspots (e.g. in the hyporheic corridor), etc. We emphasize that other time-dependent boundary conditions could be used in place of (15), e.g. Danckwerts' [_Danckwerts_, 1953].
### Dimensionless formulation
We define the following dimensionless quantities
\[c=\frac{\hat{c}_{\varepsilon}}{\hat{c}_{u}},\quad\mathbf{D}=\frac{\hat{ \mathbf{D}}}{D},\quad\mathbf{x}=\frac{\hat{\mathbf{x}}}{L},\quad\mathbf{v}_{ \varepsilon}=\frac{\hat{v}_{\varepsilon}}{U},\quad t=\frac{\hat{t}}{\tau_{c}},\quad p=\frac{\hat{p}\hat{t}^{2}}{\nu UL} \tag{8}\]
where \(U\), \(D\) and \(\tau_{c}\) are characteristic scales for velocity, diffusivity and time. We set \(\tau_{c}\) as the diffusive time-scale, i.e.
\[\tau_{c}=\frac{L^{2}}{D}, \tag{9}\]
Inserting (8) and (9) in (2)-(15), one obtains
\[\varepsilon^{2}\nabla^{2}\mathbf{v}_{\varepsilon}-\nabla p_{ \varepsilon}=0\quad\text{and}\quad\nabla\cdot\mathbf{v}_{\varepsilon}=0, \quad\mathbf{x}\in\Omega_{p}^{\varepsilon} \tag{10}\]
subject to
\[\mathbf{v}_{\varepsilon}=0,\quad\mathbf{x}\in\Gamma^{\varepsilon}, \tag{11}\]
and
\[\frac{\partial c_{\varepsilon}}{\partial t}+\nabla\cdot(-\mathbf{D}\nabla c_ {\varepsilon}+\mathbf{P}\mathbf{v}_{\varepsilon}c_{\varepsilon})=0,\quad \mathbf{x}\in\Omega_{p}^{\varepsilon},\quad t>0 \tag{12}\]
subject to
\[-\mathbf{n}\cdot\mathbf{D}\nabla c_{\varepsilon} =\text{Da}(c_{\varepsilon}^{a}-1)\qquad\mathbf{x}\in\Gamma^{ \varepsilon},\quad t>0 \tag{13}\] \[c(\mathbf{x},0) =c_{u}(\mathbf{x})\qquad\mathbf{x}\in\Omega_{p}^{\varepsilon}, \tag{14}\]
and to time-varying Dirichlet boundary conditions on a subset of the macroscopic boundary \(\partial\Omega_{D}\), i.e.
\[c_{\varepsilon}(\mathbf{x}_{D},t)=c_{D}(t). \tag{15}\]
In (12) and (13)
\[\text{Pe}:=\frac{\tau_{d}}{\tau_{a}}=\frac{UL}{D},\qquad\text{and}\qquad\text{ Da}:=\frac{\tau_{d}}{\tau_{r}}=\frac{L\hat{k}\hat{c}_{0}^{a-1}}{D}, \tag{16}\]
are the Peclet and Damkholer numbers, defined as the ratio between the diffusive time \(\tau_{d}\) and the advection and reaction time scales, \(\tau_{a}\) and \(\tau_{r}\), respectively, with \(\tau_{a}=L/U\) and \(\tau_{r}=L/(\hat{k}\hat{c}_{0}^{a-1})\).
## 3 Space-Time Homogenization via Multiple-Scale Expansions
In this section, we generalize the multiple-scale expansion method to upscale in both space and time the pore scale dimensionless equations (10) and (12) to the macroscale, and to derive effective equations for the space-time averages of the flow velocity \(\langle\mathbf{v}_{\varepsilon}\rangle\) and the solute concentration \(\langle c_{\varepsilon}\rangle\) while accounting for time-varying boundary conditions. We emphasize a similar approach can be employed to handle time-varying source terms and coefficients.
### Preliminaries and Extensions to Time Homogenization
Within the multiple-scale expansion framework, we introduce a 'fast' space variable \(\mathbf{y}\) defined in the unit cell \(Y\), i.e. \(\mathbf{y}\in Y\). Furthermore, if the system is driven by time-varying boundary conditions or forcing factors with characteristic time scale \(\hat{\tau}\ll T\) where \(T\) is the observation time scale, one can define a temporal scaling parameter
\[\omega:=\frac{\hat{\tau}}{T}\ll 1, \tag{17}\]
that relates the driving force/boundary condition frequency (\(\sim~{}1/\hat{\tau}\)) and the observation (macroscopic) time scale \(T\). We define the exponent \(\gamma\) such that
\[\varepsilon=\omega^{\gamma}, \tag{18}\]
i.e. \(\gamma\) quantifies the separation between temporal and spatial scales and is uniquely determined once the characteristic length and time scales of the problem are identified. Each variable is defined as follows,
\[\mathbf{y}=\varepsilon^{-1}\mathbf{x},\quad\text{and}\quad\tau=\omega^{-1}t. \tag{19}\]
For any pore-scale quantity \(\psi_{\varepsilon}\),
\[\langle\psi_{\varepsilon}\rangle_{Y}\equiv\frac{1}{|Y|}\int\limits_{\mathcal{ B}(\mathbf{x})}\psi_{\varepsilon}\mathrm{d}\mathbf{y},\quad\langle\psi_{ \varepsilon}\rangle_{\mathcal{B}}\equiv\frac{1}{|\mathcal{B}(\mathbf{x})|} \int\limits_{\mathcal{B}(\mathbf{x})}\psi_{\varepsilon}\mathrm{d}\mathbf{y}, \text{ and }\langle\psi_{\varepsilon}\rangle_{\Gamma}\equiv\frac{1}{|\Gamma|} \int\limits_{\Gamma(\mathbf{x})}\psi_{\varepsilon}\mathrm{d}\mathbf{y} \tag{20}\]
are three local spatial averages (function of \(\mathbf{x}\)) over the pore space \(\mathcal{B}(\mathbf{x})\) of the unit cell \(Y(\mathbf{x})\) centered at \(\mathbf{x}\). In (20), \(\langle\psi_{\varepsilon}\rangle_{Y}=\phi\langle\psi_{\varepsilon}\rangle_{ \mathcal{B}}\) and \(\phi=|\mathcal{B}|/|Y|\) is the porosity. Similarly, one can define temporal averages (function of \(t\)) over a time unit cell \(\mathcal{I}\) centered at \(t\), i.e,
\[\langle\psi_{\varepsilon}\rangle_{\mathcal{I}}\equiv\frac{1}{|\mathcal{I}|} \int\limits_{\mathcal{I}(t)}\psi_{\varepsilon}\mathrm{d}\tau. \tag{21}\]
where \(\mathcal{I}\) is the smallest time-scale resolved at the macroscale, e.g. the discretization time-step at the continuum scale. The space-time averages \(\langle\psi_{\varepsilon}\rangle_{\mathcal{I}\mathcal{B}}\) and \(\langle\psi_{\varepsilon}\rangle\) are defined as
\[\langle\psi_{\varepsilon}\rangle_{\mathcal{I}\mathcal{B}}:=\langle\langle\psi _{\varepsilon}\rangle_{\mathcal{I}}\rangle_{\mathcal{B}}=\langle\langle\psi_{ \varepsilon}\rangle_{\mathcal{B}}\rangle_{\mathcal{I}}. \tag{22}\]
\[\langle\psi_{\varepsilon}\rangle:=\langle\langle\psi_{\varepsilon}\rangle_{T}\rangle_{T }=\langle\langle\psi_{\varepsilon}\rangle_{Y}\rangle_{T}=\phi\langle\psi_{ \varepsilon}\rangle_{T\mathcal{B}}. \tag{23}\]
Furthermore, any pore-scale function \(\psi_{\varepsilon}(\mathbf{x},t)\) can be represented as \(\psi_{\omega}(\mathbf{x},t)\) through (18) and \(\psi_{\omega}\left(\mathbf{x},t\right):=\psi\left(\mathbf{x},\mathbf{y},t,\tau\right)\). Replacing \(\psi_{\omega}\left(\mathbf{x},t\right)\) with \(\psi\left(\mathbf{x},\mathbf{y},t,\tau_{\mathrm{a}},\tau_{t}\right)\) gives the following relations for the spatial and temporal derivatives,
\[\nabla\psi_{\omega}=\nabla_{\mathbf{x}}\psi+\varepsilon^{-1}\nabla_{\mathbf{y }}\psi=\nabla_{\mathbf{x}}\psi+\omega^{-\gamma}\nabla_{\mathbf{y}}\psi,\quad \text{and}\quad\frac{\partial\psi_{\omega}}{\partial t}=\frac{\partial\psi}{ \partial t}+\omega^{-1}\frac{\partial\psi}{\partial\tau} \tag{24}\]
respectively. The function \(\psi\) is represented as an asymptotic series in powers of \(\omega\),
\[\psi(\mathbf{x},\mathbf{y},t,\tau)=\sum_{m=0}^{\infty}\omega^{m}\psi_{m}( \mathbf{x},\mathbf{y},t,\tau), \tag{25}\]
wherein \(\psi_{m}(\mathbf{x},\mathbf{y},t,\tau)\), \(m=0,1,\ldots\), are \(Y\)-periodic in \(\mathbf{y}\). Finally, we set
\[\mathrm{Pe}=\omega^{-\alpha},\quad\text{and}\quad\mathrm{Da}=\omega^{\beta}, \tag{26}\]
with the exponents \(\alpha\) and \(\beta\) determining the system behavior. We seek the asymptotic space-time average behavior of \(\psi_{\omega}\) as \(\omega\ \to\ 0\) for any arbitrary time-scale separation parameter \(\gamma\).
It should be emphasized that, an important step in solving the cascade of equations for \(\psi_{0},\psi_{1},\cdots\), is consistently checking whether the solvability condition is satisfied. Otherwise, the derivation leads to misleading results. More specifically, when seeking a solution for \(\psi_{1}\), we have to impose the solvability condition. This condition ensures existence and uniqueness of a solution rigorously by employing the _Fredholm Alternative_. Critical points to consider while employing the homogenization theory to upscale the transport equation are summarized by _Auriault_ (2019), where the author explicitly mentions that the averaging process is imposed by Fredholm Alternative, and there is no arbitrary step along the derivation process.
### Upscaled Transport Equations and Homogenizability Conditions
The homogenization of the Stokes equations (2) leads to the classical result
\[\langle\mathbf{v}\rangle=-\mathbf{K}\cdot\nabla P_{0},\qquad\nabla\cdot \langle\mathbf{v}\rangle=0,\quad\mathbf{x}\in\Omega, \tag{27}\]
where the dimensionless permeability tensor \(\mathbf{K}\) is defined as \(\mathbf{K}=\langle\mathbf{k}\rangle\) and \(\mathbf{k}\) is the closure variable, solution of the closure problem
\[\nabla^{2}\mathbf{k}+\mathbf{I}-\nabla\mathbf{a}=0,\qquad\nabla\cdot \mathbf{k}=0,\qquad\mathbf{y}\in\mathcal{B} \tag{28}\]
subject to \(\mathbf{k}(\mathbf{y})=0\) for \(\mathbf{y}\in\Gamma\) and \(\langle\mathbf{a}\rangle=0\), where \(\mathbf{a}\) is \(Y\)-periodic (_Hornung_, 2012, pp. 46-47, Theorem 1.1).
Here, we are interested in studying the system for long times, also referred to as 'quasi-steady stage' (as per definition of _Valdes-Parada and Alvarez Ramirez_ (2012)), i.e. when both time- and length-scales can be separated. Then, the space-time homogenization of the pore-scale reactive transport equations (12)-(15) up to order \(\omega^{2}\), leads to (_Rajabi_, 2021) (details in Appendix A: )
\[\phi\frac{\partial\langle c\rangle_{T\mathcal{B}}}{\partial t}= \nabla\cdot\left[\tilde{\tilde{\mathbf{D}}}^{\star}\nabla\langle c\rangle_{T \mathcal{B}}-\mathrm{Pe}\langle c\rangle_{T\mathcal{B}}(\mathbf{v})_{T \mathcal{B}}\right]+\phi\omega^{-\gamma}\mathcal{K}^{\star}\mathrm{Da}(1- \langle c\rangle_{T\mathcal{B}}^{a}),\] \[(\mathbf{x},t)\in\Omega\times(0,T), \tag{29}\]
where the effective coefficients \(\mathcal{K}^{\star}\), and \(\tilde{\tilde{\mathbf{D}}}^{\star}\) are defined as
\[\mathcal{K}^{\star} =\frac{|\Gamma|}{|\mathcal{B}|}, \tag{30}\] \[\tilde{\tilde{\mathbf{D}}}^{\star} =\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{\mathbf{y }}\chi)\rangle+\omega^{1-\alpha}\langle\chi\mathbf{k}\rangle\cdot\nabla_{ \mathbf{x}}P_{0}, \tag{31}\]
and \(\mathbf{\chi}(\mathbf{y},\tau)\) is the closure variable. The effective coefficient \(\tilde{\mathbf{D}}^{\star}\) is computed through the solution of the unsteady auxiliary cell problem for \(\mathbf{\chi}(\mathbf{y},\tau)\), i.e.
\[\frac{\partial\mathbf{\chi}}{\partial\tau}+\omega^{-\alpha}(\mathbf{v} _{0}-\langle\mathbf{v}_{0}\rangle_{\mathcal{IB}})-\omega^{-\gamma}\nabla_{ \mathbf{y}}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{\mathbf{y}}\mathbf{ \chi})+\omega^{1-\gamma-\alpha}\mathbf{v}_{0}\cdot(\nabla_{\mathbf{y}}\mathbf{ \chi})=0,\quad\mathbf{y}\in\mathcal{B},\] \[\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\mathbf{\chi})=0,\quad\mathbf{y}\in\mathcal{B}, \tag{32}\] \[\mathbf{\chi}(\mathbf{y},0)=\mathbf{\chi}_{\star}(\mathbf{y}),\]
and \(\langle\mathbf{\chi}\rangle_{\mathcal{B}}=0\), where \(\mathbf{v}_{0}=-\mathbf{k}(\mathbf{y})\cdot\nabla_{\mathbf{x}}P_{0}\) is the solution of the homogenized flow equation (27), provided the following conditions are met [_Rajabi_, 2021]
1. \(\varepsilon\ll 1\),
2. \(\omega\ll 1\),
3. \(\langle\mathbf{\chi}\rangle_{\mathcal{I}\Gamma}\approx\langle\mathbf{\chi}\rangle_{ \mathcal{IB}}\),
Additional bounds on the Damkohler and Peclet numbers must be satisfied depending on the time-space scale separation parameter \(\gamma\). Specifically,
* when \(\varepsilon<\omega\), i.e. \(\gamma>1\), the system is referred to as _slowly fluctuating_ and the additional conditions to guarantee that scale separation occurs are
* When \(\omega<\varepsilon<\omega^{1/2}\) (or \(\omega\approx\varepsilon\)), i.e. \(1/2<\gamma<1\), the system is referred to as _moderated fluctuating_ and the additional conditions to guarantee that scale separation occurs are
* \(\mathrm{Da}/\mathrm{Pe}<\omega\).
* When \(\omega^{1/2}<\varepsilon<1\) (or \(\varepsilon\gg\omega\)), i.e. \(0<\gamma<1/2\), the system is referred to as _highly fluctuating_ and the additional conditions to guarantee that scale separation occurs are
These conditions can be graphically visualized in a phase diagram in the (\(\mathrm{Pe},\mathrm{Da}\))-space, or the \((\alpha,\beta)\)-space for the three different regimes [_Rajabi_, 2021]. The bounds for slowly fluctuating systems (i.e. \(\varepsilon<\omega\), i.e. \(\gamma>1\)) are summarized in the \((\alpha,\beta)\)-plane of Figure 1(a), where the lines \(\beta=\gamma\), \(\alpha+\beta=\gamma\) and \(\alpha=1\) correspond to \(\mathrm{Da}=\varepsilon\), \(\mathrm{Da}/\mathrm{Pe}=\varepsilon\) and \(\mathrm{Pe}=\omega^{-1}\), respectively. For moderately fluctuating systems where \(\omega\approx\varepsilon\), the bounds are summarized in the \((\alpha,\beta)\)-plane of Figure 1(b). The lines \(\alpha+\beta=1\), \(\alpha=1\) and \(\alpha=1-\gamma\) correspond to \(\mathrm{Da}/\mathrm{Pe}=\omega\), \(\mathrm{Pe}=\omega^{-1}\) and \(\mathrm{Pe}=\varepsilon/\omega\), respectively. Finally, in highly fluctuating systems, i.e. when \(\omega^{1/2}<\varepsilon<1\) (or \(\varepsilon\gg\omega\)), the previous conditions are summarized in Figure 1(c), where the line \(\beta=1-\gamma\) corresponds to \(\mathrm{Da}=\omega/\varepsilon\). Figure 1(d) overlaps the applicability conditions for the three regimes to allow direct comparison. We emphasize that, while Eq. (29) has the form of a classical advection-reaction-dispersion equation, both the (i) the form of its effective coefficients and (ii) the conditions under which both spatial and temporal scales are fully decoupled explicitly depend on \(\gamma\), i.e. the scale parameter that relates spatial scales and the frequency of the boundary fluctuations. Furthermore, Eq. (29) is consistent with the results obtained through space and time volume averaging (ST-averaging) by _He and Sykes_[1996] where, for a homogeneous distribution of elementary (space-time averaging) domains, the ST-upscaling using volume averaging degenerates into a classical volume average.
## 4 Discussion and Physical Interpretation
In this Section, we are concerned with providing a physical interpretation of the (formally derived) thresholds on \(\gamma\) and their connection with the regimes classification (slowly, moderately and highly fluctuating regimes) proposed in the previous Section. For this purpose, we consider a conceptual example, which, despite its simplicity, maintains enough complexity to provide useful physical insights on the theoretical results. Without loss of generality, let us consider a pressure-driven flow through a thin bidimensional channel of length \(L\) and aperture \(l\) with \(l\ll L\). For a channel of width \(l\), the length \(L\) is to be interpreted as the "observation scale". Steady state fully-developed incompressible flow is assumed. Reactive solute transport at the pore-scale is governed by (4) subject to (5) on the fracture walls. Time
Figure 1: Applicability conditions in the \((\alpha,\beta)\)-phase space for: (a) _slowly fluctuating regimes_ (Zone 1), i.e. \(\varepsilon~{}<~{}\omega\) or \(\gamma~{}>~{}1\); (b) _moderately fluctuating regimes_ (Zone 2), i.e. \(\omega~{}<~{}\varepsilon~{}<~{}\omega^{1/2}\) (\(\omega~{}\approx~{}\varepsilon\)) or \(1/2~{}<~{}\gamma~{}<~{}1\); (c) _highly fluctuating regimes_ (Zone 3), i.e. \(\omega^{1/2}~{}<~{}\varepsilon~{}<~{}1\) (\(\varepsilon~{}\gg~{}\omega\)) or \(0~{}<~{}\gamma~{}<~{}1/2\); (d) all regimes overlapped for direct comparison. In each Figure, the shaded region identifies sufficient conditions for the validity of macroscopic equation in terms of Da and Pe numbers. In the white region, scales are not well separated and macroscopic and microscopic models should be solved simultaneously.
varying Dirichlet boundary conditions for solute concentration are imposed at the fracture inlet. The characteristic time scale of the fluctuating boundary conditions is \(\hat{\tau}\ll T\), with \(T\) the macroscale observation time. Figure 2 shows a sketch of the system. As discussed in Section 3.1, the space and time scale separation parameters are
\[\varepsilon\equiv\frac{l}{L}\ll 1,\quad\text{and}\quad\omega\equiv\frac{\hat{\tau}}{T }\ll 1. \tag{33}\]
The (dimensional) time scales for diffusive and advective transport at the macro-and micro-scale are
\[\hat{t}_{\text{\tiny{Latence}}} =\frac{L^{2}}{D},\quad\hat{t}_{\text{\tiny{Latence}}}=\frac{L}{ U}, \tag{34a}\] \[\hat{t}_{\text{\tiny{Latence}}} =\frac{l^{2}}{D},\quad\hat{t}_{\text{\tiny{Latence}}}=\frac{l}{ U}, \tag{34b}\]
respectively, and the (macroscopic) Peclet number is defined as in (16)
\[\text{Pe}:=\frac{\hat{t}_{\text{\tiny{Latence}}}}{\hat{t}_{\text{\tiny{Latence }}}}=\frac{1}{\varepsilon}\frac{\hat{t}_{\text{\tiny{Latence}}}}{\hat{t}_{ \text{\tiny{Latence}}}}=\frac{LU}{D}. \tag{35}\]
Using a diffusive scaling, i.e. \(t:=\frac{\hat{t}}{\hat{t}_{\text{\tiny{Latence}}}}\), the time scales defined in (34a) can be expressed in terms of powers of \(\varepsilon\) or \(\omega\)
\[t_{\text{\tiny{Latence}}}=\omega^{0}=\varepsilon^{0},\qquad t_{ \text{\tiny{Latence}}}=\omega^{2\gamma}=\varepsilon^{2}, \tag{36a}\] \[t_{\text{\tiny{Latence}}}=\omega^{\alpha\tau}=\varepsilon^{ \alpha/\gamma},\qquad t_{\text{\tiny{Latence}}}=\omega^{\alpha+\gamma}= \varepsilon^{1+\alpha/\gamma}, \tag{36b}\]
(summarized in Table 1) and their relative magnitude is controlled by the exponents \(\gamma\), \(\alpha\) and \(\beta\). Importantly, the characteristic diffusion time \(t_{\text{\tiny{Latence}}}\) scales as \(\varepsilon^{2}\), i.e. the separation of scale parameter \(\varepsilon\) can be related to the characteristic dimensionless time scale of the dominant mass transport mechanisms at the microscale. This observation allows us (i) to relate the dimensionless period of the oscillations \(\omega\) to the dimensionless time-scale of mass transport processes at the pore scale (specifically, diffusion), and (ii) to elucidate the physical meaning of the \(\gamma\)-thresholds (i.e. \(\gamma=1/2\) and \(\gamma=1\)) that identify the slowly, moderately and highly fluctuating regimes. Specifically, a _slowly fluctuating regime_ corresponds to a system driven by time-dependent boundary conditions with a characteristic time-scale \(\omega\) greater than \(\varepsilon\), i.e. \(\omega\gg t_{\text{\tiny{Latence}}}\): in this regime, temporal fluctuations in the boundary conditions are very slow compared to pore-scale diffusion, and the dynamics at the microscale is exclusively controlled by local pore-scale mass transport processes. This translates in a steady state diffusive problem for the closure variables as discussed in Section 5.1. In the _moderately fluctuating regime_, \(\omega<\varepsilon<\omega^{1/2}\) or, equivalently, \(\omega^{2}<t_{\text{\tiny{Latence}}}<\omega\), i.e. \(\omega\) and \(t_{\text{\tiny{Latence}}}\) are of the same order of magnitude. While the local cell problems for the closure variables are still steady state (Section 5.2), advection and diffusion become the two mechanisms that guarantee mixing at the pore-scale. In the _highly fluctuating regime_, \(\omega^{1/2}<\varepsilon<1\) or \(\omega\ll t_{\text{\tiny{Latence}}}\), i.e. the
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Time scale & \(\mathcal{O}(\omega)\) & \(\mathcal{O}(\varepsilon)\) \\ \hline BCs & \(\omega^{1}\) & \(\varepsilon^{1}\) \\ \(t_{\text{\tiny{Latence}}}\) & \(\omega^{0}\) & \(\varepsilon^{0}\) \\ \(t_{\text{\tiny{Latence}}}\) & \(\omega^{\alpha}\) & \(\varepsilon^{\alpha/\gamma}\) \\ \(t_{\text{\tiny{Latence}}}\) & \(\omega^{2\gamma}\) & \(\varepsilon^{2}\) \\ \(t_{\text{\tiny{Latence}}}\) & \(\omega^{\alpha+\gamma}\) & \(\varepsilon^{1+\alpha/\gamma}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the characteristic time scales of transport processes at the micro- and macro-scale in terms of either integer powers of \(\omega\) and \(\varepsilon\).
characteristic time scale at which the system is driven is much smaller than pore-scale diffusion time. In this regime spatial and temporal scales can still be separated, but the local cell problem becomes unsteady and advective and unsteady effects will control mass transport at the pore-scale (Section 5.3). It is worth noticing that the applicability domain in the (Da-Pe) space for the moderately fluctuating regimes is much smaller than those for both slowly and highly fluctuating case: contrary to intuition, a slower advection drags the system outside the homogenizability conditions in a moderately fluctuating regime. This can be explained as follows: when diffusion and advection are the dominant mechanisms controlling transport at the pore-scale, slower advection results in an increased longitudinal, rather than transversal, mixing, making the applicability conditions in terms of Pe number much more stringent. Surprisingly, the applicability conditions in the slowly fluctuating case are a subset of those for the highly fluctuating scenario, i.e. the latter has less stringent constraints in terms of both Pe and Da numbers for the same value of \(\gamma\): we hypothesize that advection and unsteadiness (and their combination) may prove more effective in achieving pore-scale mixing, i.e. may contribute to an enhancement of mixing at the pore-scale. In presence of very fast fluctuations (at a time scale much smaller than diffusion), the pore-scale concentration in the fracture can be idealized as a periodic sequence of very thin strips of fixed concentration which travel downstream due to advection. As a result, while the system is very heterogeneous in the longitudinal direction (for times smaller than the characteristic diffusion time), it is well-mixed in the transverse direction, i.e. along the unit cell. This hypothesis is subject of current numerical investigations.
Importantly, according to (33), once the physical domain of interest is identified (i.e. \(\varepsilon\) is fixed) and the characteristic time scale \(\hat{\tau}\) of the boundary conditions determined, the macroscopic time horizon \(T\) (i.e. the time at which predictions are ought to be made) uniquely defines \(\omega\), and consequently, \(\gamma\). This implies that, for a given pair (Pe, Da), the accuracy of the upscaled equation used for forward predictions can be greatly affected by modifying \(T\): for example if \(T_{2}>T_{1}\), then \(\omega_{2}<\omega_{1}\ll 1\); for the same \(\varepsilon\), this corresponds to \(\gamma_{2}<\gamma_{1}\) since \(\gamma:=\log\varepsilon/\log\omega\), i.e. the applicability conditions may change from a slowly to a moderately fluctuating regime. This observation suggests that caution should be employed when systems driven by time-varying boundary conditions/forcings are _de facto_, if not voluntarily, upscaled both in space and time, e.g. due to computational limitations.
In the following Section we quantitatively characterize the dominant transport mechanisms at the pore-and continuum-scale for different values of Da and Pe numbers.
## 5 Special Cases
In this Section, we investigate specific flow and transport regimes under which the upscaled equation (29) and the closure problem (32) can be simplified. Such transport regimes are identified by the order of magnitude of the Damkohler and Peclet numbers. Differently from similar analyses on the applicability conditions of diffusive-advective-reactive systems under steady boundary conditions (or forcings) [_Auriault and Adler_, 1995] and/or the dynamics of composite materials [_Auriault_, 1991], here we are specifically interested in elucidating the impact of boundary/forcing frequency on the form of the upscaled equations for the highly, moderately and slowly fluctuating regimes and any given pair of Damkohler and Peclet numbers satisfying the conditions outlined in Section 3.2. Our analysis below shows that for systems with the same Damkohler and Peclet numbers, the form of the space-time upscaled equations and of the closure problem depends on the frequency of the boundary condition, i.e. pore-scale mixing is controlled by the interplay of diffusion, advection and unsteady effects (due to boundary frequency), and not by the characteristic time scales of diffusive, advective and reactive transport processes only.
### Slowly Fluctuating Boundary Conditions: \(\varepsilon<\omega\)
#### 5.1.1 Transport regime with \(\mathbf{P}\epsilon<1\)
In this case, Eq.(29) simplifies to a dispersion-reaction equation, since diffusion dominates advection at the macro-scale.
\[\phi\frac{\partial\langle c\rangle_{I\mathcal{B}}}{\partial t}=\nabla\cdot \left[\tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{I\mathcal{B}}\right]+ \phi\omega^{-\gamma}\mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{I \mathcal{B}}^{a}), \tag{37}\]
where is determined from the simplified closure problem
\[\nabla_{\mathbf{y}}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{\mathbf{y}}\chi)=0,\quad\mathbf{y}\in\mathcal{B}, \tag{38a}\] \[\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\chi)=0,\quad\mathbf{y}\in\mathcal{B}, \tag{38b}\]
where the advective and unsteady terms at the pore-scale can be neglected compared to the diffusive ones. In this regime, the characteristic time scale of boundary fluctuations is much larger than the diffusive time scale, and the system dynamics at the pore-scale is entirely controlled by diffusion processes, as mentioned in Section 4. This results in a steady-state purely diffusive closure problem. The magnitude of the Damkohler number Da determines the effects of chemical reactions on transport at the macroscale.
#### 5.1.1.1 Diffusion dominates reactions
\(\mathrm{Da}~{}<~{}\omega\). In this regime, diffusion dominates advection and reactive transport processes at the macro-scale as well. As result, the macroscale equation (37) reduces to
\[\phi\frac{\partial\langle c\rangle_{I\mathcal{B}}}{\partial t}=\nabla\cdot \left[\tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{I\mathcal{B}}\right]\,. \tag{39}\]
where the closure variable \(\chi\) still satisfies (38).
Figure 2: Time-dependent solute injection boundary condition (\(C_{\text{in}}\)) (top) at the inlet of a planar thin impermeable fracture of aperture \(\varepsilon\ll 1\) (bottom). The time-varying boundary condition has a frequency of \(\omega^{-1}\) (or characteristic dimensionless time scale/period \(\omega\ll 1\)). Figure not in scale.
#### 5.1.2 Transport regime with \(1\leq\mbox{Pe}<\omega^{-1}\)
In this regime, dispersion and advection are comparable at the macroscale, and the upscaled transport equation is (29) with effective coefficient \(\tilde{\mathbf{\mathsf{D}}}^{\star}\) defined by (31), i.e., \(\tilde{\mathbf{\mathsf{D}}}^{\star}=\langle\mathbf{\mathsf{D}$ }(\mbox{\boldmath$\mathsf{I}}+\omega^{1-\gamma}\nabla_{\mathbf{y}} \chi)\rangle+\omega^{1-\alpha}\langle\chi\mathbf{\mathsf{k}}\rangle \cdot\nabla_{\mathbf{\mathsf{x}}}P_{0}\). Yet, at the pore-scale the dynamics is still controlled by diffusion and the closure variables \(\chi\) is the solution of the closure problem (38).
Figure 3: (a) Slowly fluctuating regime: the time-varying boundary condition \(C_{\mbox{\tiny\rm in}}(t)\) has a characteristic time scale much larger than pore-scale diffusion, i.e. \(\omega\ \gg\ t_{\mbox{\tiny\rm diffusion}}\). (b) Moderately fluctuating regime: the characteristic time scale of the boundary condition \(C_{\mbox{\tiny\rm in}}(t)\) is of the same order of pore-scale diffusion, i.e. \(\omega\ \approx\ t_{\mbox{\tiny\rm diffusion}}\). (c) Highly fluctuating regime: pore-scale diffusion is much slower than the time scale imposed by \(C_{\mbox{\tiny\rm in}}(t)\). Figure not in scale.
#### 5.1.2.1 Diffusion and advection dominate reaction
\(\mathrm{Da}<\omega\). In this regime, reaction can be neglected compared to diffusive processes at the macroscale and the upscaled equation simplifies to
\[\phi\frac{\partial\langle c\rangle_{I\mathcal{B}}}{\partial t}=\nabla\cdot\left[ \tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{I\mathcal{B}}-\mathrm{Pe} \langle c\rangle_{I\mathcal{B}}\langle\mathbf{v}\rangle_{I\mathcal{B}}\right], \quad(\mathbf{x},t)\in\Omega\times(0,T), \tag{40}\]
where \(\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{\mathbf{y}}\boldsymbol{\chi})\rangle+\omega^{1-\alpha}\langle \boldsymbol{\chi}\mathbf{k}\rangle\cdot\nabla_{\mathbf{x}}P_{0}\), and \(\boldsymbol{\chi}\) still satisfies (38).
### Moderately Fluctuating Boundary Conditions: \(\boldsymbol{\omega<\varepsilon<\omega^{1/2}}\)
For this case, \(\alpha\) always lies in \(0\leq\alpha<1\) range. Advection at the macroscale is non-negligible and the transport equation at the macroscale is described by Eq.(29). The closure problems for \(\boldsymbol{\chi}\) reduces to
\[\omega^{-\alpha}(\mathbf{v}_{0}-\langle\mathbf{v}_{0}\rangle)- \omega^{-\gamma}\nabla_{\mathbf{y}}\cdot\mathbf{D}(\mathbf{I}+\omega^{1- \gamma}\nabla_{\mathbf{y}}\boldsymbol{\chi})+\omega^{1-\gamma-\alpha}\mathbf{ v}_{0}\cdot(\nabla_{\mathbf{y}}\boldsymbol{\chi})=0,\quad\mathbf{y}\in\mathcal{B}, \tag{41a}\] \[\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\boldsymbol{\chi})=0,\quad\mathbf{y}\in\mathcal{B}, \tag{41b}\]
since the unsteady term can be neglected compared to diffusion and advection. In this regime, the characteristic time scale of boundary fluctuations is much larger than both diffusive and advective time scales. This results in a steady-state closure problem.
#### 5.2.0.1 Diffusion and Advection Dominate Reaction
\(\mathrm{Da}<\omega\). In this regime reaction is negligible and the upscaled equation (29) simplifies to (40) where \(\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{\mathbf{y}}\boldsymbol{\chi})\rangle+\omega^{1-\alpha}\langle \boldsymbol{\chi}\mathbf{k}\rangle\cdot\nabla_{\mathbf{x}}P_{0}\), and \(\boldsymbol{\chi}\) satisfies (41).
### Highly Fluctuating Boundary Conditions: \(\boldsymbol{\omega^{1/2}<\varepsilon<1}\)
#### 5.3.1 Transport regime with \(\mathrm{Pe}<1\)
In this regime, the advective term at the macro-scales is negligible. As a result the upscaled equation simplifies to Eq. (37),
\[\phi\frac{\partial\langle c\rangle_{I\mathcal{B}}}{\partial t}=\nabla\cdot \left[\tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{I\mathcal{B}}\right] +\phi\omega^{-\gamma}\mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{I \mathcal{B}}^{a}),\]
with \(\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{\mathbf{y}}\boldsymbol{\chi})\rangle\). The closure variable \(\boldsymbol{\chi}\) satisfies instead an unsteady closure problem where unsteady effects, diffusion and advection are equally important, i.e.
\[\frac{\partial\boldsymbol{\chi}}{\partial\tau}-\omega^{-\gamma} \nabla_{\mathbf{y}}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\boldsymbol{\chi})+\omega^{-\alpha}(\mathbf{v}_{0}-\langle \mathbf{v}_{0}\rangle)=0,\qquad\mathbf{y}\in\mathcal{B}, \tag{42}\] \[-\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\boldsymbol{\chi})=0,\qquad\mathbf{y}\in\Gamma. \tag{43}\]
#### 5.3.1.1 Diffusion and Advection Dominate Reaction
\(\mathrm{Da}<\omega\). In this regime (\(\beta>1\)) the reaction term at the macroscopic scale is negligible and the upscaled equation is described by (40) where \(\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{\mathbf{y}}\boldsymbol{\chi})\rangle\).
#### 5.3.2 Transport regime with \(1\leq\mathrm{Pe}<\omega^{-1}\)
At the macroscale, dispersive and advective fluxes are of the same order of magnitude and the upscaled transport equation is given by Eq.(29) with effective coefficients defined by Eqs. (31). Yet, diffusion is now negligible in the closure problem for \(\boldsymbol{\chi}\), i.e.
\[\frac{\partial\boldsymbol{\chi}}{\partial\tau}+\omega^{-\alpha}( \mathbf{v}_{0}-\langle\mathbf{v}_{0}\rangle)+\omega^{1-\gamma-\alpha}\mathbf{ v}_{0}\cdot(\nabla_{\mathbf{y}}\boldsymbol{\chi})=0,\qquad\mathbf{y}\in\mathcal{B},\] \[-\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\boldsymbol{\chi})=0,\qquad\mathbf{y}\in\Gamma, \tag{44}\]
#### 5.3.2.1 Diffusion and Advection Dominate Reaction
\(\mathrm{Da}\ <\ \omega\). In this regime the reaction term at the macroscale is negligible, and the upscaled equation is given by Eq.(29) where the effective parameter \(\bar{\mathbf{D}}^{\star}\) is defined as \(\bar{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_ {\mathbf{y}}\chi)\rangle+\omega^{1-\alpha}\langle\chi\mathbf{k}(\mathbf{y}) \rangle\cdot\nabla_{\mathbf{x}}P_{0}\)
## 6 Conclusion
Given the temporal variability of boundary conditions and forcings driving many subsurface processes, e.g. precipitation-driven transport in the vadose zone of arid and semiarid regions, or microbial activity and carbon cycling in the subsurface interaction zone (SIZ) controlled by seasonal mixing of surface water and groundwater in riverine systems, we investigate the impact of space-time averaging on nonlinear reactive transport in porous media. We are specifically concerned with understanding the impact of space-time upscaling in nonlinear systems driven by time-varying boundary conditions whose frequency is much larger than the characteristic time scale at which transport is studied or observed at the macroscopic scale. Such systems are more vulnerable to upscaling approximations since the typical temporal resolution used in modern simulations significantly exceeds characteristic scales at which the system is driven.
We start by introducing the concept of spatiotemporal upscaling in the context of multiple-scale expansions. We then homogenize the pore-scale equations in space and time, and obtain a macroscopic equation which is dependent on the boundary condition frequency \(\omega^{-1}\) and the geometric separation of scale parameter \(\varepsilon\). Importantly, three different dynamical regimes are identified depending on the ratio between the diffusive time at the pore-scale (\(\sim\varepsilon^{2}\)) and the characteristic dimensionless period of the boundary temporal oscillations (\(\omega\)). They are referred to as slowly, moderately and highly fluctuating regimes. In the slowly fluctuating regime (when \(\varepsilon\ \ll\ \omega\)) pore-scale mass transport is entirely controlled by diffusion (and advection), and the local problem is steady state. In the highly fluctuating regime (when \(\omega\ \ll\ \varepsilon\)), pore-scale mass transport is affected by the additional time scale imposed by the boundary conditions and the local problem becomes unsteady. We refer to the moderately fluctuating regime if the period of the boundary conditions is comparable to the pore-scale diffusion time scale. This analysis (i) supports the proposed classification in three dynamical regimes, where the'speed of the fluctuation' (slow, moderate or high) is quantified relatively to the characteristic diffusion time at the pore-scale, and (ii) provides insights on the primary mechanisms controlling mixing at the pore-scale. We also identify the conditions under which scales are separable for any arbitrary \(\omega\). Such conditions are expressed in terms of the Peclet, Damkohler numbers and the product between the boundary frequency \(\omega^{-1}\)and \(\varepsilon\).
To conclude, the effects of lack of temporal resolution (i.e. temporal averaging) on nonlinear reactive transport driven by time-varying boundary conditions or forcings should be accounted for at the macroscopic scale. The upscaling errors introduced by temporal (and spatial) averaging could have important implications especially when simulating systems for long temporal scales, i.e. when the observation time is much larger than the characteristic period of the oscillations.
## Appendix A Homogenization of the Transport Equation
As discussed in _Rajabi_(2021), we present derivation of the upscaling procedure using space-time homogenization scheme. We start the upscaling procedure with the dimensionless pore-scale equation describing the transport of the scalar function \(c_{\omega}(\mathbf{x},t)\) in an incompressible steady-state velocity field \(\mathbf{v}(\mathbf{x})\),
\[\frac{\partial c_{\omega}}{\partial t}+\nabla\cdot(-\mathbf{D}\nabla c_{ \omega}+\mathrm{P}\mathrm{e}\mathbf{v}c_{\omega})=0,\quad(\mathbf{x},t)\in \Omega_{p}^{\omega}\times(0,T),\] (A.1)
subject to the following boundary and initial conditions
\[-\mathbf{n}\cdot\mathbf{D}\nabla c_{\omega} =\mathrm{Da}(c_{\omega}^{a}-1), \mathbf{x}\in\Gamma^{\omega}, t>0,\] (A.2) \[c_{\omega}(\mathbf{x},t=0) =c_{\omega}(\mathbf{x}), \mathbf{x}\in\Omega_{\rho}^{\omega}.\] (A.3)
We define
\[t=\omega\tau,\qquad\mathbf{x}=\varepsilon\mathbf{y},\qquad \varepsilon=\omega^{\gamma},\qquad\mathrm{Pe}=\omega^{-\alpha},\qquad\mathrm{ Da}=\omega^{\beta},\] (A.4)
where \(\mathbf{y}\) and \(\tau\) are the fast variables in space and time, respectively, and \(\varepsilon\ll 1\) and \(\omega\ll 1\) are the spatial and temporal scale separation parameters. The exponents \(\alpha\), \(\beta\) and \(\gamma\) identify the system's physical regimes. Particularly, \(\gamma\) allows to represent the relationship between the frequency of boundary-imposed temporal fluctuations and the spatial heterogeneity. It is worth noticing that \(\gamma>0\) since \(\varepsilon\ll 1\) and \(\omega\ll 1\). We first represent \(c_{\omega}(\mathbf{x},t)\) as \(c_{\omega}(\mathbf{x},t):=c(\mathbf{x},\mathbf{y},t,\tau)\). Given (A.4), the following relations hold for any space and time derivative in (A.1) (_Rajabi_, 2021),
\[\frac{\partial c_{\omega}}{\partial t} =\frac{\partial c}{\partial t}+\omega^{-1}\frac{\partial c}{ \partial\tau},\] (A.5a) \[\nabla c_{\omega} =\nabla_{\mathbf{x}}c+\varepsilon^{-1}\nabla_{\mathbf{y}}c.\] (A.5b)
Inserting (A.5) into (A.1) leads to
\[\left(\frac{\partial c_{\omega}}{\partial t}+\omega^{-1}\frac{ \partial c_{\omega}}{\partial\tau}\right) +\nabla_{\mathbf{x}}\cdot\left[-\mathbf{D}(\nabla_{\mathbf{x}}c _{\omega}+\varepsilon^{-1}\nabla_{\mathbf{y}}c_{\omega})+\mathrm{Pev}c_{ \omega}\right]\] \[+\varepsilon^{-1}\nabla_{\mathbf{y}}\cdot\left[-\mathbf{D}( \nabla_{\mathbf{x}}c_{\omega}+\varepsilon^{-1}\nabla_{\mathbf{y}}c_{\omega})+ \mathrm{Pev}c_{\omega}\right]=0.\] (A.6)
Expanding (A.6) up to order \(\mathcal{O}(\omega^{2})\), while using the _ansatz_ (25) and the definitions (A.4) for \(\varepsilon\) and \(\mathrm{Pe}\), one obtains
\[\left(\frac{\partial c_{0}}{\partial t}\right. +\frac{1}{\omega}\frac{\partial c_{0}}{\partial\tau}\right)+ \left(\omega\frac{\partial c_{1}}{\partial\tau}+\frac{\partial c_{1}}{ \partial\tau}\right)+\left(\omega^{2}\frac{\partial c_{2}}{\partial t}+\omega \frac{\partial c_{2}}{\partial\tau}\right)\] \[-\nabla_{\mathbf{x}}\cdot\mathbf{D}[\nabla_{\mathbf{x}}c_{0}+ \omega^{-\gamma}\nabla_{\mathbf{y}}c_{0}+\omega\nabla_{\mathbf{x}}c_{1}+ \omega^{1-\gamma}\nabla_{\mathbf{y}}c_{1}+\omega^{2}\nabla_{\mathbf{x}}c_{2}+ \omega^{2-\gamma}\nabla_{\mathbf{y}}c_{2}]\] \[+\nabla_{\mathbf{x}}\cdot[\omega^{-\alpha}\nabla_{0}c_{0}+ \omega^{1-\alpha}(\nabla_{0}c_{1}+\nabla_{1}c_{0})+\omega^{2-\alpha}(\nabla_ {0}c_{2}+c_{1}\nabla_{1}+\mathbf{v}_{2}c_{0})]\] \[-\nabla_{\mathbf{y}}\cdot\mathbf{D}[\omega^{-\gamma}\nabla_{ \mathbf{x}}c_{0}+\omega^{-2\gamma}\nabla_{\mathbf{y}}c_{0}+\omega^{1-\gamma} \nabla_{\mathbf{x}}c_{1}+\omega^{1-2\gamma}\nabla_{\mathbf{y}}c_{1}+\omega^ {2-\gamma}\nabla_{\mathbf{x}}c_{2}+\omega^{2-2\gamma}\nabla_{\mathbf{y}}c_{2}]\] \[+\nabla_{\mathbf{y}}\cdot[\omega^{-\alpha-\gamma}\nabla_{0}c_{0}+ \omega^{1-\alpha-\gamma}(\nabla_{0}c_{1}+\mathbf{v}_{1}c_{0})+\omega^{2-\alpha- \gamma}(\nabla_{0}c_{2}+c_{1}\nabla_{1}+\mathbf{v}_{2}c_{0})]=0.\] (A.7)
We collect terms of like-powers of \(\omega\) as follows
\[\omega^{-1}\left\{\frac{\partial c_{0}}{\partial\tau}-\omega^{1-2 \gamma}\nabla_{\mathbf{y}}\cdot(\mathbf{D}\nabla_{\mathbf{y}}c_{0})+\omega^{1- \gamma-\alpha}\nabla_{\mathbf{y}}\cdot(c_{0}\nabla_{0})\right\}+\] \[\omega^{0}\left\{\left(\frac{\partial c_{0}}{\partial t}+\frac{ \partial c_{1}}{\partial\tau}\right)-\nabla_{\mathbf{x}}\cdot(\mathbf{D} \nabla_{\mathbf{x}}c_{0})-\omega^{-\gamma}[\nabla_{\mathbf{x}}\cdot(\mathbf{D} \nabla_{\mathbf{y}}c_{0})+\nabla_{\mathbf{y}}\cdot(\mathbf{D}\nabla_{\mathbf{x }}c_{0})]-\omega^{1-2\gamma}\nabla_{\mathbf{y}}\cdot(\mathbf{D}\nabla_{\mathbf{y }}c_{1})+\right.\] \[\left.+\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot(c_{0}\nabla_{0})+ \omega^{1-\gamma-\alpha}\nabla_{\mathbf{y}}\cdot(\nabla_{0}c_{1}+\mathbf{v}_{1 }c_{0})\right\}+\] \[\omega\left\{\left(\frac{\partial c_{1}}{\partial t}+\frac{ \partial c_{2}}{\partial\tau}\right)-\nabla_{\mathbf{x}}\cdot(\mathbf{D}\nabla_{ \mathbf{x}}c_{1})-\omega^{-\gamma}[\nabla_{\mathbf{x}}\cdot(\mathbf{D}\nabla_{ \mathbf{y}}c_{1})+\nabla_{\mathbf{y}}\cdot(\mathbf{D}\nabla_{\mathbf{x}}c_{1} )]-\omega^{1-2\gamma}\nabla_{\mathbf{y}}\cdot(\mathbf{D}\nabla_{\mathbf{y}}c_{2})+\right.\] \[\left.+\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot(\mathbf{v}_{0}c_{1}+ \mathbf{v}_{1}c_{0})+\omega^{1-\gamma-\alpha}\nabla_{\mathbf{y}}\cdot(\mathbf{v} _{0}c_{2}+\mathbf{v}_{1}c_{1}+\mathbf{v}_{2}c_{0})\right\}=\mathcal{O}(\omega^{ 2}).\] (A.8)
Similarly, boundary condition (A.2) can be written as
\[-\mathbf{n}\cdot\mathbf{D}(\nabla_{\mathbf{x}}c_{0}+\varepsilon^{-1}\nabla_{ \mathbf{y}}c_{0}+\omega\nabla_{\mathbf{x}}c_{1}+\omega\varepsilon^{-1}\nabla_{ \mathbf{y}}c_{1}+\omega^{2}\nabla_{\mathbf{x}}c_{2}+\omega^{2}\varepsilon^{-1} \nabla_{\mathbf{y}}c_{2})=\omega^{\beta}(c_{0}^{a}+a\omega c_{0}^{a-1}c_{1}-1).\] (A.9)
Collecting terms of like-powers of \(\omega\) one obtains
\[\omega^{-1}[-\mathbf{n}\cdot(\omega^{1-\gamma}\mathbf{D}\nabla_{ \mathbf{y}}c_{0})]+\omega^{0}[-\mathbf{n}\cdot\mathbf{D}(\nabla_{\mathbf{x}}c_{0}+ \omega^{1-\gamma}\nabla_{\mathbf{y}}c_{1})-\omega^{\beta}(c_{0}^{a}-1)]+\] \[\omega[-\mathbf{n}\cdot\mathbf{D}(\nabla_{\mathbf{x}}c_{1}+ \omega^{1-\gamma}\nabla_{\mathbf{y}}c_{2})-\omega^{\beta}ac_{0}^{a-1}c_{1}]= \mathcal{O}(\omega^{2}).\] (A.10)
### Terms of Order \(\mathbf{O(\omega^{-1})}\)
At the leading order, (A.8) and (A.10) provide the following equation for \(c_{0}\)
\[\frac{\partial c_{0}}{\partial\tau}-\omega^{1-2\gamma}\nabla_{\mathbf{y}}\cdot( \mathbf{D}\nabla_{\mathbf{y}}c_{0})+\omega^{1-\gamma-\alpha}\nabla_{\mathbf{y}} \cdot(c_{0}\mathbf{v}_{0})=0,\quad\mathbf{y}\in\Omega_{p},\;\tau\in\mathbf{I}\] (A.11)
subject to
\[-\mathbf{n}\cdot(\omega^{1-\gamma}\mathbf{D}\nabla_{\mathbf{y}}c_{0})=0,\quad \mathbf{y}\in\Gamma,\] (A.12)
i.e. \(c_{0}=c_{0}(\mathbf{x},t,\tau)\) since (A.11) and (A.12) are homogeneous. Integrating (A.11) over \(\Omega_{p}\) while applying the divergence theorem, one can write
\[\int_{\Omega_{p}}\frac{\partial c_{0}}{\partial\tau}\mathrm{d}\mathbf{y}- \omega^{1-2\gamma}\int_{\Gamma}\mathbf{n}\cdot(\mathbf{D}\nabla_{\mathbf{y}}c _{0})\mathrm{d}\mathbf{y}+\omega^{1-\gamma-\alpha}\int_{\Gamma}\mathbf{n} \cdot(c_{0}\mathbf{v}_{0})\mathrm{d}\mathbf{y}=0.\]
Accounting for (A.12) and the no-slip condition yields to
\[\int_{\Omega_{p}}\frac{\partial c_{0}}{\partial\tau}\mathrm{d}\mathbf{y}=0.\]
Since \(\frac{\partial c_{0}}{\partial\tau}\geq 0\), then \(\frac{\partial c_{0}}{\partial\tau}=0\), i.e. \(c_{0}=c_{0}(\mathbf{x},t)\).
### Terms of Order \(\mathbf{O(\omega^{0})}\)
Rearranging (A.8) and (A.10) give
\[\left(\frac{\partial c_{0}}{\partial t}+\frac{\partial c_{1}}{ \partial\tau}\right)-\nabla_{\mathbf{x}}\cdot(\mathbf{D}\nabla_{\mathbf{x}}c _{0})-\omega^{-\gamma}[\nabla_{\mathbf{x}}\cdot(\mathbf{D}\nabla_{\mathbf{y}} c_{0})]-\omega^{-\gamma}\nabla_{\mathbf{y}}\cdot[\mathbf{D}(\nabla_{\mathbf{x}}c_{0}+ \omega^{1-\gamma}\nabla_{\mathbf{y}}c_{1})]+\] \[+\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot(c_{0}\mathbf{v}_{0})+ \omega^{1-\gamma-\alpha}\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}c_{1}+\mathbf{ v}_{1}c_{0})=0,\] (A.13)
subject to
\[-\mathbf{n}\cdot\mathbf{D}(\nabla_{\mathbf{x}}c_{0}+\omega^{1-\gamma}\nabla_ {\mathbf{y}}c_{1})-\omega^{\beta}(c_{0}^{\alpha}-1)=0,\quad\mathbf{y}\in\Gamma.\] (A.14)
Integrating (A.13) with respect to \(\mathbf{y}\) and \(\tau\) over \(\mathcal{B}\) and \(\mathcal{I}\), respectively, while noting that \(\nabla_{\mathbf{y}}c_{0}\equiv 0\), and accounting for the divergence theorem and the boundary condition (A.14), leads to
\[\frac{\partial c_{0}}{\partial t}=-\left\langle\frac{\partial c_{1}}{\partial \tau}\right\rangle_{I\mathcal{B}}+\nabla_{\mathbf{x}}\cdot(\mathbf{D}\nabla_{ \mathbf{x}}(c_{0})_{I\mathcal{B}})-\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot(c_ {0}\langle\mathbf{v}_{0}\rangle_{I\mathcal{B}})-\mathcal{K}^{\bullet}\omega^{ \beta-\gamma}(c_{0}^{\alpha}-1),\] (A.15)
where \(\mathcal{K}^{\bullet}=|\Gamma|/|\mathcal{B}|\). Inserting (A.15) into (A.13) leads to
\[\frac{\partial c_{1}}{\partial\tau}-\left\langle\frac{\partial c_ {1}}{\partial\tau}\right\rangle_{I\mathcal{B}}-\omega^{-\alpha}\nabla_{\mathbf{ x}}\cdot(c_{0}\langle\mathbf{v}_{0}\rangle_{I\mathcal{B}})-\mathcal{K}^{\bullet} \omega^{\beta-\gamma}(c_{0}^{\alpha}-1)+\omega^{-\alpha}\nabla_{\mathbf{x}} \cdot(c_{0}\mathbf{v}_{0})\] \[-\omega^{-\gamma}\nabla_{\mathbf{y}}\cdot[\mathbf{D}(\nabla_{ \mathbf{x}}c_{0}+\omega^{1-\gamma}\nabla_{\mathbf{y}}c_{1})]+\omega^{1-\gamma -\alpha}\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}c_{1}+\mathbf{v}_{1}c_{0})=0,\] (A.16)
since \(c_{0}=\langle c_{0}\rangle_{I\mathcal{B}}\). Equation (A.16) is subject to (A.14). We look for a solution for \(c_{1}(\mathbf{x},t,\mathbf{y},\tau)\) in the following form
\[c_{1}(\mathbf{x},t,\mathbf{y},\tau)=\chi(\mathbf{y},\tau)\cdot\nabla_{\mathbf{x }}c_{0}+\lambda(\mathbf{y},\tau)\frac{\partial c_{0}}{\partial t}+\widetilde{c }_{1}(\mathbf{x},t),\] (A.17)
where \(\chi(\mathbf{y},\tau)\) and \(\lambda(\mathbf{y},\tau)\) are two unknown vector and scalar functions, and \(\widetilde{c}_{1}(\mathbf{x},t)\) is an integration function, respectively. We emphasize that for 'early' and 'pre-asymptotic' times, i.e. when neither time- or length-scales can be separated, or when no time-constraints are applicable but there is a separation of characteristic length scales, respectively, the postulated closure (A.33) should, at least, exhibit memory effects (see e.g. [_Valdes-Parada and Alvarez Ramirez_, 2011, 2012; _Wood and Valdes-Parada_, 2013]). Here, however, we are interested
in long times, _aka_ 'quasi-steady' state where both time- and spatial scales can be separated and local (in space and time) equations can be formulated. Inserting (A.33) into (A.16) and (A.14), while noticing that \(\nabla_{\mathbf{y}}\cdot\mathbf{v}_{0}\equiv 0\) and \(\nabla_{\mathbf{x}}\cdot\langle\mathbf{v}_{0}\rangle\equiv 0\)[_Auriault and Adler_, 1995] and \(\partial_{\tau}\overline{c}_{1}=\nabla_{\mathbf{y}}\overline{c}_{1}\equiv 0\), gives
\[\left[\frac{\partial\lambda}{\partial\tau}-\left\langle\frac{ \partial\lambda}{\partial\tau}\right\rangle_{\mathcal{IB}}-\omega^{1-2\gamma} \nabla_{\mathbf{y}}\cdot\left(\mathbf{D}\nabla_{\mathbf{y}}\lambda\right)+ \omega^{1-\gamma-\alpha}\mathbf{v}_{0}\cdot\nabla_{\mathbf{y}}\lambda\right] \frac{\partial c_{0}}{\partial t}+\] \[\left[\frac{\partial\chi}{\partial\tau}-\left\langle\frac{ \partial\chi}{\partial\tau}\right\rangle_{\mathcal{IB}}-\omega^{-\alpha} \langle\mathbf{v}_{0}\rangle_{\mathcal{IB}}+\omega^{-\alpha}\mathbf{v}_{0}- \omega^{-\gamma}\nabla_{\mathbf{y}}\cdot\left[\mathbf{D}(\mathbf{I}+\omega^ {1-\gamma}\nabla_{\mathbf{y}}\chi)\right]+\omega^{1-\gamma-\alpha}\mathbf{v }_{0}\cdot\nabla_{\mathbf{y}}\chi\right]\cdot\nabla_{\mathbf{x}}c_{0}+\] \[\omega^{-\alpha}(\nabla_{\mathbf{x}}\cdot\mathbf{v}_{0}+\omega^ {1-\gamma}\nabla_{\mathbf{y}}\cdot\mathbf{v}_{1})c_{0}+\omega^{1-\gamma- \alpha}\mathbf{v}_{0}\cdot\nabla_{\mathbf{y}}\overline{c}_{1}-\mathcal{K}^{ \star}\omega^{\beta-\gamma}(c_{0}^{a}-1)=0,\] (A.18)
where \(\mathbf{I}\) is the identity matrix. Equation (A.18) is subject to the boundary condition
\[-\mathbf{n}\cdot\mathbf{D}\left[(\mathbf{I}+\omega^{1-\gamma}\nabla_{\mathbf{ y}}\chi)\cdot\nabla_{\mathbf{x}}c_{0}+\omega^{1-\gamma}\nabla_{\mathbf{y}} \lambda\frac{\partial c_{0}}{\partial t}\right]=\omega^{\beta}(c_{0}^{a}-1).\] (A.19)
Expanding the continuity equation \(\nabla\cdot\mathbf{v}_{\omega}=\nabla_{\mathbf{x}}\cdot(\mathbf{v}_{0}+ \omega\mathbf{v}_{1}+\omega^{2}\mathbf{v}_{2})+e^{-1}\nabla_{\mathbf{y}}\cdot (\mathbf{v}_{0}+\omega\mathbf{v}_{1}+\omega^{2}\mathbf{v}_{2})=0\) leads to
\[\omega^{-1}(\omega^{1-\gamma}\nabla_{\mathbf{y}}\cdot\mathbf{v}_{0})+\omega ^{0}(\nabla_{\mathbf{x}}\cdot\mathbf{v}_{0}+\omega^{1-\gamma}\nabla_{\mathbf{ y}}\cdot\mathbf{v}_{1})+\omega(\nabla_{\mathbf{x}}\cdot\mathbf{v}_{1}+\omega^{1- \gamma}\nabla_{\mathbf{y}}\cdot\mathbf{v}_{2})=\mathcal{O}(\omega^{2})\] (A.20)
i.e. \(\nabla_{\mathbf{x}}\cdot\mathbf{v}_{0}+\omega^{1-\gamma}\nabla_{\mathbf{y}} \cdot\mathbf{v}_{1}==0\) and (A.18) reduces to
\[\left[\frac{\partial\chi}{\partial\tau}-\omega^{-\alpha}(\mathbf{ v}_{0})_{\mathcal{IB}}+\omega^{-\alpha}\mathbf{v}_{0}-\omega^{-\gamma}\nabla_{ \mathbf{y}}\cdot\left[\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{\mathbf{ y}}\chi)\right]+\omega^{1-\gamma-\alpha}\mathbf{v}_{0}\cdot\nabla_{\mathbf{y}} \chi\right]\cdot\nabla_{\mathbf{x}}c_{0}+\] \[\left[\frac{\partial\lambda}{\partial\tau}-\omega^{1-2\gamma} \nabla_{\mathbf{y}}\cdot\left(\mathbf{D}\nabla_{\mathbf{y}}\chi\right)+ \omega^{1-\gamma-\alpha}\mathbf{v}_{0}\cdot\nabla_{\mathbf{y}}\chi\right] \frac{\partial c_{0}}{\partial t}=\mathcal{K}^{\star}\omega^{\beta-\gamma}(c_ {0}^{a}-1),\] (A.21)
since \(\langle\lambda\rangle_{\mathcal{B}}=\langle\chi\rangle_{\mathcal{B}}=0\). In order to decouple the pore-scale from the continuum-scale, it is sufficient that the closure problem (A.21) is independent of macroscopic quantities, such as \(\frac{\partial c_{0}}{\partial t}\) and \(\nabla_{\mathbf{x}}c_{0}\). Therefore, one needs to impose that these terms are negligible relative to all others for all possible values of \(\alpha\), \(\beta\) and \(\gamma\). This results on constraining the exponents in the coefficients multiplying these coupling terms. Specifically, in order to separate scales, it is sufficient that
\[\beta-\gamma>M\] (A.22)
where
\[M:=\max\{0,-\gamma,1-\alpha-\gamma,-\alpha,1-2\gamma\}.\] (A.23)
Additionally, \(\beta>\max\{0,1-\gamma\}\), i.e.
\[\beta>0,\] (A.24)
since \(\gamma>0\). We emphasize that condition (A.24) is automatically satisfied if (A.22) is satisfied since both \(\gamma>0\) and \(M>0\). Once the conditions under which scales are decoupled have been identified, appropriate initial conditions need to be formulated. We start by expanding Eq. (A.3) at \(t=\tau=0\), i.e. \(c_{\omega}(\mathbf{x},t=0)=c_{\omega}(\mathbf{x})\)
\[c_{\omega}(\mathbf{x})=c_{0,\omega}(\mathbf{x})+\omega c_{1,\omega}(\mathbf{x}, \mathbf{y})=c_{0,\omega}(\mathbf{x})+\omega\left[X_{\omega}(\mathbf{y})\cdot \left.\nabla_{\mathbf{x}}c_{0}\right|_{t=0}+\lambda_{\omega}(\mathbf{y})\left. \frac{\partial c_{0}}{\partial t}\right|_{t=0}+\overline{c}_{1}(\mathbf{x},t=0)\right]\] (A.25)
At the leading order, \(c_{\omega}(\mathbf{x})=c_{0,\omega}\). At the order \(\omega\),
\[\left.\mathcal{X}_{\omega}(\mathbf{y})\cdot\left.\nabla_{\mathbf{x}}c_{0}\right|_ {t=0}+\lambda_{\omega}(\mathbf{y})\left.\frac{\partial c_{0}}{\partial t} \right|_{t=0}=0,\] (A.26)
if we set \(c_{1}(\mathbf{x},t=0)=0\). Since \(\nabla_{\mathbf{x}}c_{0}|_{t=0}\) and \(\left.\frac{\partial c_{0}}{\partial t}\right|_{t=0}\) are known functions of \(\mathbf{x}\), the compatibility condition (A.26) requires \(\chi_{\text{in}}(\mathbf{y})=\lambda_{\text{in}}(\mathbf{y})=0\). The former conditions allow one to write the following closure problems for \(\chi\) and \(\lambda\),
\[\frac{\partial\chi}{\partial\tau}-\omega^{-\alpha}\langle\mathbf{v}_{0} \rangle_{T\mathcal{B}}+\omega^{-\alpha}\mathbf{v}_{0}-\omega^{-\gamma}\nabla_ {\mathbf{y}}\cdot\left[\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\chi)\right]+\omega^{1-\gamma-\alpha}\mathbf{v}_{0}\cdot\nabla_{ \mathbf{y}}\chi=0,\] (A.27)
subject to
\[-\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\chi)=0,\] (A.28) \[\chi(\mathbf{y},\tau=0)=\chi_{\text{in}}(\mathbf{y})=0,\] (A.29)
and
\[\frac{\partial\lambda}{\partial\tau}-\omega^{1-2\gamma}\nabla_{\mathbf{y}} \cdot\left(\mathbf{D}\nabla_{\mathbf{y}}\chi\right)+\omega^{1-\gamma-\alpha} \mathbf{v}_{0}\cdot\nabla_{\mathbf{y}}\lambda=0,\] (A.30)
subject to
\[-\mathbf{n}\cdot\mathbf{D}\nabla_{\mathbf{y}}\lambda=0,\] (A.31) \[\lambda(\mathbf{y},\tau=0)=\lambda_{\text{in}}(\mathbf{y})=0.\] (A.32)
It is important to note that the closure problem for \(\lambda\) is homogeneous, i.e. the postulation for \(c_{1}\) for long times, reduces to the classical closure
\[c_{1}(\mathbf{x},t,\mathbf{y},\tau)=\chi(\mathbf{y},\tau)\cdot\nabla_{\mathbf{ x}}c_{0}+\overline{c}_{1}(\mathbf{x},t).\] (A.33)
i.e. \(\lambda\equiv 0\).
#### a.2.1 Conditions
In this section, we investigate how (A.22) translates into constraints on \(\alpha\) and \(\beta\) for different values of \(\gamma\). We do so by hypothesizing the value of the maximum \(M\), defined by (A.2), among the four possible scenarios: \(M=0\), \(M=1-\alpha-\gamma\), \(M=-\alpha\) and \(M=1-2\gamma\). We emphasize that once the physical system under study is identified both in terms of physical domain (i.e. \(\epsilon\)), boundary conditions (i.e. \(\gamma\)) and dynamic regimes (\(\alpha\) and \(\beta\)), the parameters \(\epsilon\), \(\gamma\), \(\alpha\) and \(\beta\) are fixed, \(M\) is a uniquely defined scalar, and (A.22) must be satisfied if scales are decoupled. If (A.22) is not satisfied, then (29) may not represent spatio-temporally averaged pore-scale processes with the accuracy prescribed by the homogenization procedure. In the following, we rewrite the applicability condition (A.22) in terms of Da and Pe, so that its ramification on dynamical regimes is made explicit.
#### a.2.1.1 When \(M=0\)
Conditions (A.22) are reformulated as
\[\begin{cases}\alpha>0\\ \gamma>1/2\\ \alpha>1-\gamma\end{cases}\Rightarrow\beta>\gamma,\] (A.34)
i.e. Da\(<\epsilon\).
#### a.2.1.2 When \(M=1-\alpha-\gamma\)
Conditions (A.22) are reformulated as
\[\begin{cases}\alpha<\gamma\\ \gamma<1\\ \alpha<1-\gamma\end{cases}\Rightarrow\beta>1-\alpha,\] (A.35)
i.e. Da\(/\)Pe \(<\omega\).
#### a.2.1.3 When \(M=-\alpha\)
Conditions (A.22) are reformulated as
\[\begin{cases}\alpha<0\\ \gamma>1\end{cases}\quad\Rightarrow\beta>\gamma-\alpha,\] (A.36)
i.e. \(\mathrm{Da}/\mathrm{Pe}<\varepsilon\).
#### a.2.1.4 When \(M=1-2\gamma\)
Conditions (A.22) are reformulated as
\[\begin{cases}\alpha>\gamma\\ \gamma<1/2\end{cases}\quad\Rightarrow\beta>1-\gamma,\] (A.37)
i.e. \(\mathrm{Da}<\omega/\varepsilon\).
We emphasize that the case \(M=-\gamma\) requires \(\gamma<0\). This violates the assumption that \(\gamma>0\). As a result, this case is not self-consistent with the homogenization procedure and should be ignored.
The previous conditions are summarized in the \((\alpha,\gamma)\)-plane of Figure C.1.
The system behavior can be classified based on the magnitude of \(\gamma\):
* When \(\gamma>1\), i.e. \(\varepsilon<\omega\), the system is referred to as _slowly fluctuating_; the conditions to guarantee that scale separation occur are summarized in the \((\alpha,\beta)\)-plane in the Figure 1(a);
* When \(1/2<\gamma<1\), i.e. \(\omega<\varepsilon<\omega^{1/2}\) (or \(\omega\approx\varepsilon\)), the system is referred to as _moderately fluctuating_; the conditions to guarantee that scale separation occur are summarized in the \((\alpha,\beta)\)-plane in the Figure 1(b);
* When \(0<\gamma<1/2\), i.e. \(\omega^{1/2}<\varepsilon<1\) (or \(\varepsilon\gg\omega\)), the system is referred to as _highly fluctuating_; the conditions to guarantee that scale separation occur are summarized in the \((\alpha,\beta)\)-plane in the Figure 1(c).
### Terms of Order \(\boldsymbol{O(\omega^{1})}\)
At the following order, we have
\[\left(\frac{\partial c_{1}}{\partial t}+\frac{\partial c_{2}}{ \partial\tau}\right)-\nabla_{\mathbf{x}}\cdot(\mathbf{D}\nabla_{\mathbf{x}}c_ {1})-\omega^{-\gamma}[\nabla_{\mathbf{x}}\cdot(\mathbf{D}\nabla_{\mathbf{y}}c _{1})]+\] \[-\omega^{-\gamma}\nabla_{\mathbf{y}}\cdot\mathbf{D}(\nabla_{ \mathbf{x}}c_{1}+\omega^{1-\gamma}\nabla_{\mathbf{y}}c_{2})+\omega^{-\alpha} \nabla_{\mathbf{x}}\cdot(\mathbf{v}_{0}c_{1}+\mathbf{v}_{1}c_{0})+\omega^{1- \gamma-\alpha}\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}c_{2}+\mathbf{v}_{1}c_{ 1}+\mathbf{v}_{2}c_{0})=0\] (A.38)
subject to
\[-\mathbf{n}\cdot\mathbf{D}(\nabla_{\mathbf{x}}c_{1}+\omega^{1- \gamma}\nabla_{\mathbf{y}}c_{2})-\omega^{\beta}ac_{0}^{a-1}c_{1}=0.\] (A.39)
Integrating (A.38) over \(\mathcal{B}\) and \(\mathcal{I}\) with respect to \(\mathbf{y}\) and \(\tau\), while accounting for (A.33), \(\langle\chi\rangle=0\), we obtain,
\[\left\langle\frac{\partial c_{1}}{\partial t}\right\rangle_{ \mathcal{IB}}+\left\langle\frac{\partial c_{2}}{\partial\tau}\right\rangle_{ \mathcal{IB}}-\nabla_{\mathbf{x}}\cdot\left[\mathbf{D}\nabla_{\mathbf{x}} \left(\left\langle\chi(\mathbf{y},\tau)\right\rangle_{\mathcal{IB}}\cdot \nabla_{\mathbf{x}}c_{0}+\overline{c}_{1}(\mathbf{x},t)\right)\right]\] \[-\omega^{-\gamma}\left[\nabla_{\mathbf{x}}\cdot\left\langle \mathbf{D}\nabla_{\mathbf{y}}\right.(\chi(\mathbf{y},\tau)\cdot\nabla_{ \mathbf{x}}c_{0}+\overline{c}_{1}(\mathbf{x},t))\right\rangle_{\mathcal{IB}}\right]\] \[-\omega^{-\gamma}\left\langle\nabla_{\mathbf{y}}\cdot\mathbf{D}( \nabla_{\mathbf{x}}c_{1}+\omega^{1-\gamma}\nabla_{\mathbf{y}}c_{2})\right\rangle _{\mathcal{IB}}+\omega^{1-\gamma-\alpha}\left\langle\nabla_{\mathbf{y}}\cdot( \mathbf{v}_{0}c_{2}+\mathbf{v}_{1}c_{1}+\mathbf{v}_{2}c_{0})\right\rangle_{ \mathcal{IB}}\] \[+\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot(\mathbf{v}_{0}c_{1}+ \mathbf{v}_{1}c_{0})_{\mathcal{IB}}=0\] (A.40)
The third term in (A.40) is identically equal to zero since \(\left\langle\chi\right\rangle=0\) and the arbitrary integrating function \(\overline{c}_{1}\) can be selected such that \(\overline{\nu}_{\mathbf{x}}\cdot\left(\mathbf{D}\overline{\nu}_{\mathbf{x}} \overline{c}_{1}\right)=0\), i.e. if \(\overline{c}_{1}\) is linear in \(\mathbf{x}\). Similarly, \(\left\langle\nabla_{\mathbf{y}}\cdot\left(\mathbf{v}_{0}c_{2}+\mathbf{v}_{1}c _{1}+\mathbf{v}_{2}c_{0}\right)\right\rangle_{I\mathcal{B}}=0\) because of the divergence theorem, the no-slip boundary condition on \(\Gamma\) and periodicity on the unit cell boundaries. Therefore, (A.40) simplifies to
\[\left\langle\frac{\partial c_{1}}{\partial t}\right\rangle_{I \mathcal{B}}+\left\langle\frac{\partial c_{2}}{\partial\tau}\right\rangle_{I \mathcal{B}}-\omega^{-\gamma}\left[\nabla_{\mathbf{x}}\cdot\left(\left\langle \mathbf{D}\overline{\nu}_{\mathbf{y}}\chi(\mathbf{y},\tau)\right\rangle_{I \mathcal{B}}\cdot\nabla_{\mathbf{x}}c_{0}\right)\right]\] \[+\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot\left(\mathbf{v}_{0}c_{ 1}+\mathbf{v}_{1}c_{0}\right)_{I\mathcal{B}}-\omega^{-\gamma}\left(\nabla_{ \mathbf{y}}\cdot\mathbf{D}(\nabla_{\mathbf{x}}c_{1}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}c_{2})\right)_{I\mathcal{B}}=0.\] (A.41)
We proceed further by analyzing the last two terms separately. We start with the fourth term in (A.41), \(\nabla_{\mathbf{x}}\cdot\left(\mathbf{v}_{0}c_{1}+\mathbf{v}_{1}c_{0}\right)_ {I\mathcal{B}}\). Combining it with (A.33) and \(\mathbf{v}_{0}=-\mathbf{k}(\mathbf{y})\cdot\nabla_{\mathbf{x}}P_{0}\) one obtains
\[\nabla_{\mathbf{x}}\cdot\left(\mathbf{v}_{0}c_{1}+\mathbf{v}_{1}c_{0}\right)_ {I\mathcal{B}}=-\nabla_{\mathbf{x}}\cdot\left\langle\mathbf{k}\nabla_{ \mathbf{x}}P_{0}\left(\chi\cdot\nabla_{\mathbf{x}}c_{0}+\overline{c}_{1} \right)\right\rangle_{I\mathcal{B}}+\nabla_{\mathbf{x}}\cdot\left\langle \mathbf{v}_{1}c_{0}\right\rangle_{I\mathcal{B}}.\] (A.42)
Using Einstein notation convention and indicial notation, one can write
\[\nabla_{\mathbf{x}}\cdot\left\langle\mathbf{v}_{0}c_{1}+\mathbf{v }_{1}c_{0}\right\rangle_{I\mathcal{B}} =\frac{\partial}{\partial x_{i}}\left\langle v_{0i}c_{1}+v_{1i}c_ {0}\right\rangle_{I\mathcal{B}}\] \[=-\frac{\partial}{\partial x_{i}}\left\langle k_{ij}\frac{ \partial P_{0}}{\partial x_{j}}\left(\chi_{m}\frac{\partial c_{0}}{\partial x _{m}}+\overline{c}_{1}\right)\right\rangle_{I\mathcal{B}}+\frac{\partial}{ \partial x_{i}}\left\langle v_{1i}c_{0}\right\rangle_{I\mathcal{B}}\] \[=-\left\langle k_{ij}\chi_{m}\right\rangle_{I\mathcal{B}}\left( \frac{\partial^{2}P_{0}}{\partial x_{i}\partial x_{j}}\frac{\partial c_{0}}{ \partial x_{m}}+\frac{\partial P_{0}}{\partial x_{j}}\frac{\partial^{2}c_{0}}{ \partial x_{i}\partial x_{m}}\right)\] \[\quad-\left\langle k_{ij}\right\rangle_{I\mathcal{B}}\frac{ \partial}{\partial x_{i}}\left(\frac{\partial P_{0}}{\partial x_{j}}\overline{c }_{1}\right)+\frac{\partial}{\partial x_{i}}\left\langle v_{1i}c_{0}\right\rangle _{I\mathcal{B}}.\] (A.43)
Noticing that \(\nabla_{\mathbf{x}}\cdot\left\langle\mathbf{v}_{0}\right\rangle_{I\mathcal{B}}\equiv 0\), this results in
\[\frac{\partial\left\langle v_{0i}\right\rangle_{I\mathcal{B}}}{\partial x_{i} }=-\frac{\partial}{\partial x_{i}}\left(\left\langle k_{ij}\right\rangle_{I \mathcal{B}}\frac{\partial P_{0}}{\partial x_{j}}\right)=-\langle k_{ij} \rangle_{I\mathcal{B}}\frac{\partial^{2}P_{0}}{\partial x_{i}\partial x_{j}} \equiv 0\] (A.44)
i.e. \(\partial_{x_{i}x_{j}}^{2}P_{0}\equiv 0\), since \(\langle k_{ij}\rangle_{I\mathcal{B}}\neq 0\). Therefore, (A.43) can be simplified as follows
\[\nabla_{\mathbf{x}}\cdot\left\langle\mathbf{v}_{0}c_{1}+\mathbf{v }_{1}c_{0}\right\rangle_{I\mathcal{B}} =-\frac{\partial^{2}c_{0}}{\partial x_{i}\partial x_{m}}\left\langle \chi_{m}k_{ij}\right\rangle_{I\mathcal{B}}\frac{\partial P_{0}}{\partial x_{j} }-\frac{\partial}{\partial x_{i}}\left(\left\langle k_{ij}\right\rangle_{I \mathcal{B}}\frac{\partial P_{0}}{\partial x_{j}}\overline{c}_{1}\right)+ \frac{\partial}{\partial x_{i}}\left\langle v_{1i}c_{0}\right\rangle_{I \mathcal{B}}\] \[=-\left[\left\langle\chi\mathbf{k}\right\rangle_{I\mathcal{B}} \cdot\nabla_{\mathbf{x}}P_{0}\right]_{mi}\frac{\partial}{\partial x_{i}}\left( \frac{\partial c_{0}}{\partial x_{m}}\right)\] \[\quad-\frac{\partial}{\partial x_{i}}\left(\left[\left\langle \mathbf{k}\right\rangle_{I\mathcal{B}}\cdot\nabla_{\mathbf{x}}P_{0}\right]_{i} \overline{c}_{1}\right)+\frac{\partial}{\partial x_{i}}\left\langle v_{1i}c_{0} \right\rangle_{I\mathcal{B}}\] \[=-\left[\left(\left\langle\chi\mathbf{k}\right\rangle_{I \mathcal{B}}\cdot\nabla_{\mathbf{x}}P_{0}\right)\cdot\nabla_{\mathbf{x}}\right] \cdot\nabla_{\mathbf{x}}c_{0}\] \[\quad-\nabla_{\mathbf{x}}\cdot\left(\left\langle\mathbf{k}\right \rangle_{I\mathcal{B}}\cdot\nabla_{\mathbf{x}}P_{0}\overline{c}_{1}\right)+ \nabla_{\mathbf{x}}\cdot\left(\left\langle\mathbf{v}_{1}\right\rangle_{I \mathcal{B}}c_{0}\right)\] (A.45)
Using the divergence theorem and the boundary condition (A.39), the last term in (A.41) can be written as
\[\omega^{-\gamma}\left\langle\nabla_{\mathbf{y}}\cdot\mathbf{D}(\nabla_{\mathbf{x }}c_{1}+\omega^{1-\gamma}\nabla_{\mathbf{y}}c_{2})\right\rangle_{I\mathcal{B}}=- \omega^{\beta-\gamma}\mathcal{K}^{\star}ac_{0}^{a-1}\langle c_{1}\rangle_{I \Gamma},\] (A.46)
where \(\mathcal{K}^{\star}=\frac{|\Gamma|}{|\mathcal{B}|}\). Inserting (A.45) and (A.46) in (A.41), white noting that \(\left\langle A\right\rangle=\phi\langle A\rangle_{I\mathcal{B}}\) and \(\overline{c}_{1}=\langle c_{1}\rangle\), we obtain
\[\left\langle\frac{\partial c_{1}}{\partial t}\right\rangle_{I \mathcal{B}}+\left\langle\frac{\partial c_{2}}{\partial\tau}\right\rangle_{I \mathcal{B}}-\phi^{-1}\omega^{-\gamma}\left[\nabla_{\mathbf{x}}\cdot\left( \left\langle\mathbf{D}\overline{\nu}_{\mathbf{y}}\chi\right\rangle\cdot\nabla_{ \mathbf{x}}c_{0}\right)\right]-\phi^{-1}\omega^{-\alpha}\left[\left(\left\langle \chi\mathbf{k}\right\rangle\cdot\nabla_{\mathbf{x}}P_{0}\right)\cdot\nabla_{ \mathbf{x}}\right]\cdot\nabla_{\mathbf{x}}c_{0}\] \[\quad+\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot\left(\left\langle \mathbf{v}_{0}\right\rangle_{I\mathcal{B}}\overline{c}_{1}\right)+\omega^{- \alpha}\nabla_{\mathbf{x}}\cdot\left(\left\langle\mathbf{v}_{1}\right\rangle_{I \mathcal{B}}c_{0}\right)+\omega^{\beta-\gamma}\mathcal{K}^{\star}ac_{0}^{a-1} \langle c_{1}\rangle_{I\Gamma}=0.\] (A.47)
Importantly, since \([(\left\langle\chi\mathbf{k}\right\rangle\cdot\nabla_{\mathbf{x}}P_{0})\cdot \nabla_{\mathbf{x}}]\cdot\nabla_{\mathbf{x}}c_{0}=\nabla_{\mathbf{x}}\cdot[( \left\langle\chi\mathbf{k}\right\rangle\cdot\nabla_{\mathbf{x}}P_{0})\cdot \nabla_{\mathbf{x}}c_{0}]\) because of (A.44), (A.47) can be rearranged as follows
\[\left\langle\frac{\partial c_{1}}{\partial t}\right\rangle_{I \mathcal{B}}+\left\langle\frac{\partial c_{2}}{\partial\tau}\right\rangle_{I \mathcal{B}}-\phi^{-1}\omega^{-1}\nabla_{\mathbf{x}}\cdot\left[\left(\omega^{1 -\gamma}\left\langle\mathbf{D}\nabla_{\mathbf{y}}\chi\right\rangle+\omega^{1 -\alpha}\left\langle\chi\mathbf{k}\right\rangle\cdot\nabla_{\mathbf{x}}P_{0} \right)\cdot\nabla_{\mathbf{x}}c_{0}\right]\] \[+\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot\left(\left\langle \mathbf{v}_{0}\right\rangle_{I\mathcal{B}}\overline{c}_{1}+\left\langle \mathbf{v}_{1}\right\rangle_{I\mathcal{B}}c_{0}\right)+\omega^{\beta-\gamma} \mathcal{K}^{\star}ac_{0}^{\alpha-1}\langle c_{1}\rangle_{I\Gamma}=0.\] (A.48)
Let
\[\tilde{\mathbf{D}}^{\star}=\omega^{1-\gamma}\left\langle\mathbf{D}\nabla_{ \mathbf{y}}\chi\right\rangle+\omega^{1-\alpha}\left\langle\chi\mathbf{k} \right\rangle\cdot\nabla_{\mathbf{x}}P_{0}\] (A.49)
\(\tilde{\mathbf{D}}^{\star}\) is a positive definite tensor. Accordingly, (A.48) can be written as
\[\omega\left\langle\frac{\partial c_{1}}{\partial t}\right\rangle_{I \mathcal{B}}+\omega\left\langle\frac{\partial c_{2}}{\partial\tau}\right\rangle _{I\mathcal{B}}-\phi^{-1}\nabla_{\mathbf{x}}\cdot\left(\tilde{\mathbf{D}}^{ \star}\cdot\nabla_{\mathbf{x}}\left\langle c_{0}\right\rangle\right)\] \[+\omega^{1-\alpha}\nabla_{\mathbf{x}}\cdot\left(\left\langle \mathbf{v}_{0}\right\rangle_{I\mathcal{B}}\overline{c}_{1}+\left\langle \mathbf{v}_{1}\right\rangle_{I\mathcal{B}}c_{0}\right)+\omega^{\beta-\gamma} \mathcal{K}^{\star}\left(a\omega c_{0}^{\alpha-1}\langle c_{1}\rangle_{I \Gamma}\right)=0.\] (A.50)
Calculating \(\left\langle\partial c_{\omega}/\partial t\right\rangle_{I\mathcal{B}}\), while retaining terms up to the second order gives
\[\left\langle\frac{\partial c}{\partial t}\right\rangle_{I\mathcal{B}}=\frac{ \partial c_{0}}{\partial t}+\left\langle\frac{\partial c_{1}}{\partial\tau} \right\rangle_{I\mathcal{B}}+\omega\left(\left\langle\frac{\partial c_{1}}{ \partial t}\right\rangle_{I\mathcal{B}}+\left\langle\frac{\partial c_{2}}{ \partial\tau}\right\rangle_{I\mathcal{B}}\right)+\mathcal{O}(\omega^{2}).\] (A.51)
where \(\left\langle\frac{\partial c}{\partial t}\right\rangle_{I\mathcal{B}}=\frac{ \partial\left\langle c\right\rangle_{I\mathcal{B}}}{\partial t}\) because of the Leibniz rule. Adding (A.50) with (A.15) while accounting for (A.51), yields
\[\phi\frac{\partial\left\langle c\right\rangle_{I\mathcal{B}}}{ \partial t} =\nabla_{\mathbf{x}}\cdot\left(\tilde{\mathbf{D}}^{\star}\nabla_{ \mathbf{x}}\left\langle c_{0}\right\rangle_{I\mathcal{B}}\right)+\nabla_{ \mathbf{x}}\cdot\left(\mathbf{D}\nabla_{\mathbf{x}}\left\langle c_{0}\right \rangle_{I\mathcal{B}}\right)\] \[-\omega^{-\alpha}\nabla_{\mathbf{x}}\cdot\left(\omega\left\langle \mathbf{v}_{0}\right\rangle\overline{c}_{1}+\omega\left\langle\mathbf{v}_{1} \right\rangle c_{0}+c_{0}\langle\mathbf{v}_{0}\rangle_{I\mathcal{B}}\right)\] \[+\phi\mathcal{K}^{\star}\omega^{\beta-\gamma}(1-c_{0}^{\alpha}-a \omega c_{0}^{\alpha-1}\langle c_{1}\rangle_{I\Gamma}).\] (A.52)
Since \(\overline{c}_{1}=\langle c_{1}\rangle_{I\mathcal{B}}\) and \(\langle c_{0}\rangle_{I\mathcal{B}}\langle\mathbf{v}_{0}\rangle=\langle c_{0} \rangle\langle\mathbf{v}_{0}\rangle_{I\mathcal{B}}\), then
\[\langle c\rangle_{I\mathcal{B}}\langle\mathbf{v}\rangle=\langle c_{0} \rangle\langle\mathbf{v}_{0}\rangle_{I\mathcal{B}}+\omega c_{0}\langle \mathbf{v}_{1}\rangle+\omega\overline{c}_{1}\langle\mathbf{v}_{0}\rangle+ \mathcal{O}(\omega^{2}).\] (A.53)
Assuming that \(\quad\langle\chi\rangle_{I\Gamma}\approx\langle\chi\rangle_{I\mathcal{B}}\), then \(\langle c_{1}\rangle_{I\Gamma}\approx\langle c_{1}\rangle_{I\mathcal{B}}\) and
\[\langle c_{0}\rangle_{I\mathcal{B}}^{a}+\omega a(c_{0})_{I\mathcal{B}}^{a-1} \langle c_{1}\rangle_{I\Gamma}\approx\langle c_{0}\rangle_{I\mathcal{B}}^{a}+ \omega a(c_{0})_{I\mathcal{B}}^{a-1}\langle c_{1}\rangle_{I\mathcal{B}}= \langle c\rangle_{I\mathcal{B}}^{a}+O(\omega^{2}).\] (A.54)
Defining
\[\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{\mathbf{y}}\chi)\rangle+\omega^{1-\alpha}\langle\chi\mathbf{k}\rangle \cdot\nabla_{\mathbf{x}}P_{0},\] (A.55)
(A.52) becomes
\[\phi\frac{\partial\langle c\rangle_{I\mathcal{B}}}{\partial t}=\nabla\cdot( \tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{I\mathcal{B}}-\operatorname{ Pe}\langle c\rangle_{I\mathcal{B}}\langle\mathbf{v}\rangle)+\phi\omega^{-\gamma} \mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{I\mathcal{B}}^{a}),\] (A.56)
which approximates the space-time average of \(c_{\omega}\) up to an error of order \(\omega^{2}\).
## Appendix B Equations summary
### Slowly Fluctuating Regimes: \(\varepsilon<\omega\)
#### b.1.1 Pe \(<1\)
\[\phi\frac{\partial\langle c\rangle_{I\mathcal{B}}}{\partial t}=\nabla\cdot\left[ \tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{I\mathcal{B}}\right]+\phi \omega^{-\gamma}\mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{I \mathcal{B}}^{a}),\]
with
\[\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{ \mathbf{y}}\chi)\rangle\] (B.1)
and \(\chi\) defined as the solution of the following boundary value problem in the unit cell \(\mathcal{B}\)
\[\nabla_{\mathbf{y}}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{\mathbf{y}} \chi)=0,\quad\text{subject to}\quad\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+ \omega^{1-\gamma}\nabla_{\mathbf{y}}\chi)=0.\] (B.2)
#### b.1.2 \(1<\mathbf{Pe}<\omega^{-1}\)
\[\phi\frac{\partial\langle c\rangle_{\mathcal{IB}}}{\partial t}=\nabla\cdot\left[ \tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{\mathcal{IB}}-\mathrm{Pe} \langle c\rangle_{\mathcal{IB}}(\mathbf{v})_{\mathcal{IB}}\right]+\phi\omega^{- \gamma}\mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{\mathcal{IB}}^{a}),\]
with
\[\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{y}\mathcal{X})\rangle+\omega^{1-\alpha}(\chi\mathbf{k})\cdot\nabla_{ \mathbf{x}}P_{0},\] (B.3)
and \(\mathcal{X}\) defined as the solution of the following boundary value problem in the unit cell \(\mathcal{B}\)
\[\mathbf{D}\nabla_{y}^{2}\mathcal{I}=0,\quad\text{subject to}\quad \mathbf{n}\cdot\mathbf{D}\nabla_{y}\mathcal{I}=0\text{ on }\Gamma,\] \[\nabla_{\mathbf{y}}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{y}\mathcal{X})=0,\quad\text{subject to}\quad\mathbf{n}\cdot\mathbf{D}( \mathbf{I}+\omega^{1-\gamma}\nabla_{y}\mathcal{X})=0\text{ on }\Gamma.\] (B.4)
### Moderately Fluctuating Regimes: \(\omega^{1/2}<\varepsilon<1\)
\[\phi\frac{\partial\langle c\rangle_{\mathcal{IB}}}{\partial t}=\nabla\cdot \left[\tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{\mathcal{IB}}-\mathrm{ Pe}\langle c\rangle_{\mathcal{IB}}(\mathbf{v})_{\mathcal{IB}}\right]+\phi\omega^{- \gamma}\mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{\mathcal{IB}}^{a}),\]
with
\[\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{y}\mathcal{X})\rangle+\omega^{1-\alpha}(\chi\mathbf{k})\cdot\nabla_{ \mathbf{x}}P_{0}\] (B.5)
and \(\mathcal{X}\) defined as the solution of the following boundary value problem in the unit cell \(\mathcal{B}\)
\[\omega^{-\alpha}(\mathbf{v}_{0}-\langle\mathbf{v}_{0}\rangle)- \omega^{-\gamma}\nabla_{y}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_ {y}\mathcal{X})+\omega^{1-\gamma-\alpha}\mathbf{v}_{0}\cdot(\nabla_{y} \mathcal{X})=0,\] \[\text{subject to}\quad\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^ {1-\gamma}\nabla_{y}\mathcal{X})=0,\text{ on }\Gamma.\] (B.6)
### Highly Fluctuating Regimes: \(\varepsilon\gg\omega\)
#### b.3.1 Pe \(<1\)
\[\phi\frac{\partial\langle c\rangle_{\mathcal{IB}}}{\partial t}=\nabla\cdot \left[\tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{\mathcal{IB}}\right] +\phi\omega^{-\gamma}\mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{ \mathcal{IB}}^{a}),\]
with
\[\tilde{\mathbf{D}}^{\star}=\langle\mathbf{D}(\mathbf{I}+\omega^{1-\gamma} \nabla_{y}\mathcal{X})\rangle\] (B.7)
and \(\mathcal{X}\) defined as the solution of the following boundary value problem in the unit cell \(\mathcal{B}\)
\[\frac{\partial\mathcal{X}}{\partial\tau}-\omega^{-\gamma}\nabla_{y}\cdot \mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{y}\mathcal{X})+\omega^{-\alpha }(\mathbf{v}_{0}-\langle\mathbf{v}_{0}\rangle)=0\] (B.8)
subject to
\[-\mathbf{n}\cdot\mathbf{D}(\mathbf{I}+\omega^{1-\gamma}\nabla_{y} \mathcal{X})=0\quad\text{on}\quad\Gamma,\] \[\mathcal{X}(\mathbf{y},\tau=0)=\mathcal{X}_{\mathrm{in}}(\mathbf{ y})=0.\] (B.9)
#### b.3.2 \(1<\mathbf{Pe}<\omega^{-1}\)
\[\phi\frac{\partial\langle c\rangle_{\mathcal{IB}}}{\partial t}=\nabla\cdot \left[\tilde{\mathbf{D}}^{\star}\nabla\langle c\rangle_{\mathcal{IB}}-\mathrm{ Pe}\langle c\rangle_{\mathcal{IB}}(\mathbf{v})_{\mathcal{IB}}\right]+\phi\omega^{- \gamma}\mathcal{K}^{\star}\mathrm{Da}(1-\langle c\rangle_{\mathcal{IB}}^{a}),\]
with
\[\tilde{\tilde{\bf D}}^{\star}=\langle{\bf D}({\bf I}+\omega^{1-\gamma} \nabla_{\bf y}\chi)\rangle+\omega^{1-\alpha}(\chi{\bf k})\cdot\nabla_{\bf x}P_{0}\] (B.10)
and \(\chi\) defined as the solution of the following boundary value problem in the unit cell \({\cal B}\)
\[\frac{\partial\chi}{\partial\tau}+\omega^{-\alpha}({\bf v}_{0}- \langle{\bf v}_{0}\rangle)+\omega^{1-\gamma-\alpha}{\bf v}_{0}\cdot(\nabla_{ \bf y}\chi)=0,\]
subject to
\[-{\bf n}\cdot{\bf D}({\bf I}+\omega^{1-\gamma}\nabla_{\bf y}\chi )=0\quad\mbox{on}\quad\Gamma,\] \[\chi({\bf y},\tau=0)=\chi_{\alpha}({\bf y})=0.\] (B.11)
## Appendix C Nomenclature
\({\cal B}:\) Pore-scale domain in the unit cell \(Y\)
\({\cal I}:\) Temporal unit cell
\(c_{\omega}:\) Dimensionless pore-scale concentration
\(c_{\omega}({\bf x}):\) Dimensionless initial pore-scale concentration
\(c_{D}(t):\) Dimensionless time-varying concentration at a Dirichlet boundary \(\partial\Omega_{D}\)
\(\langle c\rangle_{{\cal I}{\cal B}}:\) Average of pore-scale concentration over the pore volume \({\cal B}\) and the time interval \({\cal I}\)
\(\langle c\rangle:\) Average of pore-scale concentration over the unit cell \(Y\) and the time interval \({\cal I}\), such that \(\langle c\rangle=\phi\langle c\rangle_{{\cal I}{\cal B}}\)
\({\bf D}:\) Dimensionless molecular diffusion coefficient
Da : Damkohler number
Pe : Peclet number
\(l:\) Characteristic length of periodic unit cell \(Y\)
\(L:\) Characteristic length of the macroscopic porous medium domain \(\Omega\)
\(a:\) Order of the heterogeneous reaction
\(\hat{p}:\) Dimensional dynamic pressure
\(\mu:\) Dynamic viscosity of the fluid
\(\varepsilon=\frac{l}{L}:\) Spatial scale separation parameter
\(\omega=\frac{\hat{\tau}}{T}:\) Temporal scale separation parameter
\(\phi:\) Unit cell porosity
\(\hat{\Omega}:\) Porous medium domain
\(\hat{\Omega}_{p}:\) Volume of the pore phase in \(\hat{\Omega}\)
\(\hat{\Omega}_{s}:\) Volume of the solid phase in \(\hat{\Omega}\)
\(\partial\hat{\Omega}:\) Outer boundary of the porous medium \(\hat{\Omega}\)
\(\hat{\Gamma}:\) Boundary between solid and pore phase
\(\hat{\bf v}_{\varepsilon}:\) Dimensional pore-scale velocity
\(\chi:\) Closure variable in the unit cell
\(\hat{Y}:\) Unit cell domain
\(\hat{\cal B}:\) Solid phase in the unit cell domain \(Y\)
\(\hat{\cal G}:\) Pore phase in the unit cell domain \(Y\)
**x** : Slow spatial scale
\(t\) : Slow time scale
\(\mathbf{y}\) : Fast spatial scale
\(\tau\) : Fast time scale
\(U\) : Characteristic velocity
\(p\) : Dimensionless pressure
\(\hat{t}_{\mbox{\tiny{atmin}}}\) : Dimensional time-scale for diffusion at microscale
\(\hat{t}_{\mbox{\tiny{atmin}}}\) : Dimensional time-scale for diffusion at microscale
\(\hat{t}_{\mbox{\tiny{atmin}}}\) : Dimensional time-scale for advection at microscale
\(\hat{t}_{\mbox{\tiny{atmin}}}\) : Dimensional time-scale for advection at macroscale
\(\tau_{c}=\frac{L^{2}}{D}\) : Characteristic time
\(T\) : Observation time-scale
\(\hat{\tau}_{a}\) : Advection time-scale
\(\hat{\tau}_{d}\) : Diffusion time-scale
\(\hat{\tau}_{r}\) : Reaction time-scale
\(\hat{k}\) : Dimensional pore-scale heterogeneous reaction rate
\(\gamma\) : The parameter connecting spatial and temporal scale separation parameters
\(\psi_{\varepsilon}\) : Any arbitrary pore-scale quantity
\(\alpha\) : Parameter defining Peclet, Pe = \(\omega^{-\alpha}\)
\(\beta\) : Parameter defining Damkohler, Da = \(\omega^{\beta}\)
**K** : Dimensionless permeability tensor
**k** : Closure variable
**a** : Closure variable
\(\mathcal{K}^{\star}\) : Effective reaction rate
\(\underline{\mathbf{D}}^{\star}\) : Effective dispersion tensor
\(\nabla_{x}P_{0}\) : Macroscopic pressure gradient
**n** : Unit vector normal to the boundary
\(c_{0},c_{1},c_{2},...\) : Expansions of pore-scale concentration
\(\mathbf{v}_{0},\mathbf{v}_{1},\mathbf{v}_{2},\cdots\) : Expansions of pore-scale velocity
**Acknowledgments**
Financial support for this work was provided by the Stanford University Petroleum Research Institute (SUPRI-B Industrial Affiliates Program). The Author is grateful to Professor Hamdi Tchelepi from the Energy Resources Engineering Department at Stanford University for reviewing the content of this paper and providing valuable feedback. The author declares no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2309.01033 | From constant to rough: A survey of continuous volatility modeling | In this paper, we present a comprehensive survey of continuous stochastic
volatility models, discussing their historical development and the key stylized
facts that have driven the field. Special attention is dedicated to fractional
and rough methods: we outline the motivation behind them and characterize some
landmark models. In addition, we briefly touch the problem of VIX modeling and
recent advances in the SPX-VIX joint calibration puzzle. | Giulia Di Nunno, Kęstutis Kubilius, Yuliya Mishura, Anton Yurchenko-Tytarenko | 2023-09-02T22:38:06Z | http://arxiv.org/abs/2309.01033v2 | # From constant to rough: A survey of continuous volatility modeling
###### Abstract
In this paper, we present a comprehensive survey of continuous stochastic volatility models, discussing their historical development and the key stylized facts that have driven the field. Special attention is dedicated to fractional and rough methods: without advocating for either roughness or long memory, we outline the motivation behind them and characterize some landmark models. In addition, we briefly touch the problem of VIX modeling and recent advances in the SPX-VIX joint calibration puzzle.
**Keywords:** stochastic volatility, implied volatility smile, rough volatility, fractional processes, VIX, option pricing
**MSC 2020:** 91-02; 91-03; 62P05; 60H10; 60G22; 91G15; 91G30; 91G80
**Acknowledgements.** The present research is carried out within the frame and support of the ToppForsk project nr. 274410 of the Research Council of Norway with the title STORM: Stochastics for Time-Space Risk Models. The third author is supported by The Swedish Foundation for Strategic Research, grant Nr. UKR22-0017, and by Japan Science and Technology Agency CREST, project reference number JPMJCR2115. We are grateful to Asmund Hausken Sande for his advice on a piece of data analysis as well as to Guido Gazzani for his feedback and excellent reference suggestions.
## 1 Introduction
Ultimately, finance evolves around the interplay between the expected return and risk. The established benchmark for the latter is _volatility_: loosely defined, this term refers to the degree
of variability of an asset \(S\) in time which is traditionally quantified as
\[\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(\log\frac{S(t)}{S(t-1)}\right)^{2}}. \tag{1.1}\]
The rationale behind using the _log_-returns in (1.1) is simple and can be traced back to P. Samuelson: the prices are strictly positive and hence it is reasonable to represent them as an exponential of some random variable. For example, if one models the returns as
\[\frac{S(t)}{S(t-1)}=e^{\xi_{t}},\quad t=1,...,T,\]
where \(\xi_{t}\sim\mathcal{N}(0,\sigma^{2})\), \(t=1,...,T\), are centered i.i.d. random variables, the value (1.1) becomes a strongly consistent estimator of the _standard deviation_ parameter \(\sigma\).
However, it turns out that the metric (1.1) is not as innocuous as it may seem at first sight. For example, if one computes (1.1) using samples from distinct time periods, the results can be _drastically_ different. In some sense, it is not a big surprise: there are periods on the market when prices exhibit minimal variation and other times when they fluctuate with extreme intensity (especially during crises). Moreover, as noted in e.g. [61, 75], the variability in asset prices cannot be fully explained only by changes in "fundamental" economic factors. The important conclusion from this observation is as follows: **volatility changes over time in an unpredictable manner**.
In this survey, we offer an overview of one approach aimed at addressing this phenomenon: _stochastic volatility modeling_. This methodology, which can be traced back to the early discrete-time model of Clark (1973) [58], has evolved into an enormous, deep, and highly engaging area of research. Contrary to the older surveys of Skiadopoulous (2001) [196], Shepard & Andersen (2009) [194] and Duong & Swanson (2011) [92], we put a significant emphasis on fairly recent topics of fractional and rough volatility. We tried to make the presentation as friendly as possible, with the hope that readers familiar with stochastic analysis will find most parts of the material accessible.
Before proceeding to the main part of the paper, let us make two important disclaimers.
* First of all, apart from more straightforward applications in, e.g., mean-variance portfolio theory originating from the paper [161] by H. Markowitz, the concept of volatility has a crucial role in another significant area within finance: _non-arbitrage pricing theory_ conceptualized by F. Black, M. Scholes, and R. Merton in [32, 166]. This theory has a unique and intricate perspective on volatility and has been the major driving force of volatility modeling for the past four decades. Therefore, in this survey, we consider the volatility predominantly from the option pricing viewpoint.
* Our focus primarily centers on _continuous stochastic volatility models in continuous time_. This means that we will not discuss various discrete-time models such as ARCH, GARCH or EARCH of Engle [101], Bollerslev [34] and Nelson [172] as well as multiple jump-diffusion models including the approaches of Duffie et. al. [90] or Barndorff-Nielsen & Shephard [21]. The omission of jumps seems to be the most notable gap in light of the realism this approach provides. However, given our desire to highlight fractional and rough volatility in more detail, the inclusion of discrete-time and jump-diffusion models with the level of detail they deserve would substantially inflate the size of this survey and result in a notable change of emphasis. Therefore, we leave these two topics for a separate work. Readers with a specific interest in discrete-time models are referred to [35, 111]; for jump models, see the
specialized books by Barndorff-Nielsen & Shepard [22], Gatheral [121], Rachev et. al. [177] or Tankov & Cont [66]. For other modeling viewpoints that are different from stochastic volatility, see the overviews given by Shiryaev [195] and Mariani & Florescu [160].
The paper is structured as follows. In Section 2, we present a historical context for stochastic volatility modeling as well as list several relevant stylized facts that are aimed to be reproduced. We devote separate attention to the notion of _implied volatility_ since the latter serves as the focal point of the field. Section 3 gives a detailed survey of continuous stochastic volatility models from the constant elasticity of variance (CEV) model proposed by J. C. Cox in 1975 to the modern fractional and rough volatility approaches. For each class of models, we provide the context of their applicability and characterize their advantages and shortcomings. In Section 4, we touch on selected aspects of a special topic in volatility modeling: VIX. After a brief detour in its computation, we describe two problems within VIX modeling: reproducing positive VIX skew and the SPX-VIX joint calibration problem. In addition, we analyze advances in continuous stochastic volatility modeling for solving these problems. Section 5 concludes our presentation.
## 2 Market volatility: empirical challenges and stylized facts
### From random walk to geometric Brownian motion
Probability theory and stochastic analysis have evolved into irreplaceable tools for economics and finance. The first attempts to model financial markets using probabilistic methods can be traced back to the French economist Jules Regnault1: as early as in 1863 [178], he proposed modeling stock prices with, what we call nowadays, symmetric random walks. However, the foundation for employing probability in finance is commonly attributed to another French mathematician, Louis Bachelier, and his 1900 dissertation titled "_Theorie de la speculation_" [16], where he considered market modeling and even derivative pricing in a very comprehensive manner. Naturally, the tools of stochastic calculus had not yet been formulated at that time, which means that the dissertation might lack some modern aspects of mathematical rigor. Nonetheless, if one translates Bachelier's reasoning into today's mathematical language2, it turns out that Bachelier essentially modeled stock price dynamics with a prototype of a standard Brownian motion, 5 years before A. Einstein and his famous paper [96].
Footnote 1: For more details about this lesser known and somewhat underestimated figure and his contributions to finance, the reader is referred to the paper [146].
Footnote 2: An excellent job in contextualizing Bachelier’s work from the perspective of modern mathematics was done by M. Davis and A. Etheridge in [17].
Unfortunately, Bachelier's ideas did not gain immediate recognition and went relatively unnoticed. Only more than 50 years after the publication of the original dissertation, statistician Jimmy Savage inexplicably stumbled upon this work and brought it to the attention of a number of researchers in economics. One of those researchers turned out to be Paul Samuelson who described Savage's finding in the foreword to the English translation of Bachelier's dissertation [17] as follows:
"_...Discovery or rediscovery of Louis Bachelier's 1900 Sorbonne thesis [...] initially involved a dozen or so postcards sent out from Yale by the late Jimmie Savage, a pioneer in bringing back into fashion statistical use of Bayesian probabilities. In paraphrase, the postcard's message said, approximately, 'Do any of you economist guys know about a 1914 French book on the theory of speculation by some French professor named Bachelier?_'
_Apparently I was the only fish to respond to Savage's cast. The good MIT mathematical library did not possess Savage's 1914 reference. But it did have something better, namely Bachelier's original thesis itself._
_I rapidly spread the news of the Bachelier gem among early finance theorists. And when our MIT PhD Paul Cootner edited his collection of worthy finance papers, on my suggestion he included an English version of Bachelier's 1900 French text..._"
Samuelson noticed that Bachelier's quasi-Brownian model could potentially take negative values, which is an unrealistic property for real-life prices. Therefore he proposed3 a simple but very useful modification of Bachelier's original approach: Brownian dynamics should be used to model _price logarithms_ rather than the _prices themselves_. After a small adjustment with a linear trend, Samuelson's model took the form of a _geometric_ or (_economic relative_, the term used by Samuelson himself [189]) _Brownian motion_
Footnote 3: Samuelson himself acknowledged (see e.g. his comments in a foreword to [17]) that the same idea was independently expressed by an astronomer M. Osborne in [175].
\[S(t)=S(0)\exp\left\{\left(\mu-\frac{\sigma^{2}}{2}\right)t+\sigma W(t)\right\}, \quad\mu\in\mathbb{R},\quad S(0),\ \sigma>0, \tag{2.1}\]
or, as a stochastic differential equation (SDE),
\[dS(t)=\mu S(t)dt+\sigma S(t)dW(t). \tag{2.2}\]
Note that the term _volatility_ obtains a very specific meaning within the model (2.1)-(2.2); namely, it refers to the parameter \(\sigma\). Such terminology is very intuitive and, moreover, agrees well with the metric (1.1) mentioned above: if \(0=t_{0}<t_{1}<...<t_{n}=T\) is a partition of the interval \([0,T]\), then
\[\frac{1}{T}\sum_{k=1}^{n}\left(\log\frac{S(t_{k+1})}{S(t_{k})}\right)^{2} \rightarrow\sigma^{2}\]
in \(L^{2}\) as the diameter of the partition \(\max_{k}|t_{k+1}-t_{k}|\to 0\) (see e.g. [180]). Due to its simplicity and tractability, the log-normal process (2.1)-(2.2), together with the corresponding notion of volatility, subsequently became a mainstream choice for stock price models for the next couple of decades. Even now, well-informed of multiple arguments against the geometric Brownian motion, practitioners still use it as a benchmark or a reliable "first approximation" model.
It is important to note that, in addition to the market model, Bachelier also considered the problem of option pricing and eventually derived an expression that can be called a precursor of the now famous _Black-Scholes formula_. Of course, his rationale was not based on the no-arbitrage principle and had a number of shortcomings, as it is often the case with pioneering works. The correction of those shortcomings became the subject of a number of studies in the 1960s, among which one can mention [37, 197, 199]. Samuelson himself also heavily contributed to that topic, see e.g. [188] or his paper [190] (in co-authorship with Robert Merton) where it was suggested to consider a warrant/option payoff as a function of the price of the underlying asset. One could argue that these works were just a few steps away from the breakthrough made by Black, Scholes, and Merton just a couple of years later.
Here it is worth paying attention to the fact that the options market remained relatively illiquid until the end of the 60s. The reason for that was the lack of a consistent pricing methodology, and serious investors regarded options as akin to gambling rather than worthy trading instruments. It is somewhat ironic that even Robert Merton himself, right in his pivotal article [166], wrote the following:
"_Because options are specialized and relatively unimportant financial securities, the amount of time and space devoted to the development of a pricing theory might be questioned..._"
However, in 1968, a demand for that type of contract suddenly arose from the Chicago Board of Trade. The organization observed a significant decline in commodity futures trading on its exchange and therefore opted to create additional instruments for investors. They settled on options, and, in 1973, the Chicago Board of Options Exchange commenced its activity. Precisely in that year, two revolutionary papers appeared: "_The pricing of options and corporate liabilities_" [32] by Fischer Black and Myron Scholes and "_Theory of rational option pricing_" [166] by Robert Merton4. The ideas of Black, Scholes and Merton revolutionized mathematical finance and enjoyed empirical success: Stephen Ross, for instance, claimed in 1987 [185] that
Footnote 4: As a side note, the publication of the Black and Scholes paper was far from a smooth process: in 1987 [31], Black recalled that the manuscript was rejected first by the _Journal of Political Economy_ and then by the _Review of Economics and Statistics_. The paper was published only after Eugene Fama and Merton Miller personally recommended the _Journal of Political Economy_ to reconsider its decision (in the meanwhile, Robert Merton showed a great deal of academic integrity by delaying the publication of his own article so that Black and Scholes would be the first).
"_When judged by its ability to explain the empirical data, option pricing theory is the most successful theory not only in finance, but in all of economics._"
### Implied volatility and the smile phenomenon
The main result of Black, Scholes, and Merton can be formulated as follows: if a stock follows the model (2.1)-(2.2), then, under some assumptions, the discounted _no-arbitrage price_ of a standard European call option \(C^{\text{B-S}}=C^{\text{B-S}}(t,S)\) evolves as a function of the current time \(t\) and current price \(S\) and must satisfy a partial differential equation, known now as the _Black-Scholes formula_, of the form
\[\frac{\partial C^{\text{B-S}}}{\partial t}+\frac{1}{2}\sigma^{2}S^{2}\frac{ \partial^{2}C^{\text{B-S}}}{\partial S^{2}}+rS\frac{\partial C^{\text{B-S}}}{ \partial S}-rC^{\text{B-S}}=0 \tag{2.3}\]
with a boundary condition5
Footnote 5: In what follows, we will utilize the standard notation \((x)_{+}:=\max\{x,0\}\).
\[C^{\text{B-S}}(T,S)=(S-K)_{+}, \tag{2.4}\]
where \(r\) denotes the instantaneous interest rate that is assumed to be constant, \(T\) is the maturity date of the option and \(K\) is its exercise price. As mentioned above, the Black-Scholes-Merton rationale relies on a number of rather abstract assumptions that do not align with real-world market conditions. In particular, their reasoning required specific price dynamics, the absence of transaction costs, and the capacity to buy and sell any quantity of assets. However, the inability to perfectly replicate the reality does not necessarily carry significant implications. Black, Scholes, and Merton themselves were aware of the limitations of their approach: for example, [104] quotes Fisher Black on this subject:
"_Yet that weakness is also its greatest strength. People like the model because they can easily understand its assumptions. The model is often good as a first approximation, and if you can see the holes in the assumptions you can use the model in more sophisticated ways._"
What really mattered was the successful empirical performance of the vanilla Black-Scholes-Merton model; as it was noted by J. Wiggins, one of the pioneers of continuous-time stochastic
volatility modeling, "_given the elegance and tractability of the Black-Scholes formula, profitable application of alternate models requires that economically significant valuation improvements can be obtained empirically_" [200].
However, after the Black Monday market crash in 1987, it became evident that there were glaring flaws within the log-normal paradigm prompting the need for rectification. Jackwerth & Rubinstein [143] described the problem as follows:
"_Following the standard paradigm, assume that stock market returns are lognormally distributed with an annualized volatility of 20% (near their historical realization). On October 19, 1987, the two month S&P 500 futures price fell 29%. Under the lognormal hypothesis, this is a -27 standard deviation event with probability \(10^{-160}\). Even if one were to have lived through the entire 20 billion year life of the universe and experienced this 20 billion times (20 billion big bangs), that such a decline could have happened even once in this period is a virtual impossibility._"
Evidently, experiencing "_virtually impossible_" price falls which rendered investors insolvent was already a good argument to reassess financial modeling approaches. Yet, apart from that single shock that, in principle, could be attributed to a single anomaly, there was another consistent phenomenon that manifested itself in the aftermath of Black Monday: **the volatility smile**.
In order to explain the problem, let us first discuss the parameters of the Black-Scholes model in more detail. Note that the sole value in the Black-Scholes formula (2.3)-(2.4) that is not directly observable is the volatility parameter \(\sigma\): indeed, the maturity date \(T\) and exercise price \(K\) are given in the specifications of the given option contract whereas the price \(S(t)\) can be read from the market. The volatility \(\sigma\) is unknown and necessitates some form of estimation based on data. In 1976, Latane & Rendelman [153] proposed an elegant method to address this problem. First of all, note that the equation (2.3)-(2.4) has an explicit solution of the form
\[C^{\text{B-S}}(t,S(t))\\ =S(t)\Phi\left(\frac{\log\frac{e^{r(T-t)}S(t)}{K}+\frac{\sigma^{ 2}}{2}(T-t)}{\sigma\sqrt{T-t}}\right)-Ke^{-r(T-t)}\Phi\left(\frac{\log\frac{e ^{r(T-t)}S(t)}{K}-\frac{\sigma^{2}}{2}(T-t)}{\sigma\sqrt{T-t}}\right), \tag{2.5}\]
where \(\Phi(x):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{y^{2}}{2}}dy\). For simplicity, let us take \(t=0\) and re-parametrize (2.5) as
\[C^{\text{B-S}}(T,S(0),K,\sigma):=S(0)\Phi\left(\frac{\log\frac{e^{rT}S(0)}{K} +\frac{\sigma^{2}}{2}T}{\sigma\sqrt{T}}\right)-Ke^{-rT}\Phi\left(\frac{\log \frac{e^{rT}S(0)}{K}-\frac{\sigma^{2}}{2}T}{\sigma\sqrt{T}}\right). \tag{2.6}\]
Take the _actual_ market price \(C(T,K)\) of the corresponding option with payoff \(K\) and maturity date \(T\) and notice that, since \(C^{\text{B-S}}(T,S(0),K,\sigma)\) is supposed to coincide with \(C\), the volatility \(\sigma\) can be obtained from the equation
\[C^{\text{B-S}}(T,S(0),K,\sigma)-C(T,K)=0. \tag{2.7}\]
The solution \(\widehat{\sigma}=\widehat{\sigma}(T,K)\) to this equation is called the _implied_ volatility. Note that, the literature (see e.g. [154, p. 244]), \(\widehat{\sigma}\) is often written as a function \(\widehat{\sigma}_{\text{log-m}}\) of time to maturity \(T\) and _log-moneyness_\(\kappa:=\log\frac{K}{e^{rT}S(0)}\), i.e.
\[\widehat{\sigma}_{\text{log-m}}(T,\kappa):=\widehat{\sigma}(T,S(0)e^{\kappa+ rT}).\]
In what follows, we will slightly abuse the notation and omit the subscript "log-m", using \(\widehat{\sigma}(T,K)\) for the function of strike and \(\widehat{\sigma}(T,\kappa)\) for the function of log-moneyness.
If the stock price model (2.1)-(2.2) indeed corresponds to reality well enough, \(\widehat{\sigma}(T,K)\) should be approximately constant for options with the same underlying asset but differing maturities \(T\) and strikes \(K\). However, this is not the case! For instance, \(\widehat{\sigma}(T,K)\) turns out to change with \(T\) for fixed \(K\) (see Fig. 1 below).
A straightforward modification of (2.1), considered by Merton in his original paper [166], appears to accommodate this issue. Namely, if the volatility \(\sigma=\sigma(t)\) is assumed to be a deterministic function of time, one can obtain a version of (2.5) of the form
\[C^{\text{B-S}}(t,S(t)) =S(t)\Phi\left(\frac{\log\frac{e^{r(T-t)}S(t)}{K}+\frac{1}{2} \int_{t}^{T}\sigma^{2}(s)ds}{\sqrt{\int_{t}^{T}\sigma^{2}(s)ds}}\right)\] \[\quad-Ke^{-r(T-t)}\Phi\left(\frac{\log\frac{e^{r(T-t)}S(t)}{K}- \frac{1}{2}\int_{t}^{T}\sigma^{2}(s)ds}{\sqrt{\int_{t}^{T}\sigma^{2}(s)ds}}\right)\] \[:=S(t)\Phi\left(\frac{\log\frac{e^{r(T-t)}S(t)}{K}+\frac{1}{2} \overline{\sigma}^{2}(t,T)T}{\overline{\sigma}(t,T)\sqrt{T}}\right)\] \[\quad-Ke^{-r(T-t)}\Phi\left(\frac{\log\frac{e^{r(T-t)}S(t)}{K}- \frac{1}{2}\overline{\sigma}^{2}(t,T)T}{\overline{\sigma}(t,T)\sqrt{T}}\right),\]
where \(\overline{\sigma}^{2}(t,T):=\frac{1}{T}\int_{t}^{t+T}\sigma^{2}(s)ds\). Then the counterpart (at \(t=0\)) of the equation (2.7) gets the form
\[C^{\text{B-S}}(T,S(0),K,\overline{\sigma}(0,T))-C(T,K)=0,\]
so its solution \(\widehat{\sigma}(T,K)\) is supposed to represent \(\overline{\sigma}(0,T)\) and hence it is allowed to vary with \(T\) for fixed \(K\). One may even argue that it is actually _reasonable_ to assume that \(\sigma\) changes with time: as noted in [83, p. 144], "_there is nothing inconsistent about expecting high volatility this year and low volatility next year_".
Figure 1: Variation of implied volatility with time to maturity \(T\) for a fixed strike \(K\). The values were calculated using S&P500 (SPX) option prices on May 3, 2023, retrieved from _Yahoo! Finance_.
Regarding the variation in \(K\) for fixed \(T\), the implied volatility remained relatively flat6 before the above-mentioned Black Monday crash in 1987. Starting from that (terrible) date, investors observed notable variability of the implied volatility in \(K\) characterized by distinct convex patterns (see Fig. 2 below as well as [77]) which were eventually called "_volatility smiles_" or "_volatility smirks_". Such behavior was consistent, had a direct adverse impact on the empirical performance of the Black-Scholes formula, and could not be explained by the price dynamics (2.1)-(2.2).
Footnote 6: More precisely, some dependence was present, but it was subtle enough to be disregarded, see e.g. the discussion in [83, Chapter 1].
Figure 2 also reveals two important typical characteristics of implied volatility smiles related to the change of the smile shape with \(T\).
* First of all, note that the smile gradually flattens out as the time to maturity \(T\) increases. However, this decrease is fairly slow: for example, on Fig. 2, the curvature is noticeable 262 days (Fig. 2g) and even 962 days (Fig. 2h) before maturity. The latter effect turns out to require special treatment; in this regard, see Subsection 3.3 below as well as [50, 59, 117].
* Second, observe the behavior of the smile _at-the-money_, i.e. when \(K=S(0)e^{rT}\) (or, in terms of log-moneyness, when \(\kappa=0\)): as \(T\) gets smaller, the smile at-the-money tends to become _steeper_. Figure 3 characterizes this phenomenon in more detail: for each \(T\), we took implied volatilities of 7 options with strikes closest to \(S(0)e^{rT}\), performed the least squares linear fit to them and depicted the absolute values of the resulting slopes on Fig. 3a. It turns out that the variation of absolute slopes with \(T\) for shorter maturities seems to be well-described by the _power law_\(CT^{-\frac{1}{2}+H}\) with \(H\approx 0\), _at least as the first approximation_ (see the discussion at the end of Subsection 3.3 below as well as [133] and [81]). As an illustration, red lines on Fig. 3a and 3b depict power law fits for the SPX implied volatility slopes corresponding to May 3, 2023; for our dataset, \(H=0.06226572\). A similar type of behavior is reported by e.g. Fouque, Papanicolaou, Sircar & Solna (2004) [109], Gatheral, Jaisson & Rosenbaum (2018) [123] and, more recently, by Delemotte, De Marco & Segonne (2023) [81].
In principle, a "_perfect_" model should capture7 the shape of the entire _implied volatility surface_\((T,K)\mapsto\widehat{\sigma}(T,K)\) (or, equivalently, \((T,\kappa)\mapsto\widehat{\sigma}(T,\kappa)\), see Fig. 4). For example, if one wants to reproduce the power law described above, one may demand the model-generated _at-the-money volatility skew_
Footnote 7: By “_capturing the volatility surface_”, we mean the established benchmark for assessing the performance of a model by its ability to reproduce the shape of \(\widehat{\sigma}(T,K)\) or, equivalently, \(\widehat{\sigma}(T,\kappa)\). As a rule, testing adheres to the following algorithm:
\[\Psi(T):=\left|\frac{\partial}{\partial\kappa}\widehat{\sigma}(T,\kappa) \right|_{\kappa=0} \tag{2.8}\]
to have the power-law asymptotics \(O(T^{-\frac{1}{2}+H})\), \(T\to 0\). However, as we will see later, the task of constructing a model that reproduces all of the stylized facts simultaneously - both for long and short maturities - is not straightforward at all.
Figure 2: Implied volatility smiles for different times to maturity. The values were calculated using S&P500 (SPX) option prices on May 3, 2023, retrieved from _Yahoo! Finance_. Vertical red lines correspond to the at-the-money level \(K=S(0)e^{rT}\).
Figure 4: Implied volatility surface \((T,\kappa)\mapsto\widehat{\sigma}(T,\kappa)\). The values were calculated using S&P500 (SPX) option prices on May 3, 2023, retrieved from _Yahoo! Finance_.
Figure 3: Absolute values of at-the-money implied volatility slopes on regular (a) and log-log (b) scales. Blue dots denote the data and red lines denote the regression fits \(e^{1.5753587}T^{-0.4377343}\) (a) and \(1.5753587-0.4377343\log T\) (b). The values were calculated using S&P500 (SPX) option prices on May 3, 2023, retrieved from _Yahoo! Finance_.
### Fat tails, leverage, clustering and long memory
The empirical evidence of volatility smiles makes a spectacular point against the geometric Brownian dynamics (2.1)-(2.2). However, it certainly does not stand alone as the sole argument in this regard. In fact, objections to the log-normality of prices appeared as early as the log-normal model itself. In this section, we briefly enumerate several pivotal stylized facts concerning log-returns and the volatility of financial time series. For a more detailed discussion of this topic, we also refer our readers to [125, Section 2.2], [112, Section 3], well-known survey articles [60, 61] or the book [204].
Fat tails and non-Gaussian distribution of log-returns.First of all, statistical analysis of price returns pointed out that their distribution has fat tails, i.e. the probabilities of extreme values tend to be significantly higher than predicted by log-normal models. In this context, we mention empirical studies of Mandelbrot [158] (1963) and Fama [103] (1965); see also [67] for an early review. In response to this phenomenon, Mandelbrot proposed modeling price log-returns with \(\alpha\)-stable distributions, which are characterized by infinite variance. However, this idea seems to contradict subsequent studies (such as [60]) which suggest that the variance of returns should be finite. Overall, [60] gives the following summary regarding the properties of the "_true_" log-returns distribution: it tends to be non-Gaussian, sharp-peaked, and heavy-tailed. Clearly, there are multiple parametric models satisfying these three properties and one can mention log-return models based on inverse Gaussian distributions [20], generalized hyperbolic distributions [176], truncated stable distributions [38, 65] and so on. For a more recent analysis of the topic, see also [102].
Leverage and Zumbach effect.Another interesting phenomenon not grasped by the geometric Brownian motion is the so-called _leverage effect_: negative correlation between variance and returns of an asset. This empirical artefact, initially noticed by Black [30] and then studied in more detail by Christie [56], Cheung & Ng [55] and Duffee [89], was explained by Black himself as follows: a decrease in a stock price results in a drop of firm's equity value and hence increases its financial _leverage_8. This, according to Black, makes the stock riskier and hence more volatile. Interestingly, the name "leverage effect" stuck due to this explanation although subsequent research [13, 105, 140] pointed out that this correlation may not be connected to the leverage at all. Zumbach in [204, Section 3.9.1] gives a different interpretation of this negative correlation: downward moves of stock prices are generally perceived as unfavorable, triggering many sales and thereby increasing the volatility. Conversely, upward moves do not result in such drastic changes in investors' portfolios, given that the majority of market participants already hold long positions. Zumbach [203] also studied another effect of the same nature now recognized as the _Zumbach effect_: pronounced _trends_ in stock price movement, irrespective of sign, increase the subsequent volatility. This effect stems from the fact that large price moves motivate investors to modify their portfolios, unlike scenarios where prices oscillate within a narrower range.
Footnote 8: In this context, the term “leverage” means the company’s debt relative to its equity.
Volatility clustering.The next empirical contradiction to the Black-Scholes-Merton framework arises from a direct econometric analysis of financial time series uncovering clusters of high and low volatility episodes. As noted by Mandelbrot [158], "_large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes_" (see Fig. 5b). There are several ways to interpret this phenomenon. Some authors (see e.g. [107, Chapter 3]) identify volatility clustering with _mean reversion_: in loose terms, this term refers to volatility's tendency to "return back" to the mean level of its long-run distribution (see e.g.
[107, Section 2.3.1 and Chapter 3]), with possible extended periods of staying above or below the latter. Another - and, perhaps, more nuanced - way to quantify clustering lies in analyzing the autocorrelation function of absolute log-returns (see e.g. an excellent review [62] on the topic), i.e.
\[\mathrm{corr}(|R(t)|,|R(t+\tau)|), \tag{2.9}\]
where the log-return \(R(t):=\log\left(\frac{S(t+\Delta)}{S(t)}\right)\) is defined for some given time scale \(\Delta\) (which may vary between a fraction of a second for tick data to several days). Multiple empirical studies [35, 36, 40, 61, 62, 65, 87, 88, 128] report that the autocorrelation function (2.9) is consistently positive and, moreover, shows signs of slow decay of the type \(O(\tau^{-\beta})\), \(\tau\to\infty\), with an exponent \(0<\beta\leq 0.5\).
Long range dependence.The \(O(\tau^{-\beta})\) decay of (2.9) as \(\tau\to\infty\) requires a separate discussion due to its profound implications. For \(\beta\in(0,1)\), such behavior is often referred to as the _long-range dependence_ (see e.g. [26, 187]), and if its statistical significance is established, it indicates the _presence of memory_ on the market. Notably, confirming the long-range dependence poses a substantial challenge: by its nature, memory manifests itself when \(\tau\to\infty\) which raises the concern that any statistical estimation procedure of (2.9) might be inconsistent due to e.g. the non-stationarity of financial time series over extended time periods (see e.g. discussion in [167, Section 1.4]). Moreover, as it is shown in [123, Section 4], misspecification of the model in the estimation procedure can result in spurious long memory conclusion, even if the "_correct_" model does not exhibit long memory at all. Nevertheless, several studies still report the presence of long range dependence on financial markets. For instance, Willinger et. al. [201] apply the so-called _rescaled range (\(R/S\)) analysis_ technique to the CRSP (Center for Research in Security Prices)
Figure 5: Daily split-adjusted values of the S&P500 (SPX) index (a) and the corresponding logarithmic returns (b). Note that the amplitude of fluctuations in log-returns tends to form clusters over time; the period of the highest variation in March 2020 corresponds to the shock caused by the COVID-19 pandemic. The data is retrieved from _Yahoo! Finance_.
daily stock time series and find some weak9 evidence of memory in the data. Lobato & Velasco [157] analyze volatility in connection to trading volumes and find that both of these financial characteristics exhibit the same degree of long memory. Another interesting point comes from the analysis of the implied volatility surface: Comte & Renault [59] noticed that the decrease of the smile amplitude as time to maturity increased was much slower than many advanced market models predicted. They argued that such an effect could be mimicked by having long memory in volatility (see also a simulation study [117] that directly confirms this claim).
Footnote 9: As the authors write, “_...we find empirical evidence of long-range dependence in stock price returns, but because the corresponding degree of long-range dependence [...] is typically very low [...] the evidence is not absolutely conclusive_”.
## 3 Continuous models of volatility
As previously discussed, the real-world data does not support the standard log-normal framework of (2.1)-(2.2). Luckily, developments in the option pricing theory subsequent to the seminal Black, Scholes and Merton papers allowed for some decent flexibility in terms of the choice of price models. Namely, we refer to the gradual translation of the Black-Scholes-Merton approach into the language of martingale theory which evolved in the celebrated _Fundamental Theorem of Asset Pricing_ - the result which connects non-arbitrage pricing and the existence of equivalent local martingale measures. In this regard, we mention the early research of Ross [184], Harrison & Kreps [138], Harrison & Pliska [139], Kreps [149] as well as subsequent seminal works of Delbaen & Schachermayer [78, 79, 80] (see also [191] for a detailed historical overview on the subject). This line of research eventually evolved into a general theory allowing for quite a broad variety of price models to choose from - hence giving researchers all the necessary tools to adjust the classical model (2.1) to account for volatility smiles and all other empirical inconsistencies.
As highlighted in Section 2, the particular problem of (2.2) lies in the fact that the volatility parameter \(\sigma\) cannot be constant: it varies with time in an unpredictable manner, is correlated with the current price level, and has clusters of low and high values with some evidence of long-range dependence. One of the possible ways to treat this problem is by modifying (2.2) by introducing an appropriate _stochastic volatility process_\(\{\sigma(t),\ t\geq 0\}\) instead of the deterministic coefficient \(\sigma\), i.e. by taking10
Footnote 10: In general, it is reasonable to treat the drift coefficient \(\mu=\{\mu(t),\ t\geq 0\}\) as a stochastic process as well. However, we will not cover modeling drift in this survey.
\[dS(t)=\mu(t)S(t)dt+\sigma(t)S(t)dW(t). \tag{3.1}\]
Naturally, now the problem comes down to the selection of a particular process \(\{\sigma(t),\ t\geq 0\}\) that, once inserted in (3.1), can best reproduce the behavior of real-world prices. In this section, we list prominent continuous volatility modeling approaches and briefly characterize their performance.
### Local volatility
CEV model.The first cluster of models covered in this survey are the so-called _local_ or _deterministic volatility models_. The core idea behind this approach is the assumption that the process \(\sigma=\{\sigma(t),\ t\geq 0\}\) in (3.1) is a non-random function of current price and time, i.e. \(\sigma(t)=\sigma(t,S(t))\). Perhaps the first model of this kind was the _constant elasticity of variance (CEV)_ model suggested by Cox in his 1975 note11[69]. CEV model assumes that \(\sigma(t,S(t))=\theta S^{\beta}(t)\), t
i.e. (3.1) takes the form
\[dS(t)=\mu S(t)dt+\theta S^{1+\beta}(t)dW(t), \tag{3.2}\]
where the parameter \(\beta\) is called the _elasticity parameter_. Initially, Cox considered \(\beta\in[-1,0)\): in such case, the increase in \(S(t)\) decreases the value of \(\sigma(t,S(t))=\theta S^{\beta}(t)\) and vice versa hence reproducing the leverage effect. The case \(\beta>0\), more inherent to commodity prices, was discussed in [100]. For more details on the model (3.2), we also recommend a survey article [156].
As a final remark regarding the CEV model, we note that the SDE (3.2) exhibits substantially different behavior depending on the exact value of \(\beta\). If \(\beta\neq 0\), the diffusion coefficient in (3.2) is not Lipschitz and hence the existence and uniqueness of the solution cannot be guaranteed by the classical result. For \(\beta\in\left[-\frac{1}{2},0\right)\), the solution exists by the celebrated Yamada-Watanabe theorem (see e.g. [202]), but the cases \(\beta\in\left(-1,-\frac{1}{2}\right)\) and \(\beta>0\) require a separate and very careful treatment. For more details in that regard, we refer the reader to [54, Chapter 5] as well as to the discussion in [10].
Local volatility and Dupire formula.The general local volatility was initially considered by Derman & Kani [82] and Dupire [94]: both assumed the risk-neutral dynamics of the form
\[dS(t)=rS(t)dt+\sigma(t,S(t))S(t)dW(t), \tag{3.3}\]
where \(r\) denotes the interest rate. Note that this model is way more intricate and flexible than it seems at first glance. Unlike the CEV approach, (3.3) does not specify any parametric form of the function \(\sigma\): instead, the no-arbitrage principle and properties of diffusions allow to fully recover \(\sigma\) from option prices using the celebrated Dupire formula (for its derivation, see the original paper [94] or [27, Subsection 2.2.1]): assuming no dividends,
\[\sigma(t,S)=\sqrt{2\frac{\frac{\partial C}{\partial T}(T,K)+rS\frac{\partial C }{\partial K}(T,K)}{S^{2}\frac{\partial^{2}C}{\partial K^{2}}(T,K)}}\Bigg{|}_ {(T,K)=(t,S)}, \tag{3.4}\]
where \(C(T,K)\), as before, denotes the price at \(t=0\) of the European call-option with payoff \(K\) and maturity \(T\). It is possible to prove (see e.g. [27, Subsection 2.2.2]) that no-arbitrage conditions imply that the expression in the right-hand side of (3.3) is well-defined. Moreover, recalling that
\[C(T,K)=C^{\text{B-S}}(T,S(0),K,\widehat{\sigma}(T,K)), \tag{3.5}\]
where \(C^{\text{B-S}}\) is defined by (2.6), one can plug the right-hand side of (3.5) to (3.4) in place of \(C(T,K)\) and obtain the theoretical relation between the implied volatility \(\widehat{\sigma}\) and local volatility function \(\sigma\) that can then be used to accurately calibrate (3.3) to the current implied volatility smiles (see e.g. [27, Section 2.3]). For more details on local volatility models, we refer the reader to [27, Chapter 2].
Shortcomings of local volatility.As mentioned above, local volatility models are perhaps the simplest and the most straightforward generalizations of the original log-normal model (2.1)-(2.2). In addition to their simplicity and tractability, they inherit another convenient property of the classical geometric Brownian motion: one can show that the market produced by (3.3) is complete, i.e. any option with \(S\) as an underlying asset can be perfectly hedged by a self-financing portfolio composed exclusively of \(S\) and a riskless asset. However, despite the obvious mathematical attractiveness of this feature, some sources actually view completeness as a disadvantage from the modeling viewpoint. As noted in [66, Chapter 10],
_"While absence of arbitrage is both a reasonable property to assume in a real market and a generic property of many stochastic models, market completeness is neither a financially realistic nor a theoretically robust property. From a financial point of view, market completeness implies that options are redundant assets12 and the very existence of the options market becomes a mystery, if not a paradox in such models."_
Footnote 12: Here “redundant” is understood in the sense that options are perfectly replicable in a complete market.
This observation suggests that local volatility models may be too restrictive to grasp the complexity of the stock dynamics. And, importantly, such a claim is supported by multiple studies. For example, Buraschi & Jackwerth [43] utilize a formal statistical testing procedure to check whether options are indeed "_redundant_", and their findings strongly reject this hypothesis questioning the viability of the model (3.3). Another empirical study by Dumas et. al. [91] analyzes the predictive performance of (3.3) and concludes that its out-of-sample results are no better than ones of the standard Black-Scholes approach. Gatheral et. al. [123] indicates that the dynamics (3.3) tends to generate future volatility surfaces "_completely unlike those we observe_". Ait-Sahalia & Jacod [14] utilize a statistical test based on local time to check whether stock prices follow13 the SDE of the form
Footnote 13: In fact, they consider even more general framework with jumps and microstructure noise and still reject the local volatility hypothesis.
\[dS(t)=a(S(t))dt+b(S(t))dW(t),\]
where \(a\) and \(b\) are some deterministic functions, and report a "_clear rejection_" of such model, advocating for alternative approaches. For more details on the performance of local volatility models, see also an overview in [196, Section 4].
### Stochastic volatility models
As indicated above, local volatility models can be subject to criticism as they might not possess sufficient flexibility to grasp the market in its full complexity. An alternative approach, known as _stochastic volatility_14, involves modeling with a dedicated stochastic process that exhibits only partial dependence with \(S\). For example, \(\sigma=\{\sigma(t),\ t\geq 0\}\) may be another diffusion driven by a separate Brownian motion \(B=\{B(t),\ t\geq 0\}\) correlated with \(W\). Naturally, within this framework, there are countless candidates for \(\sigma\), and choosing a particular one that characterizes the market well is a complex task. However, stylized facts presented in Subsection 2.3 offer several starting points that are useful to keep in mind.
Footnote 14: The term “stochastic volatility” may seem too vague given that the volatility \(\sigma(t,S(t))\) employed in local models is also a stochastic process. Nevertheless, in the literature, the term “stochastic” generally refers to approaches that treat volatility separately from the price dynamics.
* To begin with, given the nature of volatility, it seems reasonable to model it with a non-negative process. In practice, this can be achieved by modeling log volatility and then taking an exponential. Alternatively, one may utilize non-negative diffusions such as Bessel-type processes (see e.g. [180, Chapter XI]).
* Selecting an appropriate dependence structure between \(B\) and \(W\) from (3.1) provides all the necessary tools to account for the leverage effect: a typical assumption is \(\mathbb{E}[W(t)B(t)]=\rho t\) with \(-1<\rho<0\).
* Another property shared by multiple stochastic volatility models is _mean-reversion_. As mentioned above in Subsection 2.3, this behavior seems to be a common trait for real-life volatility. Moreover, a thoroughly chosen mean-reverting process can, to some extent,
mimic the clustering effect (see e.g. [107, Chapter 3]). A common approach to introduce mean reversion to the dynamics is to take a drift term of the form \[\theta_{1}(\theta_{2}-\sigma(t))\] in the SDE for volatility. This drift "_pulls \(\sigma\) back_" to the level \(\theta_{2}\) whenever \(\sigma\) deviates from it. The parameter \(\theta_{1}\) calibrates the speed of mean-reversion.
Starting from the 1987 pioneering works of Hull & White [142], Wiggins [200] and Scott [193], multiple generations of approaches and numerous models have emerged in the literature. While not even attempting to provide an exhaustive list, we present below a selection of notable contributions.
* Hull & White [142] assumed that the squared volatility \(\sigma^{2}=\{\sigma^{2}(t),\ t\geq 0\}\) is itself a geometric Brownian motion, i.e. price and volatility satisfy stochastic differential equations of the form \[dS(t) =\mu S(t)dt+\sigma(t)S(t)dW(t),\] \[d\sigma^{2}(t) =\theta_{1}\sigma^{2}(t)dt+\theta_{2}\sigma^{2}(t)dB(t)\] respectively, where \(B\) and \(W\) are two Brownian motions that are allowed to be correlated to account for the leverage effect. Note that the volatility process is positive but not mean-reverting.
* Wiggins [200] suggested a slightly more general dynamics of the form \[dS(t) =\mu S(t)dt+\sigma(t)S(t)dW(t),\] \[d\sigma(t) =f(\sigma(t))dt+\theta\sigma(t)dB(t).\]
* Scott [193] and Stein & Stein [198] considered the volatility to be an Ornstein-Uhlenbeck process, i.e. \[dS(t) =\mu S(t)dt+\sigma(t)S(t)dW(t),\] (3.6) \[d\sigma(t) =\theta_{1}(\theta_{2}-\sigma(t))dt+\theta_{3}dB(t).\] Note that \(\sigma\) is not positive: Ornstein-Uhlenbeck process is Gaussian and hence can take negative values with positive probability. In practice, this issue is treated by either taking the absolute value of \(\sigma\) or introducing a reflecting barrier to the volatility dynamics [192].
* Heston [141] introduced the SDE of the form \[dS(t) =\mu S(t)dt+\sqrt{\sigma(t)}S(t)dW(t),\] (3.7) \[d\sigma(t) =\theta_{1}(\theta_{2}-\sigma(t))dt+\theta_{3}\sqrt{\sigma(t)}dB( t).\] In this model, now commonly referred to as the _Heston model_, the volatility follows the so-called _Cox-Ingersoll-Ross_ or _square root_ process (see also [70]) which enjoys strict positivity provided that \(2\theta_{1}\theta_{2}\geq\theta_{3}^{2}\). Benhamou et. al. [24] considered a modification of (3.7) of the form \[dS(t) =\sqrt{\sigma(t)}S(t)dW(t),\] \[d\sigma(t) =\theta_{1}(\theta_{2}(t)-\sigma(t))dt+\theta_{3}(t)\sqrt{\sigma( t)}dB(t),\] \[d\langle W,B\rangle_{t} =\rho(t)dt,\]
with time-dependent \(\theta_{2}\), \(\theta_{3}\) and correlation \(\rho\) to account for structural changes on the market. Another modification of (3.7) was considered in [127]: there, the discounted price dynamics is assumed to follow \[dS(t) =\sqrt{\sigma(t)}S(t)dW(t),\] \[d\sigma(t) =\theta_{1}(Z_{t})(\theta_{2}(Z_{t})-\sigma(t))dt+\theta_{3}(Z_{t })\sqrt{\sigma(t)}dB(t),\] \[d\langle W,B\rangle_{t} =\rho(Z_{t})dt,\] where \(Z\) is a homogeneous continuous-time Markov chain that represents market switching between different regimes.
* Melino & Turnbull [163, 164] considered the model of the form \[\begin{split} dS(t)&=(a+bS(t))dt+\sigma(t)S^{ \beta}dW(t),\\ d\log(\sigma(t))&=(\theta_{1}+\theta_{2}\log( \sigma(t)))\,dt+\theta_{3}dB(t),\end{split}\] (3.8) where \(\beta\in\left[\frac{1}{2},1\right]\), i.e. the log volatility is a mean-reverting Ornstein-Uhlenbeck process and \(S\) can be regarded as a combination of the CEV equation (3.2) with stochastic volatility.
* Hagan et. al. [136] proposed the stochastic _alpha-beta-rho (SABR) model_ of the form \[\begin{split} dS(t)&=\sigma(t)S^{\beta}(t)dW(t), \\ d\sigma(t)&=\alpha\sigma(t)dB(t).\end{split}\] (3.9) where \(\alpha\geq 0\), \(0\leq\beta\leq 1\) and \(\mathbb{E}[W(t)B(t)]=\rho t\), \(-1<\rho<1\). Note that the case \(\beta\in\left(0,\frac{1}{2}\right)\) is quite intricate from the mathematical perspective and requires a special treatment; see [54] for more details.
* Lewis [155] and, later, Carr & Sun [49] (see also Baldeaux & Badran [19, Section 2]) considered the so-called _3/2-model_ \[\begin{split} dS(t)&=S(t)\sqrt{\sigma(t)}dW(t), \\ d\sigma(t)&=\kappa\sigma(t)(\theta-\sigma(t))dt+ \varepsilon\sigma^{3/2}(t)dB(t).\end{split}\] (3.10) The motivation behind the model partially comes from the statistical analysis of volatility models performed by Jawaheri [145] and Bakshi et. al. [18] as well as from its ability to replicate the VIX skew (see Section 4 for further details). For more details on the existence and properties of the solution to (3.10), we refer the reader to [54].
* Gatheral [122] presented the double CEV dynamics of the form \[\begin{split} dS(t)&=S(t)\sqrt{\sigma(t)}dW(t), \\ d\sigma(t)&=-\kappa(\sigma(t)-\sigma^{\prime}(t)) dt+\eta\sigma^{\alpha}(t)dB(t),\\ d\sigma^{\prime}(t)&=-\kappa^{\prime}(\sigma^{ \prime}(t)-\theta)dt+\eta^{\prime}\sigma^{\prime\beta}(t)dB^{\prime}(t),\end{split}\] (3.11) where \(\alpha,\beta\in\left[\frac{1}{2},1\right]\);
* Fouque et. al. [108] propose a _multiscale_ stochastic volatility. Their approach lies in modeling volatility with two diffusions, one fluctuating on a fast "time scale", and the
other fluctuating on a "slow" time scale. Namely, their model takes the form
\[dS(t) =\mu S(t)dt+\sigma(t)S(t)dW(t),\] \[\sigma(t) =f(Y(t),Z(t)),\] \[dY(t) =\frac{1}{\varepsilon}(\theta_{1}-Y(t))dt+\frac{\theta_{2}\sqrt{ 2}}{\sqrt{\varepsilon}}dB_{1}(t)\] \[dZ(t) =\delta c(Z(t))+\sqrt{\delta}g(Z(t))dB_{2}(t),\]
where \(f\) is a bounded positive function, \(\delta,\varepsilon>0\) are assumed to be small, \(Y\) is the fast scale volatility factor, i.e. a fast mean-reverting process, and \(Z\) is the slow scale volatility factor. Later in [110], Fouque & Saporito employ the same multiscale paradigm to model the Heston-type stochastic volatility-of-volatility parameter:
\[dS(t) =S(t)\sqrt{\sigma(t)}dW(t),\] \[d\sigma(t) =\kappa(\theta-\sigma(t))dt+\eta(t)\sqrt{\sigma(t)}dW^{\prime}(t),\] \[\eta(t) =f(Y(t),Z(t)), \tag{3.12}\] \[dY(t) =\frac{\sigma(t)}{\varepsilon}\alpha(Y(t))dt+\sqrt{\frac{\sigma (t)}{\varepsilon}}\beta(Y(t))dB(t),\] \[dZ(t) =\sigma(t)\delta c(Z(t))dt+\sqrt{\delta\sigma(t)}g(Z(t))dB^{ \prime}(t).\]
In addition to reproducing the leverage effect and clustering via mean reversion, Brownian stochastic volatility models turn out to have an additional important advantage: they have an ability to capture, to some extent, "smiley" patterns of the implied volatility (see e.g. [179], [107, Section 2.8.2] or [121]). However, by design, parametric stochastic volatility models impose structural constraints on the relationship between the dynamics of the spot and implied volatilities and hence may not be able to capture the exact shape of the implied volatility surface (see e.g. [121, Figure 3.6]).
A detailed analysis on that matter can be found in [154] and [8]: in particular, as stated in [8, Section 7.1] or [154, Remark 11.3.21], Brownian diffusion models of volatility tend to produce the at-the-money skew \(\Psi(T)=O(1)\), \(T\to 0\), where \(\Psi\) is defined by (2.8). This directly contradicts the empirical behavior of \(O(T^{-\beta})\), \(\beta\approx\frac{1}{2}\), mentioned in Subsection 2.3. In addition, as highlighted by Comte & Renault [59] (see also [50]), the decrease of the real-life volatility smile amplitude as \(T\to\infty\) seems to be much slower than predicted by the classical Brownian diffusions. Comte & Renault connect this phenomenon to the _long-range dependence_ in the volatility dynamics, which is well-aligned with several empirical studies listed above in Subsection 2.3 that also report long memory on the market.
In other words, the inherent properties of Brownian motion limit the modeling capabilities of diffusion models and it is no surprise that a lot of effort was made to advance the stochastic volatility framework further to account for the mentioned inconsistencies.
### Fractional and rough models
One of the ways to extend stochastic volatility models described in Subsection 3.2 involves substituting the standard Brownian driver \(B\) with an alternative process possessing the attributes to capture the intended stylized facts. Perhaps the most common option utilized in the literature is _fractional Brownian motion_\(B^{H}=\{B^{H}(t),t\geq 0\}\) defined as a Gaussian process with \(B^{H}(0)=0\) a.s., \(\mathbb{E}[B^{H}(t)]=0\) for all \(t\geq 0\) and
\[\mathbb{E}\left[B^{H}(t)B^{H}(s)\right]=\frac{1}{2}\left(t^{2H}+s^{2H}-|t-s|^{2 H}\right),\quad s,t\geq 0, \tag{3.13}\]
where \(H\) can take values in \((0,1)\) and is called the _Hurst index15_. Fractional Brownian motion was initially considered by Kolmogorov [148] and later reintroduced by Mandelbrot and van Ness [159] who also obtained its Volterra-type representation
Footnote 15: This parameter is named after Harold Edwin Hurst (1880–1978) who studied long-range dependence in fluctuations of the water level in the Nile River.
\[B^{H}(t):=\frac{1}{\Gamma\left(H+\frac{1}{2}\right)}\int_{-\infty}^{0}\left((t- s)^{H-\frac{1}{2}}-(-s)^{H-\frac{1}{2}}\right)dB(s)+\frac{1}{\Gamma\left(H+ \frac{1}{2}\right)}\int_{0}^{t}(t-s)^{H-\frac{1}{2}}dB(s), \tag{3.14}\]
where \(B=\{B(t),\ t\geq 0\}\) is a standard Brownian motion. In the literature, a truncated version of (3.14) called the _Riemann-Liouville fractional Brownian motion_ is often used:
\[B^{H}_{RL}(t):=\frac{1}{\Gamma\left(H+\frac{1}{2}\right)}\int_{0}^{t}(t-s)^{H -\frac{1}{2}}dB(s), \tag{3.15}\]
and the Volterra kernel
\[\mathcal{K}(t,s):=\frac{1}{\Gamma\left(H+\frac{1}{2}\right)}(t-s)^{H-\frac{1 }{2}}\,\mathbb{1}_{s<t} \tag{3.16}\]
in (3.15) is called the _fractional_ kernel.
Fractional Brownian motion is extremely convenient from the mathematical perspective: it is the only stochastic process that simultaneously [168]
* is Gaussian,
* has stationary increments and
* is _self-similar_, i.e. \[B^{H}(at)\stackrel{{\rm Law}}{{=}}a^{H}B^{H}(t),\quad\forall a \geq 0.\] (3.17)
In addition, if \(H=1/2\), \(B^{H}\) coincides with a standard Brownian motion and hence can be viewed as a broad generalization of the latter. Interestingly, the value \(H=1/2\) turns out to be the boundary between two distinct volatility modeling paradigms: one favoring \(H>1/2\) and the other advocating for \(H<1/2\), and the aim of this subsection is to characterize both of them.
Fractional models with long memory: \(H>1/2\).First of all, observe that
\[\begin{split}\mathbb{E}\left[B^{H}(1)\left(B^{H}(n)-B^{H}(n-1) \right)\right]&=\frac{1}{2}\left(n^{2H}-2(n-1)^{2H}+(n-2)^{2H} \right)\\ &\sim H(2H-1)n^{2H-2},\quad n\to\infty.\end{split} \tag{3.18}\]
Therefore, if \(H\in(1/2,1)\), the autocorrelation function behaves as \(O(n^{-\beta})\) with \(\beta\in(0,1)\), revealing the long memory property. In particular, if \(H\in(3/4,1)\), the behavior of (3.18) matches the empirical estimates \(0<\beta\leq\frac{1}{2}\) for absolute log-returns highlighted in Subsection 2.3.
Historically, the long memory of fractional Brownian motion for \(H>1/2\) was the original reason to employ the latter in volatility modeling. Namely, in 1998, Comte & Renault [59] suggested the first continuous time fractional volatility model of the form
\[\sigma(t)=\theta_{1}\exp\left\{\theta_{2}\int_{0}^{t}e^{-\theta_{3}(t-s)}dB^{ H}(s)\right\}, \tag{3.19}\]
which can mimic volatility persistence and explain slow decays in the smile amplitude of implied volatility surfaces when \(T\to\infty\) (in this regard, we also recommend the simulation study [117] that illustrates this phenomenon numerically). Other contributions studying stochastic volatility driven by fractional Brownian motion with \(H>1/2\) include:
* [182], which considers the model of the form \[\begin{split} dS(t)&=\sigma(t)dW(t),\\ \sigma(t)&=F\left(\int_{0}^{t}a(t,u)dB^{H}(u)+f(t) \xi_{0}\right)\end{split}\] where \(F\), \(a\) and \(f\) are nuisance parameters and \(\xi_{0}\) is a random initial condition;
* [57] and [28], which discuss a fractional counterpart of the model (3.6): \[\begin{split} dS(t)&=\mu S(t)dt+\sigma(Y(t))S(t)dW(t ),\\ Y(t)&=-\theta_{1}Y(t)dt+\theta_{2}dB^{H}(t),\end{split}\] (3.20) where \(\mu\), \(\theta_{1}\), \(\theta_{2}>0\) and \(\sigma\) is a deterministic function that is additionally assumed to have sublinear growth in [28];
* the series of papers [169, 170, 171] that proposes \[\begin{split} dS(t)&=\mu S(t)dt+\sigma(Y(t))S(t) dW(t),\\ dY(t)&=\frac{1}{2}\left(\frac{\theta_{1}}{Y(t)}- \theta_{2}Y(t)\right)dt+\frac{\theta_{3}}{2}dB^{H}(t),\end{split}\] (3.21) where \(\mu\), \(\theta_{1}\), \(\theta_{2}\), \(\theta_{3}>0\) and \(\sigma\) is a given function with sublinear growth. Note that (3.21) can be regarded as a fractional extension of the Heston model (3.7) since the process \(X(t)=Y^{2}(t)\) satisfies the pathwise SDE \[dX(t)=(\theta_{1}-\theta_{2}X(t))dt+\theta_{3}\sqrt{X(t)}dB^{H}(t).\]
In some models, long memory is incorporated not directly through fractional Brownian motion, but rather by combining standard Brownian diffusions with the fractional kernel (3.16). Examples of this approach can be found in e.g. [57] or [44], where the prices are assumed to follow
\[\begin{split} dS(t)&=\mu(t)S(t)dt+\sigma(t)S(t)dW(t ),\\ \sigma^{2}(t)&=\theta+\frac{1}{\Gamma\left(H+\frac{1 }{2}\right)}\int_{-\infty}^{t}(t-s)^{H-\frac{1}{2}}X(s)ds,\\ dX(t)&=\theta_{1}(\theta_{2}-X(t))dt+\theta_{3} \sqrt{X(t)}dB(t),\end{split}\]
where \(W\) and \(B\) are correlated standard Brownian motions.
As a final remark, we mention the special interplay between the long memory and self-similarity (3.17) properties of fractional Brownian motion. As previously discussed in Subsection 2.3, any measures of long-range dependence are hard to estimate statistically: the analysis of time series during overly extended time periods naturally raises concerns about data non-stationarity. However, if the data is additionally assumed to display self-similarity, its long-term behavior can be inferred from high-frequency observations over shorter periods. For a detailed discussion on self-similarity in financial time series as well as its connection with long memory, we refer the reader to [61, Subsections 2.3 and 2.4].
Rough revolution: \(H<1/2\).As highlighted in the simulation study [117], fractional Brownian motion with \(H>1/2\) indeed allows to capture the behavior of implied volatility surfaces for longer maturities. However, a high Hurst index does not seem to have any positive impact on the short-term fit. For example, Alos, Leon & Vives in Section 7.2.1 of their 2007 paper [8] analyze the behavior of the short-term implied volatility skew (2.8) generated by a variation of the model (3.20) with \(H>1/2\) and analytically prove that it does not behave as \(O(T^{-\beta})\), \(\beta\approx\frac{1}{2}\), when \(T\to 0\). Interestingly, in Section 7.2.2 of the same paper, they notice that the stochastic volatility model driven by a Riemann-Liouville fractional Brownian motion with \(H\in(0,1/2)\) produces the required skew asymptotics \(O(T^{-\frac{1}{2}+H})\) as \(T\to 0\).
The arguments in [8] were based on _Malliavin calculus_ (see e.g. [6, 173]): under certain regularity assumptions, the short-term explosion of the implied volatility skew translates to the explosion of the Malliavin derivative of volatility, the property which holds for e.g. fractional Brownian motion with \(H<1/2\). In 201416, Gatheral, Jaisson & Rosenbaum [123] advocated for \(H<1/2\) from a different perspective: using estimation techniques based on power variations, they came to a conclusion that volatility must have Holder regularity of order \(\approx 0.1\). Using (3.13), it is easy to check that
Footnote 16: Although paper [123] appeared in _Quantitative finance_ in 2018, the preprint had been available on SSRN and ArXiv since 2014.
\[\mathbb{E}[|B^{H}(t)-B^{H}(s)|^{2}]=|t-s|^{2H},\quad s,t\geq 0,\]
so, by the Kolmogorov-Chentsov theorem (see e.g. [126, p. 192]), the paths of \(B^{H}\) are Holder continuous up to the order \(H\). In other words, [123] concluded that fractional Brownian motions with very small Hurst index \(H\) are preferable choices for stochastic volatility modeling. Later in 2021, Fukasawa [113] re-visited the interplay between roughness and power-law of at-the-money volatility skew. The main theoretical result presented in [113] is as follows: _if_ the price \(S\) is a positive _continuous_ semimartingale and _if_ the at-the-money implied volatility skew (2.8) exhibits the power-law behavior of the form \(T^{-\frac{1}{2}+H}\), \(T\to 0\), \(H\approx 0\), then \(H_{0}\)-Holder continuity of the quadratic variation derivative
\[\frac{d}{dt}\langle\log S\rangle_{t} \tag{3.22}\]
leads to an arbitrage opportunity if \(H_{0}>H\). Note that (3.22) exactly coincides with \(\sigma=\{\sigma(t),\ t\geq 0\}\) in the general stochastic volatility model (3.1) and hence [113] implies that, in a continuous setting with power law of the implied volatility skew, the volatility process "_has to_" be rough to avoid arbitrage.
These collective observations rapidly developed into a vast research field known as "_rough volatility_", with hundreds of papers published over the years. For an extensive literature list on the topic that is regularly updated by specialists in the field, we refer the reader to [1]. Some notable models include (in all cases, \(H\in\left(0,\frac{1}{2}\right)\)):
* the rough fractional stochastic volatility (RFSV) model [123] \[\sigma(t)=\exp\left\{\theta_{1}+\theta_{2}\int_{-\infty}^{t}e^{-\theta_{3}(t- s)}dB^{H}(s)\right\},\] which can be regarded as a rough counterpart of the model (3.19);
* the rough Bergomi model [23, 144] \[\begin{split} dS(t)&=\sqrt{\sigma(t)}S(t)dW(t),\\ \sigma(t)&=\theta_{0}(t)\exp\left\{2\theta_{1}\int_ {0}^{t}(t-s)^{H-\frac{1}{2}}dB(s)-\theta_{1}^{2}t^{2H}\right\},\end{split}\] (3.23)
where \(\theta_{1}>0\) and \(\theta_{0}\) is a deterministic function;
* the mixed rough Bergomi model [130, 152] \[dS(t) =S(t)\sqrt{\sigma(t)}dB(t),\] (3.24) where \(\delta\in[0,1]\), \(\theta_{1}\), \(\theta_{2}>0\) and \(\theta_{0}\) is a deterministic function;
* the rough SABR model [114] where the price is given by \[dS(t) =\sqrt{\sigma(t)}\beta(S(t))dW(t),\] \[\sigma(t) =\theta_{0}(t)\exp\left\{\theta\sqrt{2H}\int_{0}^{t}(t-u)^{H- \frac{1}{2}}dB(s)-\frac{1}{2}\theta^{2}t^{2H}\right\}dB(t),\] with \(0\leq t\leq s\), \(\theta_{0}\) being a positive random process and \(\beta\) being a positive continuous function;
* the rough Stein-Stein model [137] \[dS(t) =\theta\sigma(t)S(t)dW(t),\] (3.25) \[\sigma(t) =\frac{1}{\Gamma\left(H+\frac{1}{2}\right)}\int_{0}^{t}(t-s)^{H- \frac{1}{2}}dB(s);\]
* the fast-varying rough volatility [119] \[dS(t) =\mu(t)dt+\sigma^{\varepsilon}(t)dW(t),\] (3.26) \[\sigma^{\varepsilon}(t) =F(Z^{\varepsilon}(t)),\] \[Z^{\varepsilon}(t) =\frac{\theta}{\sqrt{\varepsilon}}\int_{-\infty}^{t}\mathcal{K} _{H}\left(\frac{t-s}{\varepsilon}\right)dB(s),\] where \(\mathcal{K}_{H}(t):=C_{H}\left(t^{H-\frac{1}{2}}-\int_{0}^{t}(t-s)^{H-\frac{ 1}{2}}e^{-s}ds\right)\), \(C_{H}>0\), \(F\) is positive and bounded one-to-one function, \(\theta>0\) and \(\varepsilon>0\) is assumed to be small;
* the rough Heston model [99] \[dS(t) =\sqrt{\sigma(t)}S(t)dW(t),\] (3.27) \[\sigma(t) =\sigma(0)+\int_{0}^{t}\frac{(t-s)^{H-\frac{1}{2}}}{\Gamma\left(H +\frac{1}{2}\right)}\left(\theta_{1}(\theta_{2}-\sigma(s))ds+\theta_{3}\sqrt{ \sigma(s)}dB(s)\right)\] and its modification, the quadratic rough Heston model [124, 183], \[dS(t) =S(t)\sqrt{\sigma(t)}dW(t),\] (3.28) \[\sigma(t) =a(Z(t)-b)^{2}+c,\] \[Z(t) =\int_{0}^{t}\theta_{1}\frac{(t-s)^{H-\frac{1}{2}}}{\Gamma\left(H +\frac{1}{2}\right)}(\theta_{0}(s)-Z(s))ds+\int_{0}^{t}\theta_{2}\frac{(t-s)^{ H-\frac{1}{2}}}{\Gamma\left(H+\frac{1}{2}\right)}\sqrt{a(Z(s)-b)^{2}+cd}W(s),\] where \(B\), \(W\) are Brownian motions, \(a\), \(b\), \(c\), \(\theta_{1}\), \(\theta_{2}\), \(\theta_{3}>0\) and \(\theta_{0}\) is a deterministic function.
The rough Heston approach (3.26) is especially interesting from the modeling perspective: in addition to the power law behavior of (2.8), it can reproduce a special form of Zumbach effect [76, 98] and can be interpreted as the limit of a reasonable tick-by-tick price model based on two-dimensional Hawkes processes [97], i.e. deduces roughness of the volatility directly from the market microstructure.
Puzzles of fractionality and roughness.In principle, one can divide the majority of arguments in favor of rough volatility into two distinct categories:
* econometric studies like [123] which involve regularity estimations of volatility from historical high-frequency samples of an asset (e.g. S&P500 index);
* analyses from the options pricing perspective such as [8, 113] that justify roughness by its ability to reproduce e.g. the power law behavior of implied volatility skews (2.8).
However, as is often the case when modeling extremely complex systems like the financial market, neither of these arguments is convincing enough to be unequivocally declared undeniable and immune to valid criticism. For example, the methodology used in [123] was examined by Rogers [181], Cont & Das [63] as well as Fukasawa, Takabatake & Westphal [115, 116]: they apply the same regularity estimation procedure as in [123] to _synthetic_ datasets and report that the estimator tends to produce low values of _H regardless of the true parameter_ used in the simulation. This issue is consensually explained as follows in the abovementioned literature: since the volatility \(\sigma\) is not observable directly, one must first extract some volatility proxy from the stock data and then use _it_ as the sample for regularity estimation. However, this procedure results in additional approximation errors that bias the estimator of \(H\) towards zero. The authors of [123] acknowledge this issue themselves writing that "_estimation errors when estimating volatility can be quite significant for some models, leading to downward biases in the measurement of the smoothness_" [123, p. 946].
Interestingly, there were several subsequent attempts to account for the problem described above. For example, Fukasawa, Takabatake & Westphal [115, 116] develop a more robust estimation technique and still report that \(H<0.1\) is indeed the best fit for the volatility model
\[d\log\sigma^{2}(t)=\theta_{1}(t)dt+\theta_{2}dB^{H}(t) \tag{3.28}\]
with \(\theta_{1}\) being an unknown adapted cadlag process. Bolko et. al. [33] perform the generalized method of moment (GMM) estimation and also confirm the roughness of volatility for thirty-one leading stock indexes under the assumption that \(\log\sigma^{2}\) is a fractional Ornstein-Uhlenbeck process.
Next, the claim that the implied volatility skew follows the power law can also be challenged. For example, Guyon & El Amrani [133] investigate the volatility skew behavior in more detail and conclude that
"_...power law fits the term-structure of ATM skew of equity indexes quite well over a large range of maturities, from 1 month to a few years17. But this should not lead one to conclude that the ATM skew blows up like this power law at zero maturity. Zooming in on the short maturities shows that the power-law extrapolation fits the data poorly._"
Footnote 17: In the dataset we used to produce Fig. 3a above, the shortest maturity was 10 days.
In summary, Guyon & El Amrani [133, p. 14] argue that "_far from being infinite, the zero-maturity extrapolation of the skew18 is distributed around a typical value of 1.5_". On the contrary,
Delemotte, De Marco & Segonne [81] investigate the behavior of average volatility skews between 2007 and 2015 (a period that is different from 2020-2022 of Guyon & El Amrani [133]) and find that a power-law-type explosive behavior of the implied volatility skew when \(T\to 0\) is entirely appropriate! In addition, they argue [81, Section 3] that the average skew behaves differently for shorter and longer maturities: their analysis suggests that
\[\mathbb{E}[\Psi(T)]\propto\begin{cases}T^{-\frac{1}{2}+H_{1}},&T<T_{1},\\ T^{-\frac{1}{2}+H_{2}},&T>T_{1},\end{cases} \tag{3.29}\]
where \(H_{1}>H_{2}\) and \(T_{1}\) is roughly 2 months.
What is the reason for such a discrepancy in conclusions regarding very similar (and sometimes _literally the same_) datasets? Features like the behavior of the volatility skew are very intricate and their assessment based on discrete data leaves much space for interpretation and extrapolation. For example, Delemotte, De Marco & Segonne [81, pp. 2-3] compare their result to [133] and summarize the difference as follows:
"_As a result, both models could very reasonably be used to extrapolate the ATM skew for very short maturities, and in any case below the first maturity quoted on the market, while leading to different skew asymptotics: finite limiting skew in the 2-factors Bergomi model19, and exploding skew in the 2PL model20--indicating that the question of the explosive or non-explosive nature of the short-end of the skew curve might be (at least in the case of the SP500 index) hard to disambiguate_."
Footnote 19: The one of [133].
Footnote 20: The one of [81].
In some sense, this ambiguity is very natural: the volatility itself is a purely theoretical (albeit meaningful) concept, and hence there is no single "_ultimately true volatility model_". In this context, the question "_is volatility truly rough_?" lacks a definitive "_correct_" answer. Rough volatility is a valid modeling framework with its advantages and limitations which indeed mimics some important features of the data: for example, the empirically observed steepness of smiles at-the-money does increase close to maturity (as clearly visible on Fig. 2), and rough models seem to capture this effect better than classical Brownian diffusions, irrespective of whether this increase "truly" follows the power law. In turn, econometric analyses like [33, 115, 116] can be viewed as reasonable reality checks which can reveal possible points of contradiction with data and test specific models against each other: for example, the results of [33, 115, 116] could mean that (3.28) with small \(H\) seems to be more consistent with some features of the data than e.g. (3.8) with \(\beta=1\) or (3.19) with \(H>\frac{1}{2}\). On the other hand, other models may also be compatible with observations: after all, according to a famous aphorism commonly attributed to the statistician George Box, _all models are wrong, but some are useful_.
With this disclaimer in mind, let us finish this Subsection with a discussion of another intricate point coming from the interplay between roughness and long memory _in the specific context of fractional Brownian motion \(B^{H}\)_. More precisely, _long memory_ requires the Hurst index \(H>1/2\) whereas _roughness_ demands \(H<1/2\). Despite some studies that demonstrate spurious long memory appearing due to model misspecifications (see e.g. [123, Section 4]), roughness alone cannot explain the behavior of the entire volatility surface. For example, Funahashi & Kijima [118] demonstrate that volatility models based on fractional Brownian motion with \(H<1/2\) do not give the required rate of decrease in the smile amplitude as \(T\to\infty\) whereas \(H>1/2\) gives this effect (see also [117]).
It should be noted that this contradiction, referred to as the "_fractional modeling puzzle_" in [118], is somewhat synthetic: in general, long memory and roughness do not depend on each other and can co-exist within a single stochastic process. In other words, the reason for this "_fractional puzzle_" comes from the properties of fractional Brownian motion itself and a different modeling framework may utilize both of these features without any structural contradictions (see e.g. the model based on Brownian semistationary processes by Bennedsen, Lunde & Pakkanen [25]). Nevertheless, in a continuous setting, \(B^{H}\) seems to be an extremely convenient and valuable asset from the mathematical perspective:
* it is Gaussian, enabling the utilization of numerous methods from Gaussian process theory including efficient numerical methods, Malliavin calculus etc. (see e.g. [173, Chapter 5]);
* it has stationary increments and is ergodic, which facilitates various statistical estimation techniques (see e.g. [150]);
* despite being non-semimartingale, fractional Brownian motion has a developed stochastic integration theory [168].
In the literature, there are several possible approaches to incorporate long memory and roughness within the fractional framework. The most straightforward methodology is to use two fractional Brownian motions with different Hurst indices. Such model was utilized in e.g. [118] (see also [6, Section 7.7]) in the form
\[\begin{split} dS(t)&=\mu S(t)dt+\sigma(X^{1}(t),X ^{2}(t))dW(t),\\ dX^{i}(t)&=(\theta_{1}^{i}-\theta_{2}^{i}X^{i}(t) )dt+\theta_{3}^{i}dB^{H_{i}}(t),\quad i=1,2,\end{split} \tag{3.30}\]
with \(B^{H_{1}}\), \(B^{H_{2}}\) being two fractional Brownian motions with \(H_{1}>1/2\) and \(H_{2}<1/2\). The authors report that this model indeed manages to grasp the implied volatility surface with the power law for short maturities and slower decay of the smile amplitude for long maturities. Such a suggestion somehow resonates with the observations made in [81]: the authors also discuss the possibility of introducing several factors with different regularity into the model in order to reproduce (3.29) (although they consider \(0<H_{2}<H_{2}<\frac{1}{2}\)).
Another interesting possibility - _a multifractional Brownian motion_ - is advocated in Corlay et. al. [68] (see also [12]). There, the authors estimate the local roughness of the volatility and conclude that it is heavily variable and has periods of low (\(\approx 0.1\)) and high (\(\approx 0.8\)) regularity (see [68, Figure 2]). It is also important to note that [123, Section 2.6] also reports some dependence of the volatility roughness on time.
As a generalization of both approaches mentioned above, one can also consider the usage of _Gaussian Volterra processes_\(Z(t):=\int_{0}^{t}\mathcal{K}(t,s)dB(s)\). Such volatility drivers were considered in e.g. [51, 165] or the series of papers [84, 86].
### Usability challenges of stochastic volatility models
We finish this section with several remarks that, in our opinion, are worth the attention of the reader.
Positivity of volatility.As mentioned in Subsection 3.2, one of the natural expectations from a "_reasonable_" volatility model is the positivity of its paths. This requirement actually goes beyond the simple consideration that \(\sigma=\{\sigma(t),\ t\geq 0\}\) should resemble its original proxy (1.1) and is connected to the procedure of transition between the physical and the pricing measures. In the Black-Scholes-type models (3.1), martingale densities usually involve terms of the form
\(\int_{0}^{T}\frac{1}{\sigma(s)}dW(s)\) and \(\int_{0}^{T}\frac{1}{\sigma^{2}(s)}ds\) (see e.g. [29, Proposition 1.11]) which can be poorly defined, if the volatility hits zero with positive probability. Of course, one may model the market under the pricing measure in the first place, but, in this situation, one sacrifices the ambition to justify their approach with econometric analysis based on historical time series a la [123] which, of course, should be performed under the physical measure.
Possibility of moment explosionsA common issue for the stochastic volatility framework is the possibility of moment explosions in price [10]. That means that the moment of the price \(\mathbb{E}[S^{r}(t)]\) may be infinite for all time points \(t\) after some \(t_{*}\). Moment explosions can be a notable drawback from at least two perspectives. First, numerical schemes involving \(L^{r}\)-convergence become inaccessible. Another perspective is asset pricing: as it is noted in [10, Section 8], "_several actively traded fixed-income derivatives require at least \(L^{2}\) solutions to avoid infinite model prices_". In principle, both positivity and absence of moment explosions can be achieved by assuming bounds on the volatility; see e.g. [9, 28, 84, 86, 108, 119, 174] and the footnote on p. 2 in [183]. It seems that this assumption is not too aggravating: in industry, there seems to be a consensus [95] that real-world proxies of volatility are typically range-bound.
Importance of numerical methods.As noted in [121, p. 24], the standard Heston model is still widely used despite all the empirical inconsistencies outlined in Subsection 3.2. The reason for that is mainly the availability of algorithms for practically all possible applications. Stochastic volatility models normally do not have closed-form expressions for option pricing, portfolio optimization, hedging etc., and therefore the development of efficient numerics for them is of acute importance. It is especially relevant for fractional/rough models which are predominantly non-Markovian and hence cannot utilize various stochastic optimization techniques developed for Markov processes. In recent years, some work in this direction was done in e.g. [3, 4, 84, 85, 86, 183].
## 4 VIX and the joint calibration puzzle
As previously mentioned in the Introduction, volatility is an important tool for quantifying risk, in addition to being a critical variable in option pricing. Therefore, it is no surprise that market actors have been seeking dedicated financial instruments to hedge against drastic volatility changes and capitalize on overall volatility trends. Nowadays, derivatives written on various volatility proxies are extremely popular among investors: for instance, according to the Cboe Annual Report 2022 [52], the _CBOE Volatility Index_ (_VIX_, see Fig. 6) is one of the most traded underlying assets of the Chicago Board Options Exchange, alongside the S&P 500 (SPX) index.
Since volatility is not a directly observable value and, in some sense, it comes from the theoretical domain, its quantification is a necessary step before considering its use as an underlying asset. Naturally, various methodologies have been applied for such quantification; a detailed historical overview of this subject can be found in e.g. [46, Section 3]. Here we mention the early approaches of Gastineau (1977) [120] and Cox & Rubinstein (1985) [72, Appendix 8A], who proposed indices based on the implied volatility, as well as Brenner & Galai (1989) [41], who suggested a metric coming from the realized volatility and considered derivatives employing it as the underlying assets. Another index utilizing an average of S&P100 option implied volatilities was introduced by Fleming et. al. [106], and this methodology was used by the Chicago Board Options Exchange for computation of their early version of VIX between 1993 and 2003. The general idea behind the modern computation of VIX can be traced back to Breeden & Litzenberger (1978) [39], but the exact formulas were crystalized in the works of Dupire [93] and Carr
& Madan [48].
Nowadays, VIX is the preeminent volatility proxy on a global scale, closely connected to a volatility modeling challenge occasionally referred to as "_The Holy Grail of volatility modeling_" [124, 131]. In order to outline this problem in more detail and prevent any ambiguity, let us first dedicate some time and delve into the specifics of VIX computation and interpretation.
### Intermezzo: VIX and its interpretation
Assume that the SPX forward price \(S\) is a continuous martingale satisfying
\[dS(t)=\sigma(t)S(t)dW(t), \tag{4.1}\]
on a filtered probability space \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\},\mathbb{Q})\) under the risk-neutral probability measure \(\mathbb{Q}\), where \(W\) is a \(\mathbb{Q}\)-Brownian motion and \(\sigma=\{\sigma(t),\ t\geq 0\}\) is an adapted square integrable stochastic process. It is easy to verify that, under some mild assumptions on \(\sigma\), (4.1) has a unique solution of the form
\[S(t)=S(0)\exp\left\{-\frac{1}{2}\int_{0}^{t}\sigma^{2}(s)ds+\int_{0}^{t} \sigma(s)dW(s)\right\}.\]
Next, fix a period of time \(T\) (in the case of CBOE VIX, \(T=30\) days) and consider the value
\[-\frac{2}{T}\mathbb{E}_{\mathbb{Q}}\left[\log\frac{S(t+T)}{S(t)}\ \Big{|}\ \mathcal{F}_{t}\right]. \tag{4.2}\]
One the one hand, since, by the martingale property,
\[\mathbb{E}_{\mathbb{Q}}\left[\int_{t}^{t+T}\sigma(s)dW(s)\ \Big{|}\ \mathcal{F}_{t}\right]=0,\]
Figure 6: Daily values of the CBOE VIX index, 2005-2023. Note the two spikes in 2008 and 2020 which correspond to the respective economic crises. The data is retrieved from CBOE.com [2].
one can observe that
\[\begin{split}-\frac{2}{T}\mathbb{E}_{\mathbb{Q}}\left[\log\frac{S(t+T) }{S(t)}\ \Big{|}\ \mathcal{F}_{t}\right]&=-\frac{2}{T}\mathbb{E}_{\mathbb{Q}} \left[-\frac{1}{2}\int_{t}^{t+T}\sigma^{2}(s)ds+\int_{t}^{t+T}\sigma(s)dW(s) \ \Big{|}\ \mathcal{F}_{t}\right]\\ &=\frac{1}{T}\int_{t}^{t+T}\mathbb{E}_{\mathbb{Q}}\left[\sigma^{2 }(s)\ \big{|}\ \mathcal{F}_{t}\right]ds.\end{split} \tag{4.3}\]
On the other hand, by Taylor's formula with integral remainder,
\[\log\frac{S(t+T)}{S(t)} =\frac{S(t+T)-S(t)}{S(t)}-\int_{S(t)}^{S(t+T)}\frac{(S(t+T)-K)}{K^ {2}}dK\] \[=\frac{S(t+T)-S(t)}{S(t)}-\int_{0}^{S(t)}\frac{(K-S(t+T))_{+}}{K^ {2}}dK-\int_{S(t)}^{\infty}\frac{(S(t+T)-K)_{+}}{K^{2}}dK,\]
and hence, since
\[\mathbb{E}_{\mathbb{Q}}\left[\frac{S(t+T)-S(t)}{S(t)}\ \Big{|}\ \mathcal{F}_{t} \right]=0\]
by the martingale property, (4.2) can be re-written as
\[\begin{split}-\frac{2}{T}\mathbb{E}_{\mathbb{Q}}\left[\log \frac{S(t+T)}{S(t)}\ \Big{|}\ \mathcal{F}_{t}\right]&=\mathbb{E}_{ \mathbb{Q}}\left[\frac{2}{T}\int_{0}^{S(t)}\frac{(K-S(t+T))_{+}}{K^{2}}dK\ \Big{|}\ \mathcal{F}_{t}\right]\\ &\qquad+\mathbb{E}_{\mathbb{Q}}\left[\int_{S(t)}^{\infty}\frac{ (S(t+T)-K)_{+}}{K^{2}}dK\ \Big{|}\ \mathcal{F}_{t}\right]\\ &=\frac{2}{T}\int_{0}^{S(t)}\frac{\mathbb{E}_{\mathbb{Q}}\left[( K-S(t+T))_{+}\ |\ \mathcal{F}_{t}\right]}{K^{2}}dK\\ &\qquad+\int_{S(t)}^{\infty}\frac{\mathbb{E}_{\mathbb{Q}}\left[( S(t+T)-K)_{+}\ |\ \mathcal{F}_{t}\right]}{K^{2}}dK\\ &=\frac{2e^{rT}}{T}\left(\int_{0}^{S(t)}\frac{P_{t}(K,T)}{K^{2}} dK+\int_{S(t)}^{\infty}\frac{C_{t}(K,T)}{K^{2}}dK\right),\end{split} \tag{4.4}\]
where \(r\) is the instantaneous interest rate and \(P_{t}(K,T)\), \(C_{t}(K,T)\) are, respectively, the prices (at moment \(t\)) of put and call options with strike \(K\) and expiry date \(t+T\). To summarize, under the setting specified above, (4.3) and (4.4) yield
\[\frac{1}{T}\int_{t}^{t+T}\mathbb{E}_{\mathbb{Q}}\left[\sigma^{2}(s)\ \big{|}\ \mathcal{F}_{t} \right]ds=\frac{2e^{rT}}{T}\left(\int_{0}^{S(t)}\frac{P_{t}(K,T)}{K^{2}}dK+ \int_{S(t)}^{\infty}\frac{C_{t}(K,T)}{K^{2}}dK\right). \tag{4.5}\]
VIX is computed (see e.g. [53] or [151, Section 1.5.2]) by discretizing the right-hand side of (4.5) (note that the latter relies exclusively on the empirically observable values, SPX forward \(S(t)\) and put/call option prices):
\[\text{VIX}_{t}^{2}(T):=\frac{2e^{rT}}{T}\left(\sum_{j=-M}^{-1}\frac{P_{t}(K_{j },T)}{K_{j}^{2}}(K_{j+1}-K_{j})+\sum_{i=1}^{N}\frac{C_{t}(K_{i},T)}{K_{i}^{2}} (K_{i}-K_{i-1})\right)-\frac{1}{T}\left(\frac{S(t)}{K_{0}}-1\right)^{2},\]
where \(K_{-M}<K_{-(M-1)}<...<K_{0}\) are payoffs of all listed out-of-the-money put options with maturity at \(t+T\), \(K_{0}\leq S(t)\), and \(K_{1}<K_{2}<...<K_{N}\) are payoffs of all listed out-of-the-money call options with maturity at \(t+T\), \(K_{1}>S(t)\). In the meanwhile, the left-hand side of (4.5) is a standard notion of the forward variance, precisely the value one intends to capture.
### VIX smile and the joint calibration puzzle
Despite any potential inaccuracies, VIX stands as the most popular proxy for market volatility in the world and, therefore, it is unsurprising that derivatives with VIX as an underlying are actively traded on the market. In particular, in March 2004, CBOE launched VIX futures and, in February 2006, VIX options were introduced - and, naturally, a problem of pricing such derivatives emerged.
First of all, having VIX option prices, one can use the procedure described in Subsection 2.2 and construct the corresponding implied volatility surface \(\widehat{\sigma}(T,K)\), with \(T\) denoting the time to maturity and \(K\) being the strike. Contrary to the convex "smiles" for derivatives written on classical stocks and stock indices, the VIX implied volatility \(\widehat{\sigma}(T,K)\) consistently exhibits concave behavior in \(K\) for every fixed \(T\) with positive slope at the money (see [7, Figure 1] as well as [19, 110] or [147]).
It turns out that continuous stochastic volatility models have mixed success in reproducing this phenomenon, especially for shorter maturities - in this regard, we recommend an excellent review [186] on the topic. Alos, Garcia-Lorite & Gonzalez [7] (see also [6, Chapter 10]) use Malliavin calculus tools and analyze selected models to check whether they produce the positive slope at the money for short maturities. Their results show that
* SABR model (3.9) gives the flat skew, but its modification called the _mixed SABR model_ \[\sigma(t)=\theta_{0}\left(\delta\exp\left\{2\theta_{1}W(t)-\theta_{1}^{2}t \right\}+(1-\delta)\exp\left\{2\theta_{2}W(t)-\theta_{2}^{2}t\right\}\right),\] where \(\delta\in[0,1]\), \(\theta_{0}\), \(\theta_{1}\), \(\theta_{2}>0\), generates the required positive slope provided that \(\delta\neq 0,1\) and \(\theta_{1}\neq\theta_{2}\);
* Heston model (3.7) normally generates negative VIX skew (see also [127, Figure 3]); however, under some conditions on coefficients (e.g. when Feller condition is violated, see [147]), the slope can be positive;
* the rough Bergomi model (3.23) as well as its mixed modification (3.24) both give the required slope (see also [144]).
Other notable continuous stochastic volatility that are able to capture the upward smirk include:
* the \(3/2\)-model (3.10) [19, Section 2];
* double CEV-model (3.11) [121];
* Heston with stochastic vol-of-vol model (3.12) [110];
* quintic Ornstein-Uhlenbeck model [5] \[\sigma(t) =\sqrt{\theta_{0}(t)}\frac{p(X(t))}{\sqrt{\mathbb{E}\left[p^{2}(X (t))\right]}},\] (4.6) \[X(t) =\varepsilon^{-\theta}\int_{0}^{t}e^{-\frac{\theta}{\varepsilon}( t-s)}dB(s),\] and, more generally, a class of Gaussian polynomial models [4] \[\sigma(t) =\sqrt{\theta_{0}(t)}\frac{p(X(t))}{\sqrt{\mathbb{E}\left[p^{2}(X (t))\right]}},\] (4.7) \[X(t) =\int_{0}^{t}\mathcal{K}(t-s)dB(s),\] where \(\theta>0\), \(p\) is a polynomial and \(\mathcal{K}\) is a square integrable Volterra kernel;
* quadratic rough Heston model (3.27).
As one can see, multiple continuous models, both standard Brownian diffusions and rough volatility models, are able to reproduce the shape of the VIX smile. However, there is an additional dimension to that problem: since there is a direct connection between SPX and VIX, models calibrated with respect to the SPX data should be consistent with VIX and vice versa. In other words, one seeks a market model that would _jointly calibrate to both VIX and SPX simultaneously_. As noted by Guyon in [131], "_Without such models, financial institutions could possibly arbitrage each other, and even market making desks within the same institution could do so..._".
It turns out that this problem, known as the "_joint calibration puzzle_", is extremely difficult to solve; so much so that some researchers label it "_the Holy Grail of volatility modeling_" [124, 131]. For a long time, continuous-time stochastic volatility models without jumps failed to produce the perfect SPX-VIX joint calibration. Some notable attempts include the 3/2 model [19], double CEV-model [121], Heston with stochastic vol-of-vol model [110] and rough Berghomi model [144] - in all cases, the joint fit was only partially successful, predominantly failing for short maturities (see also an excellent presentation by J. Guyon [130] on the topic). The most fruitful approaches in solving the joint calibration puzzle were outside of the continuous-time-continuous-price paradigm:
* Cont & Kokholm [64] use jump-diffusions to directly model forward variances and SPX; jumps indeed allow to de-couple the ATM S&P500 skew and the ATM VIX implied volatility giving a good joint fit;
* Guyon [129, 131] utilizes nonparametric discrete time model building joint probability measure on SPX and VIX on a discrete set of dates; this allows to obtain an almost perfect joint calibration.
However, in recent years there have been several successful attempts inside the fully continuous framework, all with a higher number of parameters:
* Gatheral, Jusselin & Rosenbaum [124] and Rosenbaum & Zhang [183] show that quadratic rough Heston model (3.27) calibrates very well to SPX and VIX smiles;
* Abi Jaber, Illand & Li [4, 5] provide a good fit with Gaussian polynomial models (4.7);
* Bourgey & Guyon [132, Section 4] extend the discrete time model of [129, 131] to a continuous-time setting;
* Guyon & Mustapha [135] achieve an impressive calibration with the model \[dS_{t} =-\frac{1}{2}\sigma_{S}^{2}(t,S_{t},Y_{t})dt+\sigma_{S}(t,S_{t}, Y_{t})dW_{t},\] \[dY_{t} =\mu_{Y}(t,S_{t},Y_{t})dt+\sigma_{Y}(t,S_{t},Y_{t})\left(\rho(t, S_{t},Y_{t})dW_{t}+\sqrt{1-\rho^{2}(t,S_{t},Y_{t})}dB_{t}\right),\] where the drift, volatility and correlation functions \(\sigma_{S}\), \(\mu_{Y}\), \(\sigma_{Y}\) and \(\rho\) are modeled with neural networks.
### VIX and continuous models: interpretation issues
We finalize this section by highlighting a significant caveat related to the interpretation and tractability of VIX within the context of continuous stochastic volatility models.
Despite the undoubted elegance of VIX computation methodology, the precision of VIX per se in capturing the volatility of SPX heavily depends on the underlying assumptions. Namely, it is crucial that SPX is assumed to follow the continuous dynamics (4.1) and any deviations from this model lead to some deterioration of the VIX tractability. In particular, if there are jumps in the SPX forward dynamics, (4.3) does not hold with an additional term appearing on the right-hand side of the expression (see e.g. [15]). Therefore, if (4.1) is violated, (4.5) does not hold either, and VIX loses its original interpretation.
It is important to note that the viability of the model (4.1) can be tested statistically by comparing VIX to _variance swap contracts_. By definition, a _variance swap_ is a futures contract with a payoff at the moment \(T\) of the form
\[\frac{1}{T}\sum_{k=1}^{n}\left(\log\frac{S(t_{k})}{S(t_{k-1})}\right)^{2}-VS,\]
where \(0=t_{0}<t_{1}<\cdots<t_{n}=T\) is a partition of \([0,T]\) and \(VS\) denotes the corresponding swap rate that is determined by the market. Non-arbitrage arguments imply that, if the dynamics (4.1) indeed holds and some mild assumptions on \(\sigma\) are satisfied,
\[VS=\mathbb{E}_{\mathbb{Q}}\left[\frac{1}{T}\sum_{k=1}^{n}\left(\log\frac{S(t_ {k})}{S(t_{k-1})}\right)^{2}\right]\rightarrow\frac{1}{T}\int_{0}^{T}E_{ \mathbb{Q}}\left[\sigma^{2}(s)\right]ds\]
as \(\max_{k}|t_{k}-t_{k-1}|\to 0\). In other words, under the model (4.1), market swap rates \(\sqrt{VS}\) should approximately coincide with VIX. However, as reported by Ait-Sahalia et. al. in their analysis [15] using the over-the-counter swap rate data, there is a statistically significant gap between \(VS\) and VIX\({}^{2}\): the values of \(VS-\text{VIX}^{2}\) are mostly positive, larger during market turmoils but sizable even in quiet times. For more details in that regard as well as for possible alternatives to the current VIX computation method, we also refer the reader to the discussions in [11, 47, 162].
In other words, there is some evidence showing that the tractability of VIX as a volatility index is not that straightforward and the real-world observations of VIX are not sampled from
\[\sqrt{\frac{1}{T}\int_{t}^{t+T}\mathbb{E}_{\mathbb{Q}}\left[\sigma^{2}(s)\ \big{|}\ \mathcal{F}_{t}\right]ds} \tag{4.8}\]
as models with continuous price dynamics assume21. Therefore, fitting stochastic volatility models with the continuous price dynamics to VIX without any adjustments may in principle produce unstable results.
Footnote 21: As suggested in [162], in the presence of jumps, VIX measures _the risk-neutral entropy_ of the simple return on the S&P 500 index rather than (4.8).
## 5 Concluding remarks
In this survey, we outlined - though with considerably broad brushstrokes - the existing literature on stochastic volatility. Clearly, the number of approaches is very large and we acknowledge that we are far from describing all of them, even within the considerably narrower framework of continuous models. Nevertheless, we hope that the reader received a general understanding of the motivation behind this modeling paradigm.
We conclude our presentation by saying that updating, refining and improving models still seems an endless race. Right now, new techniques and modeling paradigms are being developed
- in particular, we mention [134] which finds strong arguments for fully path-dependent volatility models as well as machine learning and signature methods such as in [42, 45] or [73, 74] (the model in the latter actually includes the Gaussian polynomial models (4.7) as a special case). Perhaps these novel tools will become the new classics and open new frontiers in understanding the financial market.
|
2310.15125 | Explosive Chrysopoeia | Fulminating gold, the first high-explosive compound to be discovered,
disintegrates in a mysterious cloud of purple smoke, the nature of which has
been speculated upon since its discovery in 1585. In this work, we show that
the colour of the smoke is due to the presence of gold nanoparticles. | Jan Maurycy Uszko, Stephen J. Eichhorn, Avinash J. Patil, Simon R. Hall | 2023-10-19T14:41:06Z | http://arxiv.org/abs/2310.15125v1 | **Explosive Chrysopoeia**
## Abstract
**Fulminating gold, the first high-explosive compound to be discovered, disintegrates in a mysterious cloud of purple smoke, the nature of which has been speculated upon since its discovery in 1585. In this work, we show that the colour of the smoke is due to the presence of gold nanoparticles.**
## Main
The alchemist's fascination with the transmutation of base metals into gold (chrysopoeia) led to the discovery of the first high-explosive compound, fulminating gold.\({}^{1}\)Its first synthesis was described in 1585 by Sebalt Schwartzer, requiring four to five days to complete.\({}^{2}\) Since then, the process has been studied and improved to the point where fulminating gold can now be synthesized in minutes by simply mixing gold (III) compounds with ammonia.\({}^{1}\) The ease at which this material can be synthesized has stimulated research into it, leading to reviews of the current state of research into fulminating gold being periodically published by scientific journals from "Uber die Stickstoffverbindungen des Goldes" in 1915 to "Fulminating Gold and Silver" in
2019 [1, 3, 4]. Besides academic interest, the other motivator for research on fulminating gold has been safety. The danger of accidentally creating highly explosive, touch-sensitive side products resulted in warnings on the use of gold (III) compounds being printed in journals, editorial letters, chemical textbooks as far back as 1851, and safety manuals like the popular "Bretherick's Handbook of Reactive Chemical Hazards" [1, 5, 6, 7]. The interest in fulminating gold has even spread to digital media, with a video showing the synthesis of fulminating gold by Thomas De Prinse on his YouTube channel "Explosions&Fire" reaching almost 1 million views since it was first posted in 2020 [8]. In this video, De Prinse repeats a supposition that is often stated in relation to the source of the unusual red or purple colouration of the smoke created in the detonation of fulminating gold, that it is due to the presence of gold nanoparticles. There is circumstantial evidence that the smoke consists of gold nanoparticles, as it has been used to coat objects in a purple/crimson patina as described in "Opera Chimica" by Glauber [4, 9],[4, 9] much in the same way that solutions of gold nanoparticles can be used to coat substrates with purple/red layers [10]. To date, there has however been no experimental verification of this. Here, we show for the first time that the explosion of fulminating gold creates gold nanoparticles, ranging in size from 10 to 300 nm. Furthermore, given the extreme rapidity of their creation, they are more isotropic than nanoparticles created by conventional methods.
A typical synthesis was as follows: chloroauric acid (20 mg, Sigma Aldrich) was dissolved in 1 ml of deionized water in a polypropylene tube. To this solution, ammonium hydroxide (28 - 30 w/w % Fisher Scientific) was added dropwise until an orange precipitate formed. The suspension was divided into four aliquots and each placed on separate aluminium foils to dry overnight in air at room temperature. After drying, samples of approximately 5 mg were detonated by the application of heat to the aluminium foil, whilst carbon-coated TEM grids (copper 200 mesh, Agar Scientific) were held above the foil to catch the resultant purple cloud.
Figure 1 shows a high-resolution TEM image of a single nanoparticle with visible lattice fringes, that have a spacing of 0.24 nm, consistent with the (111) crystal planes of Au. The selected area electron diffraction pattern illustrated in Fig. 2 and Table 1 confirms that they are indeed Au(0), as per the Joint Committee on Powder Diffraction Standards (JCPDS) card no. 04-0784.
\begin{table}
\begin{tabular}{c|c|c|c|c} \multicolumn{4}{c}{Ring identification (JCPDS \#04-0784)} \\ \hline Plane & \multicolumn{2}{c}{Radius [1/nm]} & \multicolumn{2}{c}{\(d\)-spacing [nm]} \\ & theor. & measured & theor. & measured \\ \hline (1 1 1) & 4.245 & 4.072 & 0.236 & 0.246 \\ (0 0 2) & 4.902 & 4.668 & 0.204 & 0.214 \\ (0 2 2) & 6.932 & 6.705 & 0.144 & 0.149 \\ (1 1 3) & 8.129 & 7.847 & 0.123 & 0.127 \\ (1 3 3) & 10.684 & 10.380 & 0.094 & 0.096 \\ (2 2 4) & 12.007 & 11.522 & 0.083 & 0.087 \\ (2 2 4) & 12.007 & 12.217 & 0.083 & 0.082 \\ (0 4 4) & 13.865 & 13.956 & 0.072 & 0.072 \\ \end{tabular}
\end{table}
Table 1: Miller indices for electron diffraction rings with the theoretical (theor.) and measured radii and \(d\)-spacings of the rings.
Figure 2: Selected Area Electron Diffraction ring pattern from gold nanoparticles. The rings are indexed to Au as per the Joint Committee on Powder Diffraction Standards (JCPDS) card no. 04-0784.
TEM images of grids that were exposed to the purple cloud showed clusters of spherical nanoparticles exhibiting a wide size distribution from 30 nm to over 300 nm, with an average particle diameter of 40 nm [\(\sigma=44\)] illustrated in Figs. 3 and 4. The broad particle size distribution is indicative of the extreme rapidity of synthesis, with no possibility of achieving a lower polydispersity via the usual mechanisms of Ostwald ripening or through ligand passivation [11, 12]. The absence of well-defined facets in the nanoparticles is intriguing and indicates the accelerated synthesis. The formation of the usual faceted or even triangular morphology of Au nanoparticles is effectively prevented through their creation by detonation. In this way, larger gold nanoparticles can be created with a sphericity more commonly seen in the early stages of formation when the nanoparticles are small [12].
Figure 3: TEM image of a cluster of gold nanoparticles captured from detonated fulminating gold. The image demonstrates visually the broad dispersion of particle sizes.
his work is proof of the long-supposed nature of the cloud produced on the detonation of fulminating gold, but also potentially opens the door to fast solvent- and capping agent-free syntheses of metal nanoparticles.
|
2305.06489 | Intrinsically patterned two-dimensional transition metal halides | Patterning and defect engineering are key methods to tune 2D materials'
properties. However, generating 2D periodic patterns of point defects in 2D
materials has been elusive until now, despite the well-established methods for
creating isolated point defects and defect lines. Herein, we report on
intrinsically patterned 2D transition metal dihalides on metal surfaces
featuring periodic halogen vacancies that result in alternating coordination of
the transition metal atoms throughout the film. Using low-temperature scanning
probe microscopy and low-energy electron diffraction, we identified the
structural properties of patterned FeBr$_2$ and CoBr$_2$ monolayers grown
epitaxially on Au(111). Density-functional theory reveals that the Br-vacancies
are facilitated by low formation energies and accompanied by a lateral
softening of the layers leading to a significant reduction of the lattice
mismatch to the underlying Au(111). We demonstrate that interfacial epitaxial
strain engineering presents a versatile strategy for controlled patterning in
2D. In particular, patterning 2D magnets provides new pathways to create
unconventional spin textures with non-collinear spin. | Feifei Xiang, Neeta Bisht, Binbin Da, Mohammed S. G. Mohammed, Christian Neiß, Andreas Görling, Sabine Maier | 2023-05-10T22:48:03Z | http://arxiv.org/abs/2305.06489v1 | # Intrinsically patterned two-dimensional transition metal halides
###### Abstract
Patterning and defect engineering are key methods to tune 2D materials' properties. However, generating 2D periodic patterns of point defects in 2D materials has been elusive until now, despite the well-established methods for creating isolated point defects and defect lines. Herein, we report on intrinsically patterned 2D transition metal dihalides on metal surfaces featuring periodic halogen vacancies that result in alternating coordination of the transition metal atoms throughout the film. Using low-temperature scanning probe microscopy and low-energy electron diffraction, we identified the structural properties of patterned FeBr\({}_{2}\) and CoBr\({}_{2}\) monolayers grown epitaxially on Au(111). Density-functional theory reveals that the Br-vacancies are facilitated by low formation energies and accompanied by a lateral softening of the layers leading to a significant reduction of the lattice mismatch to the underlying Au(111). We demonstrate that interfacial epitaxial strain engineering presents a versatile strategy for controlled patterning in 2D. In particular, patterning 2D magnets provides new pathways to create unconventional spin textures with non-collinear spin.
## I Introduction
Defect engineering and polymorphisms are two widely used concepts to create novel architectures and introduce new functionalities into two-dimensional (2D) materials. These concepts have been widely studied in van der Waals (vdW) materials like transition metal dichalcogenides (TMDCs), with much attention given to polymorphs, point defects, and line defects. However, the controlled assembly of 2D periodic patterns of point defects has remained elusive.[1; 2] Vacancy lattices are particularly interesting for the selective functionalization of 2D materials with atoms and molecules as well as tuning their electronic properties. Instead, patterning in 2D was focused intensely on Moire patterns using vdW-materials heterostacks.[3]
We demonstrate here the controlled 2D patterning in single-layer transition metal halides (TMH). TMH gained significant interest since the recent discovery of intrinsic ferromagnetism in 2D vdW-materials.[4; 5] Hence, the electronic and magnetic properties of first-row 2D transition metal trihalides MX\({}_{3}\) (M = V, Cr, Mn, Fe, Co, Ni; X = Cl, Br, I) came into the focus of first-principles calculations and experimental studies.[6] In contrast, only a limited number of experimental surface-science studies are available for 2D transition metal dihalides (TMDs) so far that provide atomic scale insights on the structure, growth, and defects in real space.[7; 8; 9; 10; 11; 12; 13] This might be related to challenges in their preparation: On the one hand, some materials decompose during the growth by molecular beam epitaxy (MBE) in ultra-high vacuum (UHV) associated with a loss of halogens.[8] On the other hand, some _ex-situ_ prepared and exfoliated layers suffer from limited environmental stability in ambient conditions.[14; 15]
The study of polymorphism and intermediate stoichiometries (different from MX\({}_{2}\) and MX\({}_{3}\)) in 2D transition metal halides is just beginning.[16; 17] Analogous to the structure of TMDC, TMD monolayers can adopt both trigonal prismatic (1H) or octahedral coordination (1T) of the metal cation, see Fig S1.[18; 19] However, 1T stacking is the energetically most favorable stacking for most TMHs. In conclusion, strategies for stabilizing meta-stable polymorphs and periodic patterning for 2D materials, in general, require further research. In this respect, interface engineering using lattice mights is a powerful and promising tool, as will be outlined.
Here, we report on the growth and characterization of intrinsically patterned single-layer iron bromide (FeBr\({}_{2}\)) and cobalt bromide (CoBr\({}_{2}\)) on Au(111). Interestingly, the periodic arrangement of halogen vacancies results in an alternating coordination (6-fold and 5-fold) of the transition metal atoms across the film. The decomposition during thermal evaporation of FeBr\({}_{2}\) and CoBr\({}_{2}\) powder facilitates the formation of halogen-vacancy lattices in the respective 2D layers. The Br-vacancies are realized due to their low formation energies and stabilized by a significant reduction of the misfit strain at the TMD-Au interface. Detailed structural characterization using low-temperature scanning tunneling microscopy (STM), non-contact atomic force microscopy (nc-AFM), low-energy electron diffraction (LEED), and density-functional theory (DFT) provides comprehensive insights into the interfacial properties of intrinsically patterned FeBr\({}_{2}\) and CoBr\({}_{2}\) monolayers on Au(111). The alternating coordination number
of the transition metal atoms throughout the films, opens the way to intriguing magnetic and electronic properties, making the films interesting candidates for applications in spintronics.[20; 21] In particular, patterning FeBr\({}_{2}\) and CoBr\({}_{2}\), which are predicted to be 2D ferromagnets,[19; 22] provides new pathways to create unconventional spin textures with non-collinear spins.
## Results
### Synthesis and structure of intrinsically patterned 2D FeBr\({}_{2}\) and CoBr\({}_{2}\)
Fig. 1a-c show STM images illustrating the typical morphology of single layer FeBr\({}_{\rm x}\) films grown in UHV by thermal evaporation of anhydrous FeBr\({}_{2}\) and FeBr\({}_{3}\) powder on Au(111) kept at 450 K and 390 K, respectively. It is remarkable that the structure of both powders is almost identical, providing direct evidence for non-stoichiometric sublimation. The atomic-resolution STM images reveal a hexagonal lattice with a period of \(392\pm 30\) pm, similar to the unit cell of a FeBr\({}_{2}\) single crystal with 377.2 pm, [23] which resembles the Br atoms in the top TMD halide layer. In addition to the atomically resolved Br structure, the STM topographies exhibit a well-ordered hexagonal superstructure of dark depressions (\(60-110\) pm in depth in STM) highlighted by the red unit cells. The superstructure has a size of \(10.37\pm 0.3\) A and is mostly defect-free over the entire layers. We note the superstructure also formed at room temperature preparations. In addition to the regular superstructure, triangular-shaped irregularities in the top Br lattice are seen, which we attribute to Br-bottom defects and which will be discussed in the next section.
The superstructure can be attributed to either an ordered vacancy lattice or a Moire pattern caused by the rotation of the FeBr\({}_{2}\) lattice with respect to the Au(111) lattice. Generating periodically arranged point defects in 2D materials with this high quality is an extremely challenging task, however, and remains elusive to date.[1; 2] Nevertheless, we assign the occurrence of the superstructure to a periodic Br vacancy lattice for the following reasons: (i) Apart from the non-stoichiometric deposition of FeBr\({}_{2}\) and FeBr\({}_{3}\) resulting in the same 2D-TMD structures, we frequently observe atomic chains on Au(111) forming a mesh (see Fig. 1b), which we assign to residual Br atoms coexisting to the 2D-TMD islands. Both corroborate a thermal decomposition of the powders during sublimation, leading to Br-deficient 2D-TMD. (ii) The topographic contrast of the superstructure is mostly bias-independent over a large voltage range (-3V to 2 V, see Fig. S2) and also clearly seen in constant-height mode STM and nc-AFM images, which rules out that the superstructure is related to an electronic effect, for instance, an electronic Moire pattern arising from the rotated TMD with respect to the Au lattice, see discussion about defects below.
Figure 1: **STM of FeBr\({}_{2}\) and CoBr\({}_{2}\) with a Br vacancy lattice on Au(111).** (a-c) STM images of VBr\(-\)FeBr\({}_{2}\) at (low and high) submonolayer coverage upon deposition of FeBr\({}_{2}\) powder on Au(111) kept at 450 K. (d-e) STM images of VBr\(-\)FeBr\({}_{2}\) after deposition of 0.4 ML FeBr\({}_{3}\) powder on Au(111) kept at 390 K. (f-g) STM images of VBr\(-\)CoBr\({}_{2}\) after submonolayer deposition of CoBr\({}_{2}\) powder on Au(111) held at 390 K. While (f) shows perfect VBr\(-\)CoBr\({}_{2}\), in (g) typical triangular-shaped defects are seen similar to VBr\(-\)FeBr\({}_{2}\). STM parameters: (a) \(I=100\) pA, \(U=-1\) V; (b) \(I=100\) pA, \(U=100\) mV; (c) \(I=800\) pA, \(U=50\) mV; (d) \(I=100\) pA, \(U=-500\) mV; (e) \(I=100\) pA, \(U=-50\) mV; (f) \(I=100\) pA, \(U=5\) mV; (g) \(I=100\) pA, \(U=10\) mV.
(iii) Only one atom in the unit cell is recessed from the surface; there is no periodic modulation observed, common for Moire patterns.
Next, we tried to reproduce the experimental STM images using DFT calculations at the PBE+D3 level using VASP to corroborate the conclusion that the observed superstructures are vacancy lattices in the top halide layer. The unit cell of the pristine FeBr\({}_{2}\) structure contains 21 atoms (7 Fe and 14 Br), matching the unit cell of the superstructure observed experimentally. Initially, the pristine and defective 1T-FeBr\({}_{2}\) layers were relaxed in the gas phase and subsequently deposited onto the surface using a commensurate unit cell size. As expected, the periodic depressions are not reproduced in calculated STM images of a DFT-optimized pristine 1T-FeBr\({}_{2}\) layer on Au(111), see Fig. 2. In fact, the calculated STM image of the pristine FeBr\({}_{2}\) layer on Au(111) shows only the atomic corrugation of the Br atoms in the top layer, not even a Moire pattern, see also Fig S3. In order to identify the most likely point defect causing the periodic depressions in the experimental images, we calculated FeBr\({}_{2}\)-films with periodic Br-vacancies in the top layer, as the depressions coincide with Br-sites. The simulated constant-current and constant-height STM images show a depression at the Br-vacancy site in good agreement with experimental images, as depicted in Fig. 2c-f. Importantly, introducing the Br vacancies in the top layer leads to a periodic alternation of 5-fold and 6-fold coordinated Fe atoms throughout the films (high-lighted by light and dark-colored Fe atoms in Fig. 2a-b, which may lead to intriguing magnetic and electronic properties. Thus, the excellent match between the experimental findings and the simulated STM calculations supports that a Br-vacancy lattice in the top Br-layer of the TMD is formed. We refer to the Br-vacancy lattices as V\({}_{\rm Br}-\)FeBr\({}_{2}\) and V\({}_{\rm Br}-\)CoBr\({}_{2}\), in the following.
Interestingly, the formation of a Br vacancy lattice in the Br top layer is not unique to FeBr\({}_{2}\), but was also observed for submonolayers of CoBr\({}_{2}\) deposited on the Au(111) kept at 390 K, see Fig. 1f. We note the similarity in lattice parameters between the two TMDs, which may imply that the structures could be stabilized by the mismatch to the Au surface.[19] However, the structure is different for NiBr\({}_{2}\), likely because for NiBr\({}_{2}\) thermal decomposition is negligible.[8] In the following, we mainly focus on FeBr\({}_{2}\).
The Br-vacancy superstructure denotes a \((\sqrt{7}\times\sqrt{7})\)R19.1\({}^{\circ}\) structure with respect to the FeBr\({}_{2}\) lattice, as seen by the Fast Fourier transform of an atomically resolved STM image of a V\({}_{\rm Br}-\)FeBr\({}_{2}\) layer in Fig. 3a. Furthermore, the unit cell with \(L=\sqrt{13}a_{\rm Au}\) is commensurate to the Au lattice as proven by LEED, Fig. 3b-d. We observe a \(\pm 13.9^{\circ}\) rotation of the superstructure toward the high symmetry axes of Au lattice, as well as a \(\pm 5.21^{\circ}\) and \(\pm 26.99^{\circ}\) rotation of the FeBr\({}_{2}\) lattice toward the Au lattice (Fig. S7/Fig. S8). Hence, there are four distinct domains with \(L=\sqrt{13}a_{\rm Au}\), see Tab. S1, that result in a total of 24 domains, including the ones rotated by multiples of \(60^{\circ}\) with respect to \(L\)
Figure 2: **Structure of pristine and patterned FeBr\({}_{2}\) films on Au(111).** (a-b) DFT-optimized structural model of pristine FeBr\({}_{2}\) and V\({}_{\rm Br}-\)FeBr\({}_{2}\) on Au(111) with corresponding calculated constant-height (CH) and constant-current (CC) STM images at 500 mV in (c-f). (g-h) Experimental constant-current (420mV, 200 pA) and constant-height (420 mV) STM images of V\({}_{\rm Br}-\)FeBr\({}_{2}\) on Au(111). Color code: yellow, Au; light green, top Br; dark green, bottom Br; dark red, 6-fold coordinated Fe; red, 5-fold coordinated Fe.
see Fig. S6. In accordance, the simulated LEED pattern leads to two unique subpatterns shown in blue and green in (Fig. 3d). The simulated LEED fits the experiment perfectly and also confirms that the observed FeBr\({}_{2}\) superstructure is commensurate with the underlying Au substrate. Similar LEED patterns for V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) and V\({}_{\rm Br}\)\(-\)CoBr\({}_{2}\) in Fig. 3b-c demonstrate that halogen vacancy lattices can be obtained for several TMDs.
The apparent height of the V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) monolayers measures 1.8\(\pm\)0.1 A (Fig. S10), which is significantly lower compared with the known bulk interlayer spacing of 6.23 A.[25; 26] The lower apparent height may be explained by the difference in the local density of states of the V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) and the Au probed by the STM measurements at the applied bias voltage, as also observed for TMDCs on Au(111).[27] Our DFT calculations showed for the pristine FeBr\({}_{2}\), FeBr\({}_{2}\) with Br top vacancy, and FeBr\({}_{2}\) with Br top and bottom vacancy an averaged vertical Br-Br distance between the top and bottom layer of 2.79 A, 2.75 A, and 2.68 A, respectively. The pristine FeBr\({}_{2}\) has an averaged adsorption height of 2.91 A and 4.33 A for the bottom Br and the Fe atoms above the surface. The herringbone reconstruction of the Au(111) surface is lifted by the V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) islands, leading to an irregular orientation of the solution lines around the V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) islands.
#### Defects in single layer FeBr\({}_{2}\) and CoBr\({}_{2}\)
Next, we discuss the Br-vacancies in the top layer and other typical point defects observed in single layers FeBr\({}_{2}\) and CoBr\({}_{2}\) based on STM and nc-AFM measurements. In the constant-height nc-AFM images of FeBr\({}_{2}\), Fig. 4a-b, we assign the hexagonal lattice of bright features (positive frequency shifts) to the outer bromine atoms, which are close enough to the tip to generate repulsive forces, and the darker surrounding and depressions to the lower-lying metal atoms and Br-vacancy, respectively, whose larger distance from the tip resulted in attractive forces, see also Fig. S5. The simulated nc-AFM images based on the DFT-optimized models on Au(111) using the probe particle model with a Br at the tip apex show excellent agreement with the experimental nc-AFM data, see Fig. S4 The nc-AFM data corroborate that the periodic depressions are related to topographic effects, such as vacancies or atoms recessed into the TMD layer. Additionally, we
can exclude substitutional defects since Br and Fe atoms possess similar vdW radii. They would result in similar nc-AFM images and are not expected to appear as a depression in the halides lattice.
The triangular-shaped defect, embraced by three Br vacancies and composed of three dimmer atoms surrounding a depression, can be observed in both constant-height STM (Fig. 4g-h) and nc-AFM images with a similar appearance in Fig. 4a-b. This type of defect most frequently occurs and is independent of the preparation conditions of both 2D-TMDs, FeBr\({}_{2}\) and CoBr\({}_{2}\). The simulated nc-AFM images have revealed that in V\({}_{\rm Br}\)-FeBr\({}_{2}\) single layers, the Br atoms are observed as bright protrusions. Hence, the overlaid lattice in Fig. 4b confirms that the periodic depressions of the superstructure and, likewise also, the three dimmer atoms of the triangular-shaped defect are centered at Br-sites within the halide top layer. Consequently, the dark depression in the center of the triangular-shaped defect is either a Fe vacancy (purple triangle in Fig. 4c) or a Br vacancy (blue triangle in Fig. 4c) in the bottom layer. We note that all the triangular-shaped defects have the same orientation, i.e. are in the same half-unit cell within the layer. Therefore, only one of the proposed defects occurs in the experiment. As the formation energy for halide vacancies is significantly lower than for transition metal or for antisite defects in TMDs,[28] we assign the dark depressions in the center of the triangular defect to vacancies in the Br-bottom layer. This allows the surrounding three top-Br to relax toward the surface, consistent with a lower apparent contrast in constant-height STM and nc-AFM images. We note the triangular-shaped defects can also form clusters near step edges, see Fig. 4h. In conclusion, we dominantly see Br vacancies, in agreement that we observe a decomposition and loss of halogens during the growth of FeBr\({}_{2}\), while Fe-based vacancies are very rare, see Fig. S11. Hence we confirm that in V\({}_{\rm Br}\)-FeBr\({}_{2}\), the Fe-sublayer is intact in contrast to FeBr\({}_{3}\).
Based on similar reasoning, it can be concluded that V\({}_{\rm Br}\)-FeBr\({}_{2}\) layers exhibit 1T-stacking as opposed to a 1H-stacking. If one assumes a 1H-FeBr\({}_{2}\) model, as shown in Fig. 4d, the formation of the triangular defect would require an energetically expensive Fe vacancy, which may not result in the spontaneous formation of the triangular-shaped defects, which we see under all preparation conditions and for all three material systems, i.e. FeBr\({}_{2}\), CoBr\({}_{2}\), and FeBr\({}_{3}\).
Our next step is to simulate the appearance of the bottom layer Br vacancy using DFT calculations. We used the model for the pristine layer and selected the two Br vacancy sites, one in the top and one in the bottom layer, symmetrically positioned within the unit
Figure 4: **Defects in FeBr\({}_{2}\) identified by STM, nc-AFM, and DFT.** (a-b) Constant-height nc-AFM images of FeBr\({}_{2}\) islands with a triangular-shaped defect. The overlaid grid in (b) represents the Br lattice in the top layer. The unit cell can be divided into two parts. The triangular-shaped defect always occurs in the same half of the unit cell, which either leads as shown in (c) in a 1T-FeBr\({}_{2}\) to a Br bottom (blue) or Fe vacancy (purple) and in (d) in a hypothetical 1H-FeBr\({}_{2}\) to a Fe defect (blue). (e-f) DFT-optimized 1T-FeBr\({}_{2}\) on Au(111) with one top and one bottom Br vacancy and corresponding simulated constant-height and constant-current STM (\(U=0.5\) V) in (k-l). (g-h) Single and multiple triangular-shaped defects imaged by constant-height STM. (i-j) Along Au-steps or dislocation lines on Au(111) narrow FeBr\({}_{2}\)-ribbons are observed. These are the only structures without Br vacancies in the top layer. Color code: yellow, Au; light green, top Br; dark green, bottom Br; dark red, 6-fold coordinated Fe; red, 5-fold coordinated Fe. STM/nc-AFM parameters: (a) z=80 pm with respect to set point \(U=5\) mV, \(I=560\) pA; (g) \(U=-50\) mV, z – constant (inverted); (h) \(U=400\) mV (i) \(I=560\) pA, \(U=50\) mV; (j) \(I=400\) pA, \(U=20\) mV; inset: \(I=300\) pA, \(U=20\) mV.
cell, such that both vacancy sites are located over the same Au- adsorption site (top-site). The simulated constant-height and constant-current STM images in Fig. 4k-l show that the three Br in the top layer located around the defect has a reduced apparent height. The observation of a \(\sim\) 10 pm lowered adsorption height of the surrounding Br over the Au surfaces in the optimized structure agrees well with the experimental STM images.
We note some of the periodic Br-top-vacancies have a darker contrast (\(\sim\) 40 pm height difference) in constant-current STM experiments, as seen, for example, in Fig. S12. This is likely due to the adsorption of atoms into the vacancy, which can be H from the rest gas in UHV or free Br atoms that are covalently bound to the Au substrate, hence with a lower adsorption height above the surface and not coordinated to Fe. The latter would partially restore the 1:2 (Fe:Br) stoichiometry on the surface
### Growth mechanism of V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\)
In this section, we show how the observed bromine vacancies in FeBr\({}_{2}\) on Au(111) soften the monolayer and provide a possible way to reduce lattice-mismatch strain to the substrate. Importantly, the pristine FeBr\({}_{2}\) lattice would need to be stretched by around 4% to fit Au(111). Instead, the occurrence of periodic vacancies can lead to an expansion or compression of the lattice in TMDs,[28] e.g. the interatomic distances between transition metal cations within the layers is 3.69 A in FeBr\({}_{3}\) and 3.78 A in FeBr\({}_{2}\).[25] Therefore, we investigated the softness of the pristine (FeBr\({}_{2}\)) as well as one- (V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\)) and two-(2V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\)) Br vacancies in FeBr\({}_{2}\) monolayers based on the energetic and structural evolution as a function of biaxial strain using DFT. As compared to the automated unit cell optimization offered by VASP, this approach has an advantage in that it simplifies the identification of multiple potential energy surfaces (PESs) that may exist due to the presence of energetically similar minima. Fig. 5 shows the potential energy for freestanding monolayers as a function of superstructure size. Since the energy continuously varies without abrupt decrease, we can conclude that Fe-Br bonds are not broken, and only elastic deformation of layers is taking place. The pristine FeBr\({}_{2}\) and 2V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) monolayer show a single minimum at 9.81 A and 10.70 A, respectively. The V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) lattice features two local energy minima at 9.68 A and 10.71 A corresponding to the equilibrium and a meta-stable state. With an energy difference of less than 100 meV, the meta-stable state is experimentally accessible. Hence, the vacancies soften the FeBr\({}_{2}\) layer toward larger unit cell sizes due to incorporating 5-fold coordinated Fe atoms into the lattice. Therefore, the lattice misfit to the relaxed Au cell (10.45 A) is significantly reduced from the pristine to a meta-stable V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) from 6.1% to -2.5% and to a 2V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) lattice to -2.4%, respectively.
In order to better understand the possibility of defect formation in FeBr\({}_{2}\) single layers, we calculated the formation energies for Br vacancies in the top and bottom halide layers in gas phase. The calculated formation energy for a single Br vacancy in pristine FeBr\({}_{2}\) to obtain V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) is 45 meV/Br, while for two Br vacancies (one in the top- and one in the bottom halide layer) to get 2V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) is 56 meV/Br. Hence, the calculated formation energies are comparable to the thermal energy at room temperature, which indicates that the formation of Br defects is energetically feasible. A low formation energy is crucial for the growth of long-range ordered and high-quality vacancy lattices. It agrees well with the experimental observations that a perfect Br-vacancy lattice in the top layer is achieved independent of film size. Only near Au-steps or dislocation lines on Au(111) narrow FeBr\({}_{2}\) ribbons are observed without Br-top vacancies, see Fig. 4i-j. The formation of Br vacancies in the bottom halide layer might be diffusion-limited and hence less frequently observed, consistent with that larger islands of 2V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) are observed along layer edges. The V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\) lattices on Au(111) are stable up to around 470 K, before they desorb intact from the surface. In conclusion, we find that the growth of the periodic vacancy lattice is facilitated by strain engineering at the TMD-gold interface in combination with the low energy
Figure 5: **Growth mechanism of patterned FeBr\({}_{2}\) and vacancy formation based on insights from DFT and STM.** Potential energy curve for pristine FeBr\({}_{2}\), V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\), and 2V\({}_{\rm Br}\)\(-\)FeBr\({}_{2}\). The energy of the equilibrium state is set to 0 eV in each curve.
costs for Br vacancy formation.
## Discussion
Intrinsically patterned 2D materials are crucial for the selective functionalization by the adsorption of molecules or atoms, as well as tuning their electronic and magnetic properties. While the introduction of periodic Fe vacancies into single-layer FeBr\({}_{2}\) leads to the formation of single-layer FeBr\({}_{3}\),[29; 30] we achieved the first time the formation of periodic halogen vacancy lattices in TMDs by strain engineering. Importantly, the halogen vacancy lattices offer a way to modulate the coordination number of Fe along the 2D layer and hence alter the charge distribution, while in pristine FeBr\({}_{3}\) and FeBr\({}_{2}\) the iron coordination is constant in the entire film. We confirm previous results that 2D-FeBr\({}_{2}\) is ferromagnetic, see Fig. S9.[19] The change in coordination opens pathways to exotic spin textures with non-collinear spin, which is challenging in pristine 2D-TMDs as their properties are mostly inherited from their vdW bulks and hence have a simple spin texture. Moreover, the first-time intermediate stoichiometric FeBr\({}_{2}\) and CoBr\({}_{2}\) layers (different from MX\({}_{2}\) and MX\({}_{3}\)) are studied in this work, i.e. Fe\({}_{7}\)Br\({}_{13}\) and Co\({}_{7}\)Br\({}_{13}\). The formation of 2D materials with intrinsic vacancy patterns is likely not exclusive to FeBr\({}_{2}\) and CoBr\({}_{2}\) on Au(111). Other TMDs and eventually also TMDCs, are likely to exhibit similar patterns, provided that the formation energy of the vacancies is low and the lattice-mismatch strain is appropriately controlled in the respective system.
## Conclusions
In conclusion, we used low-temperature STM, nc-AFM, LEED, and DFT to identify the structure of single FeBr\({}_{2}\) and CoBr\({}_{2}\) layers on Au(111). Interestingly, we observe a periodic superstructure of Br vacancies in the top halide layer on both materials. We show that the formation of the regular vacancy lattice is explained by low Br-vacancy formation energy accompanied by an increased softness of the TMD layers and a significant decrease in the lattice-mismatch strain. Despite the large number of transition metal halide and chalcogenide materials available, their periodic patterning in 2D by creating point defects has been elusive so far. The versatile strategy for long-range ordered patterning is crucial for selective functionalization with molecular and atomic species, as well as tuning their electronic and magnetic properties. In particular, as FeBr\({}_{2}\) and CoBr\({}_{2}\) are 2D ferromagnets, this provides new pathways to create unconventional spin textures with non-collinear spin.
## Experimental Methods
**STM and nc-AFM**. The STM and nc-AFM measurements were carried out at T = 4.8 K in an ultrahigh vacuum (UHV) system with a base pressure lower than 3\(\times\)10\({}^{-10}\) mbar. A commercial STM/nc-AFM (Scienta-Omicron GmbH) equipped with a Nanonis control unit (SPECS GmbH) was used in this work. A qPlus tuning fork sensor[31] (\(k\approx 1800\) Nm\({}^{-1}\), \(f_{0}\approx 24.95\) kHz, \(Q\approx 13900\)) with a chemically etched tungsten tip was used to acquire most STM and nc-AFM images. For a few STM measurements, a Pt/Ir tip was used. The STM tips were prepared and formed by controlled indentation into the Au(111) surface. For STM, the tip was grounded, and the bias was applied to the sample during the measurements. All bias voltages mentioned in the manuscript refer to the sample bias. The nc-AFM measurements were recorded in frequency modulation mode, operated at a constant amplitude (\(A_{p-p}\approx 120\) pm). The amplitude was calibrated using the normalized time-averaged tunneling current method. [32; 33] For the nc-AFM experiments, an HF2LI phase-locked loop from Zurich Instruments was used. The tip was grounded during the nc-AFM measurements. The STM and nc-AFM data were processed using the WSxM software.[34]
**LEED**. The LEED patterns of FeBr\({}_{2}\) were acquired using a commercial SpectaLEED from Scienta Omicron, while the CoBr\({}_{2}\) samples were measured with an MCP-LEED from OCI Vacuum Microengineering Inc. The LEED patterns are simulated with LEEDPat4.2 software.[35]
**Sample preparation**. The Au(111) single crystal (MaTeck) was cleaned by several cycles of Argon ion sputtering followed by annealing to 650 K for 15 min. All sample temperature values provided in this manuscript refer to measurements at a thermocouple that is located in close proximity to the sample. The FeBr\({}_{2}\) (Iron(II) bromide, anhydrous, purity 98%, Alfa Aesar), FeBr\({}_{3}\) (Iron(III) bromide, anhydrous, purity \(\geq\)98%, Alfa Aesar) and CoBr\({}_{2}\) (Cobalt(II) bromide, anhydrous, purity \(\geq\)97%, Alfa Aesar) powders were evaporated in UHV from a Knudsen cell (Kentax GmbH) located in the preparation chamber. The sublimation from a quartz crucible occurred at 550 K (FeBr\({}_{2}\)/FeBr\({}_{3}\)) and 590 K (CoBr\({}_{2}\)) at an evaporation pressure of around 10\({}^{-9}\) mbar. During deposition, the Au(111) sample was kept at 390 K-450 K, the temperatures are indicated in the respective figure captions. The deposition rate of FeBr\({}_{2}\), FeBr\({}_{3}\) and CoBr\({}_{2}\) was checked by a quartz microbalance
and the powders were thoroughly degassed respectively before the sublimation to Au(111). The deposition rate of FeBr\({}_{2}\), FeBr\({}_{3}\) and CoBr\({}_{2}\) used in this work are 0.02-0.07 ML/min, respectively. The coverage of the FeBr\({}_{2}\) film was controlled by varying the evaporation time.
**Calculations**. The first principles calculations reported here are performed within the framework of the density functional theory (DFT), employing the VASP code.[36; 37] In the periodic VASP calculations, the projector augmented wave (PAW)[38] method was employed to describe the core electrons. The exchange-correlation energy and potential are treated within the spin-polarized generalized gradient approximation (GGA) using the exchange-correlation functional of Perdew-Burke-Ernzerhof (PBE)[39] with the DFT-D3 dispersive corrections[40] (using Becke-Johnson damping). The energy cutoff for the plane wave is kept at 400 eV for the ground state calculations. Energies were converged to 10\({}^{-5}\) eV and geometries were relaxed until the forces on all atoms were below 0.001 eV/A, respectively. A Methfessel-Paxton-smearing [41] of first order with smearing parameter \(\sigma\)=0.2 eV was used. To model the FeBr\({}_{2}\) structures in the gas phase and on the substrate, we have constructed a superstructure corresponding to a \(\vec{v}_{\text{FeBr}_{2}}=(3,1)\) on \(\vec{v}_{\text{Au}}=(4,1)\) domain. The pristine FeBr\({}_{2}\) structure contains 21 atoms (7 Fe and 14 Br atoms). The gold slab consists of six layers, with the bottom three layers fixed to the bulk geometry and the remaining three layers free to relax. To prevent interactions between the slab and its periodic images and to account for the finite size of the slab model, gas phase systems were computed with 16 A of vacuum space and surface calculations were computed with 30 A of vacuum space in the \(z\)-direction. A set of (6\(\times\)6\(\times\)1) \(\Gamma\) centered k-point sampling are used for both gas phase and surface calculations.
The formation energy E\({}_{\text{for}}\) of defects is calculated using the equation E\({}_{\text{for}}\)=E\({}_{\text{defective}}\) - E\({}_{\text{FeBr}_{2}}\) + \(\sum n_{i}\mu\), [28] where E\({}_{\text{defective}}\) and E\({}_{\text{FeBr}_{2}}\) represent the total energy of the defective and pristine FeBr\({}_{2}\) single layers, and n\({}_{i}\) and \(\mu\) are the number and chemical potential of the removed atom, respectively. To calculate \(\mu\), we use the relation \(\mu_{\text{FeBr}_{2}}=\mu_{\text{Fe}}\) + 2\(\mu_{\text{Br}}\), where \(\mu_{\text{Fe}}\), \(\mu_{\text{Br}}\), and \(\mu_{\text{FeBr}_{2}}\) are the total energies of Fe, Br, and single-layer FeBr\({}_{2}\). For calculating \(\mu_{\text{Fe}}\), we assume the stable bulk form of Fe(bec-Fe) at the Fe-rich limit.[19] Hence, \(\mu_{\text{Br}}\) can be evaluated from Fe-bulk and the total energy of single-layer FeBr\({}_{2}\).
Constant-height and constant-current STM images were simulated within the Tersoff-Hamann model.[42; 43] The tip was placed \(\sim 2\) A over the plane of the top Br layer for constant-height images at different bias voltages. For the constant-current STM images, the isodensity values are adjusted from 10\({}^{-3}\) to 114 electron/A\({}^{3}\). Constant-height nc-AFM simulations were based on the probe particle model established by Hapala, et al. [44] and widely used to model the nc-AFM imaging process with functionalized tips. We assumed a Br at the tip apex as the tip was intentionally crashed softly into the FeBr\({}_{2}\) monolayer. Frequency shift images were calculated for the Br-functionalized tip assuming a harmonic spring stiffness of 0.5 N/m and an effective charge of \(-0.03e\) for the probe particle.
## Acknowledgements
This work was funded by the German Research Foundation (DFG) through the SFB 953 _Synthetic Carbon Allotropes_ (project number 182849149) and the Interdisciplinary Center for Molecular Materials (ICMM) at the Friedrich-Alexander-Universitat Erlangen-Nurnberg. We thank Andreas Dorr, Dengyuan Li and Sajjan Mohammad for their experimental support and discussions.
## Notes
We became aware that in a recently published preprint, similar STM measurements of FeBr\({}_{2}\) on Au(111) were presented.[45] However, a different conclusion concerning the structure and chemical composition was reached.
|
2310.02459 | Distributionally Safe Reinforcement Learning under Model Uncertainty: A
Single-Level Approach by Differentiable Convex Programming | Safety assurance is uncompromisable for safety-critical environments with the
presence of drastic model uncertainties (e.g., distributional shift),
especially with humans in the loop. However, incorporating uncertainty in safe
learning will naturally lead to a bi-level problem, where at the lower level
the (worst-case) safety constraint is evaluated within the uncertainty
ambiguity set. In this paper, we present a tractable distributionally safe
reinforcement learning framework to enforce safety under a distributional shift
measured by a Wasserstein metric. To improve the tractability, we first use
duality theory to transform the lower-level optimization from
infinite-dimensional probability space where distributional shift is measured,
to a finite-dimensional parametric space. Moreover, by differentiable convex
programming, the bi-level safe learning problem is further reduced to a
single-level one with two sequential computationally efficient modules: a
convex quadratic program to guarantee safety followed by a projected gradient
ascent to simultaneously find the worst-case uncertainty. This end-to-end
differentiable framework with safety constraints, to the best of our knowledge,
is the first tractable single-level solution to address distributional safety.
We test our approach on first and second-order systems with varying
complexities and compare our results with the uncertainty-agnostic policies,
where our approach demonstrates a significant improvement on safety guarantees. | Alaa Eddine Chriat, Chuangchuang Sun | 2023-10-03T22:05:05Z | http://arxiv.org/abs/2310.02459v1 | # Distributionally Safe Reinforcement Learning under Model Uncertainty:
###### Abstract
Safety assurance is uncompromisable for safety-critical environments with the presence of drastic model uncertainties (e.g., distributional shift), especially with humans in the loop. However, incorporating uncertainty in safe learning will naturally lead to a bi-level problem, where at the lower level the (worst-case) safety constraint is evaluated within the uncertainty ambiguity set. In this paper, we present a tractable distributionally safe reinforcement learning framework to enforce safety under a distributional shift measured by a Wasserstein metric. To improve the tractability, we first use duality theory to transform the lower-level optimization from infinite-dimensional probability space where distributional shift is measured, to a finite-dimensional parametric space. Moreover, by differentiable convex programming, the bi-level safe learning problem is further reduced to a single-level one with two sequential computationally efficient modules: a convex quadratic program to guarantee safety followed by a projected gradient ascent to simultaneously find the worst-case uncertainty. This end-to-end differentiable framework with safety constraints, to the best of our knowledge, is the first tractable single-level solution to address distributional safety. We test our approach on first and second-order systems with varying complexities and compare our results with the uncertainty-agnostic policies, where our approach demonstrates a significant improvement on safety guarantees.
## I Introduction
In many real-world applications, there often can be unmodelled dynamics and uncertainties, both internally (e.g., systems failure/ dysfunctionality, perceptional noise) and externally (e.g., gust, tough terrain). Those uncertainties make control problems and training processes more challenging and susceptible to errors, potentially leading to undesirable/ disastrous outcomes. Moreover, there are often more restrictive settings where environmental configurations change more drastically. For example, there can always be "unknown unknowns" during the deployment of autonomous systems, such as off-road autonomy, advanced air mobility, etc. Also, one of the fundamental challenges towards (super) human-level intelligence for robots is the ability of trained policies to generalize beyond the specific environments they initially ever encountered during training. However, with the presence of those pervasive uncertainties, safety is not compromisable in many safety-critical scenarios with human health/ lives at stake. For example, in self-driving cars, decision-making in real-time in complex and uncertain environments can be very challenging, considering various surroundings, moving pedestrians, etc.
Learning-based control has attracted lots of attention to combine the advantages of machine learning and modern control theory. On one hand, data-driven machine learning methods have arisen to learn control policies from interactions with the environments without an accurate prior model. On the other hand, while control theory has been extensively investigated and in general has rigorous formal guarantees of performance, it often needs relatively accurate models, which are expensive, if not impossible to obtain. To achieve robust and safe decision-making and control with the presence of uncertainties, there has been a large volume of learning-based approaches in the literature. In general, robust learning and control treats uncertainty with a soft criterion and aims to get the performant policy under the (worst-case) uncertainty, yielding a minimax optimization problem [1, 2, 3, 4, 5, 6, 7]. While they can provide some degrees of safety, it is not guaranteed. On the other hand, in safe learning, safety is imposed as a hard constraint [8, 9, 10, 11, 12, 13, 14, 15]. However, incorporating uncertainty in safe learning will naturally lead to a bi-level problem, where at the lower level the (worst-case) safety constraint is evaluated within the uncertainty ambiguity set.
In this paper, we consider a distributionally safe reinforcement learning (DSRL) problem with a distributional shift measured by a Wasserstein metric. Such distributional shift quantifies drastic uncertainties and it leads to a distributional optimization at the lower level of the overall safe learning problem to evaluate the worst-case safety constraint. Both the distributional optimization and the bi-level nature make the distributional safe learning problem intractable. We first use duality theory to transform the lower-level optimization problem into the finite-dimensional parametric space instead of infinite-dimensional probability space. Moreover, we further reduce the bi-level safe learning problem as single-level by differentiable convex programming [16] to sequentially find the worst case uncertainty and simultaneously guarantee that the safety constraints are satisfied. Both modules are computationally efficient, with the former as a projected gradient ascent and the latter as a convex quadratic programming. Moreover, this constrained learning pipeline is end-to-end differentiable. To the best of our knowledge, this is the first attempt to address distributional safety with a tractable single-level solution.
The rest of the paper is organized as follows. Section II reviews the preliminaries needed to complete this work. In section III-A, we formulate our distributionally safe approach using distributional robust optimization. In section III-B we reduce the problem into a single-level learning-based approach. Section IV contains numerical simulations and
results comparisons, and section V has concluding notes of our work.
### _Related works on safe/robust learning_
There are multiple ways to mitigate quantified uncertainty in the context of safe/ robust learning, such as [17, 18], those focusing on the area of robotics [19, 20], and comprehensive surveys [21, 22]. Specifically, the uncertainty variable can be treated as a context variable representing different tasks and can be subsequently solved as multi-task or meta-learning problems [23, 24]. Moreover, given optimization theories, robust learning algorithms have also been developed based on interior point methods [8, 9], successive convexification [10] and (augmented) Lagrangian methods [11, 12, 13, 14, 15]. In learning-based control, Lyapunov theory, model predictive control, and control barrier functions are also employed to develop robust learning algorithms [25, 26, 27, 28, 29, 30]. Additionally, if the uncertainty is considered in the worst-case scenario, minimax policy optimization [1, 2, 3] or its generalization Stackelberg games [4, 5, 6, 7] are often the frameworks to promote resilience. Other works include meta-adaptive nonlinear control integrating learning modules for fast adaptation in unpredictive settings [31, 32].
In terms of robust learning and control under _distributional shift_, model-based approaches [33] such as approximate dynamic programming [34, 35] and model predictive control [36, 37, 38] have been proposed, either under a Wasserstein metric or chance-constrained criterion. In the model-free regime, one line of work is to generate environments/ tasks with distributional shifts for policy training to achieve robustness [39, 40, 41]. Moreover, to balance the worst-case (robustness) and average performance, [42] trains policies over task groups by adding regularization to the worst possible outcomes. In offline RL [43, 44] proposes a distributionally robust formulation with tabular Markov decision processes with an uncertainty set specified by the Kullback-Leibler (KL) divergence. Those approaches are often based on duality in the distributionally robust optimization theories [45, 46, 47].
## II Preliminaries
### _High-order CBF_
In control theory, control barrier functions play a crucial role in ensuring that a dynamic system can accomplish target objectives while ensuring that safety is not compromised, a control barrier function essentially evaluates the system's safety and returns a scalar quantity. Consequently, our objective is to determine a control input that keeps the system within the safety boundaries as determined by the control barrier function. Mathematically, consider the nonlinear control-affine system:
\[\dot{x}(t)=f(x(t))+g(x(t))u(t) \tag{1}\]
where \(f\) and \(g\) are globally Lipschitz, \(x\in\mathbb{R}^{n}\) and \(u\in\mathbb{R}^{m}\) are the states and control inputs, respectively, constrained in closed sets, with initial condition \(x(t_{0})=x_{0}\). The time dependency on \(t\) can be omitted for notational simplicity.
**Definition 1**: _[_28_]_\(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a barrier function for the set \(C=\{x\in\mathbb{R}^{n}:h(x)\geqslant 0\}\) if \(\exists\) an extended class-\(\mathcal{K}\) function \(\alpha(\bullet)\) such that:_
\[\begin{split}\sup_{u\in U}[L_{f}h(x)+L_{g}h(x)u+\alpha(h(x))] \geqslant 0\\ \inf_{\text{int}(C)}[\alpha(h(x))]\geqslant 0\quad\text{ and }\quad \lim_{\partial C}\alpha(h(x))=0\end{split} \tag{2}\]
Because not all systems are first-order in inputs, we can use higher-order control barrier functions to constrain higher-order systems.
**Definition 2**: _[_48_]_For the nonlinear system (1) with the \(m^{th}\) differentiable function \(h(x)\) as a constraint, we define a sequence of functions \(\psi_{i}\) with \(i\in\{1,2,...,m\}\), starting from \(\psi_{0}=h(x)\): \(\psi_{i}(x,t)=\dot{\psi}_{i-1}(x,t)+\alpha_{i}\left(\psi_{i-1}(x,t)\right)\), and define \(C_{i}(t)\) sequence of safe sets associated with each \(\psi_{i}\): \(C_{i}(t)=\{x\in\mathbb{R}^{n}:\psi_{i-1}(x,t)\geqslant 0\}\). The function \(h(x)\) is a high order control barrier function if there exist extended class-\(\mathcal{K}\) functions \(\alpha_{i}(\bullet)\) such that \(\psi_{m}(x,t)\geqslant 0\)._
Control barrier functions offer significant promise in the development of secure dynamic systems, with applications in various robotic and autonomous systems.
### _Reinforcement learning: DDPG_
Reinforcement Learning agents learn to make sequential decisions by interacting with an environment to achieve a certain goal. One popular RL approach is Deep Deterministic Policy Gradient (DDPG), an off-policy algorithm that addresses continuous action space systems. In DDPG, the agent employs two neural network architectures, the actor and the critic. The actor-network represents the agent's policy, mapping the observed state directly to a specific action in the continuous action space \(\mu\left(s\mid\theta^{\mu}\right)\) and aiming to maximize the cumulative reward function as \(J(\theta)=\sum_{k=1}^{T}\gamma^{k}R(s_{k},a_{k})\), where \(\gamma\) and \(R\) are the discount factor and rewards, respectively. The critic-network, on the other hand, evaluates the quality of the actions chosen by the actor \(Q\left(s,a\mid\theta^{Q}\right)\). It estimates the expected cumulative reward, known as the Q-value, by taking both the current state and the action as input \(\mathcal{L}(\theta)=\mathbb{E}_{s,a,r,s^{\prime}}\left((y-Q(s,a|\theta^{Q}) \right)^{2}\) and \(y=R+\gamma Q^{\prime}\left(s,\mu^{\prime}(s\mid\theta^{\mu^{\prime}})\mid \theta^{Q^{\prime}}\right)\). These networks work together to guide the agent into achieving the desired goal. DDPG employs a replay buffer, which stores a tuple of past experiences \(\langle S,A,\mathcal{R},S^{\prime}\rangle\), where \(S\) is a set of agent states in the environment, \(A\) is a set of agent actions, \(\mathcal{R}\) is the reward function, and \(S^{\prime}\) is a set of agent next states. During training, the agent samples random batches of experiences from this buffer to update the actor and critic networks. DDPG's efficiency allows training agents to perform accurate and complex tasks in a wide range of real-world scenarios.
### _Differentiable convex programming_
Differentiable convex programming is a powerful technique that allows computation of the gradients of an optimization problem objective function with respect to the parameters of the problem, by taking matrix differentiation of the Karush-Kuhn-Tucker (KKT) conditions. A notable example of a differentiable optimization method is CVXlayers [49],
which incorporates differentiable optimization problems in a disciplined fashion. In a broader sense, this methodology can be applied to differentiate through disciplined convex programs by initially mapping them into cone programs [50], computing the gradients, and subsequently mapping back to the original problem. It is worth noting that the convex quadratic program (QP) can be differentiated through the KKT conditions [16], which serve as equivalent conditions for global optimality. According to the KKT conditions, at the optimal solution, the gradient of the Lagrangian function with respect to the program's input and parameters must be zero. Consequently, by taking the partial derivative of the Lagrangian function with respect to the input and extending it through the chain rule to the program's parameters, their gradients can be obtained. For a generalized QP:
\[\min_{x}\quad\frac{1}{2}x^{T}Qx+q^{T}x,\quad\text{ s.t. }\ Ax=b\quad Cx\leq h, \tag{3}\]
we can write the Lagrangian of the problem as:
\[L(z,\nu,\lambda)=\frac{1}{2}z^{T}Qz+q^{T}z+\nu^{T}(Az-b)+\lambda^{T}(Gz-h) \tag{4}\]
where \(\nu\) are the dual variables on the equality constraints and \(\lambda\geq 0\) are the dual variables on the inequality constraints. The KKT conditions for stationarity, primal feasibility, and complementary slackness are:
\[\begin{split} Qz^{\star}+q+A^{T}\nu^{\star}+G^{T}\lambda^{\star} &=0\\ Az^{\star}-b&=0\\ D\left(\lambda^{\star}\right)\left(Gz^{\star}-h\right)& =0\end{split} \tag{5}\]
By differentiating these conditions, we can shape the Jacobian of the problem as follows.
\[\left[\begin{array}{c}d_{z}\\ d_{\lambda}\\ d_{\nu}\end{array}\right]=-\left[\begin{array}{cc}Q&G^{T}D\left(\lambda^{ \star}\right)&A^{T}\\ G&D\left(Gz^{\star}-h\right)&0\\ A&0&0\end{array}\right]^{-1}\left[\begin{array}{c}\left(\frac{\partial\ell}{ \partial z^{\star}}\right)^{T}\\ 0\\ 0\end{array}\right] \tag{6}\]
Using the chain rule, we can get the derivatives of any loss function with respect to any of the parameters in the QP.
## III Distributionally Safe Reinforcement Learning under Model Uncertainty
Stochastic optimization solves problems with stochastic variables under a prior distribution. However, in many real-world engineering applications, such a prior distribution is often inaccurate or can change in the deployment. As a result, Distributionally Robust Optimization (DRO) addresses optimization problems with the presence of uncertainty, whose prior probability distribution might shift. Compared to robust optimization admitting parameters within certain sets or stochastic optimization admitting a distribution of parameters, DRO takes a more conservative formulation to solve a more challenging problem, where the distribution of the parameters shifts from the prior. Then following the minimax formulation, DRO aims to minimize the objective function with a worst-case distributional shift in the ambiguity set. This general strategy provides an adaptable/ resilient solution that exhibits strong performance in different/ unseen testing environments.
### _Distributional safety under model uncertainty using the Wasserstein metric: a bi-level problem_
Control barrier functions have found application in maintaining the safety constraints of control systems. In the realm of reinforcement learning, CBFs can guarantee that the actions taken by an agent comply with safety constraints, while maximizing a cumulative reward function. There are several methods to incorporate the CBF in reinforcement learning. For example, CBF can be incorporated in the reward function of the agent. Specifically, rewarding safe actions and penalizing the agent for violating the safety constraints leads to an agent that is biased towards safe actions. Another way of using CBF in RL is to incorporate the CBF module as a safety shield for the RL actions. The CBF will then monitor the actions and rectify any action that violates the safety constraint to the closest safe action [51]. In the presence of model uncertainty or noise, the conventional CBF approach does not hold the same efficacy. Some work addressed robust CBFs by estimating the dynamics noise and incorporating it directly into the CBF [52], while some other works have used the conditional value at risk of the constraints to build a safe CBF [53].
In this section, we aim to integrate distributional shift measured by a Wasserstein metric into CBF to guarantee the safety of the system under the model uncertainty. Consider the nonlinear system with additive uncertainty:
\[\dot{x}(t)=f(x(t))+g(x(t))u(t)+\omega \tag{7}\]
where \(x\in\mathbb{R}^{n}\) and \(u\in\mathbb{R}^{m}\) and \(\omega\sim p_{0}(\omega)\) are the states, control inputs, and the disturbance acting on the system, respectively. Here \(p_{0}(\omega)\) is the prior probability distribution of the model uncertainty. The metric to measure the distance between two probability distributions, i.e., distributional shift, is the Wasserstein metric (\(W_{d}(\bullet,\bullet)\)), which is often the case in many works addressing distributional robustness [54]. Comparing the Wasserstein metric with the KL divergence [55], the most obvious advantage of the former is symmetry (i.e., \(W_{d}(p_{1},p_{2})=W_{d}(p_{2},p_{1})\)), which however does not hold for the latter in general. Then the ambiguity set of the perturbed distribution \(p(\omega)\) from the nominal distribution \(p_{0}(\omega)\) under a Wasserstein metric [54] is expressed as \(\mathcal{P}=\{p(\omega)\in p(\mathbb{W})|W_{d}\big{(}p(\omega),p_{0}(\omega) \big{)}\leq\rho\}\), with \(\rho\) as the threshold of such a shift. Moreover, \(\mathbb{W}\) is the support of \(p(M)\) and is assumed to be convex and closed, which is a common assumption in related works.
The objective of safe reinforcement learning is to generate a control input \(u_{r}\) to achieve certain goals characterized by the reward function in the MDP while satisfying safety constraints. The typical way for goal-reaching robotic navigation is to drive a potential function \(V(x)\) to be zero, with \(V(x)=\left\|x-x_{f}\right\|_{2}^{2}\). The RL policy will generate an action without safety guarantee first as \(u_{\text{RL}}(t)=\mu\left(x_{t}\mid\theta^{\mu}\right)+\mathcal{N}_{t}\), where \(\mu(\bullet\mid\theta^{\mu})\) is a policy parameterized by deep neural networks \(\theta^{\mu}\) and \(\mathcal{N}\) is a random process for promoting exploration. Then the barrier function method [28] ensures that the action
\(u_{\text{RL}}\) complies with safety constraints1. With the presence of the model uncertainty \(\omega\) under distributional shift, the CBF of the safety constraint \(h(x)\geq 0\), defined in definition(1), can be rewritten using the chain rule as follows:
Footnote 1: The state \(x\) and \(s\), the control/ action \(u\) and \(a\), terminologies in control theory and reinforcement learning, are used interchangeably here.
\[\begin{split}&\sup_{u\in\mathbb{U}}\inf_{\begin{subarray}{c} \omega\sim p(\omega)\\ p(\omega)\in\mathcal{P}\end{subarray}}\left[\frac{\partial h(x)}{\partial x}( \dot{x}(t))+\alpha(h(x))\right]\geqslant 0\\ =&\sup_{u\in\mathbb{U}}\inf_{\begin{subarray}{c} \omega\sim p(\omega)\\ p(\omega)\in\mathcal{P}\end{subarray}}\left[\frac{\partial h(x)}{\partial x}( f(x)+g(x)u+\omega)+\alpha(h(x))}{\underset{:=H(x,u,\omega)}{\underset{:=H(x,u,\omega)}{ \underset{}{\underset{}{\underset{}{\underset{}{\underset{}{\underset{}{ \underset{}}{\right}}}}}}}}}\right]\geqslant 0\end{split} \tag{8}\]
where \(\alpha(\bullet)\) is an extended class-\(\mathcal{K}\) function. For simplicity, we use a linear function \(\kappa(\bullet)\) as the class-\(\mathcal{K}\) function. The infimum tries to evaluate the worst-case safety constraint while the supremum to find a feasible control input such that the safety constraint is still satisfied even with the worst perturbation. While the worst-case criterion is adopted here (possibly with over-conservatism), other ones such as the chance-constrained criterion are compatible with this framework as well.
Using (8) as the safety shield to solve for the rectified action \(u_{\text{R}}\) under the worst-case distribution of \(\omega\) leading to the following formulation
\[\begin{split}&\min_{u_{r}\in[\underline{u},\bar{u}]}||u_{r}-u_{ \text{RL}}||^{2}\\ &\text{s.t.}\quad\inf_{p(\omega)\in\mathcal{P}}\mathbb{E}_{ \omega\sim p(\omega)}\Big{\{}H(x,u,\omega)\big{|}W_{d}\big{(}p(\omega),p_{0}( \omega)\big{)}\leq\rho\Big{\}}\geqslant 0\end{split} \tag{9}\]
which is a bi-level programming. The supreme in (8) disappears as part of the feasibility of the high-level minimization in (9). However, addressing this bi-level DRO problem can be challenging due to the high computational complexity of its bi-level nature on top of an already challenging distributionally constrained optimization problem at the lower level.
For the low-level DRO, obtaining the infimum over the expectation of the distributions \(p(\omega)\) has been proved difficult. Various techniques, such as convex relaxations, scenario approximations, or sample-based methods, are used to handle the computational challenges associated with DRO. [34, 46, 56]. That is because the low-level infimum problem requires a search for the worst-case safety violation \(H(x,u,\omega)\) within the infinite-dimensional probability space \(\mathcal{P}\), making it intractable to solve [57, 58]. As a result, we will demonstrate how we can solve the low-level safety estimation problem in an efficient way based on distributionally robust optimization [47]. Using the Kantorovich duality [59, 60], the infimum in (9) can be further transformed into a more tractable problem
\[\begin{split}&\min_{u_{r}\in[\underline{u},\bar{u}]}||u_{r}-u_{ \text{RL}}||^{2}\\ &\text{s.t.}\quad\mathbb{E}_{\omega_{0}\sim p_{0}(\omega)}\inf_{ \omega}\Big{\{}H(x,u,\omega)\big{|}d\big{(}\omega,\omega_{0}\big{)}\leq\rho_{d }\Big{\}}\geqslant 0\end{split} \tag{10}\]
where \(d(\omega,\omega_{0})=\|\omega-\omega_{0}\|_{p}^{2},p\geq 1\) denotes the the "cost" for an adversary to perturb \(\omega_{0}\) to \(\omega\)[45]. This equivalent dual reformulation [45, 60] allows for solving the infimum over the noisy state \(\omega\in\mathbb{R}^{n}\), a _parametric finite-dimensional space_ instead of over \(p(w)\) in the infinite-dimensional probability space \(\mathcal{P}\) in (9). The radius \(\rho_{d}\) can be evaluated with Monte-Carlo simulation based on the original ambiguity set measured by the Wasserstein metric. However, even with the complexity of DRO alleviated, the problem is still bi-level, and hence in our case, we propose a single-level reformulation to improve tractability without compromising safety.
### _Reduce the bi-level learning to single-level: a differentiable convex programming-based approach_
To address the complexity of the bi-level optimization problem (10), we will take advantage of differentiable programming to reduce the problem into a single-level optimization problem. The basic idea is that for general optimization problems, there is a trade-off between optimizing objectives and satisfying constraints. Specifically here, when the dynamics perturbation \(\omega\) makes the constraint satisfaction more challenging, then the control input will lean more on constraint satisfaction rather than optimizing the objective. Eventually, the effect of the model uncertainty will be quantified by the objective function values. For example, for a variable value not satisfying the constraints, the objective value tends to go to infinity. As a result, we move the infimum of the constraints to the maximization of the objective function. For safe reinforcement learning, the loss function is the negative of the discounted cumulative reward with the safety-proofed actions. We decompose the bi-level problem in (10) into two parts: the safety calibration to get \(u_{r}\) followed by the update of \(\omega\) to get the worst case uncertainty. Specifically, for the former, we can start by solving for the optimal action subject to the sample noise under a prior distribution, i.e., \(\omega_{0}\sim p_{0}(w)\) via the simple CBF-based quadratic program:
\[\begin{split}&\min_{u_{r}\in[\underline{u},\bar{u}]}||u_{r}-u_{ \text{RL}}||^{2}\\ &\text{s.t.}\quad\quad\mathbb{E}_{\omega_{0}\sim p_{0}(\omega)}H (x,u,\omega_{0})\geqslant 0\end{split} \tag{11}\]
Note that in one-step propagation, multiple samples \(\omega_{0}\)'s will be sampled as parameters to evaluate the mean in (11). Then we can propagate our plant using the rectified action \(u_{r}\), and calculate the episodic loss function as
\[\mathcal{L}=-\sum_{k=1}^{T}\gamma^{k}R(x_{k},u_{r,k}). \tag{12}\]
By the end of each episode, we can extract a sample of the noise \(\omega_{0}\) and the gradient of the loss function with respect to the rectified action \(\frac{\partial\mathcal{L}}{\partial u_{r}}\). As discussed before, the model uncertainty will try to maximize the loss function, which will be used to find the worst-case perturbation \(\omega\) by gradient-based algorithms. Therefore, we can use the chain rule to obtain the gradient of the loss function with respect to the noise \(\omega\) as:
\[\frac{\partial\mathcal{L}}{\partial\omega}=\frac{\partial\mathcal{L}}{\partial u _{r}}\frac{\partial u_{r}}{\partial\omega} \tag{13}\]
To obtain the second term, differentiable convex programming will be leveraged for solving (11) and extract the gradient of
the rectified action (\(u_{r}\), the variable) with respect to the noise (\(\omega\), the parameter) as \(\frac{\partial u_{r}}{\partial\omega}\), evaluated at the samples \(\omega_{0}\)'s for trajectory rollout. Note that the gradients flow through the QP (11) so that the whole pipeline is end-to-end differentiable. Since we aim to find the worst-case uncertainty dynamics within the ambiguity set \(\mathcal{B}=\{\omega|\mathbb{E}_{\omega_{0}\sim p_{0}(w)}d(\omega-\omega_{0}) \leqslant\rho_{d}\}\) by maximizing the loss function, projected gradient ascent will be employed to update the \(\omega\) as
\[\omega\leftarrow\text{Proj}_{\mathcal{B}}\bigg{[}\omega+\alpha\frac{\partial \mathcal{L}}{\partial\omega}\bigg{]}. \tag{14}\]
The projection of \(\bar{\omega}\) onto a ball can be done analytically by performing a change of basis to the \(l_{2}\) ball towards the center \(\mathbb{E}(\omega_{0})\). Then projecting \(\bar{\omega}\) onto the closest point within the ball leads to
\[\text{Proj}_{\mathcal{B}}(\bar{\omega})=\mathbb{E}(\omega_{0})+\rho_{d}\frac{ \bar{\omega}-\mathbb{E}(\omega_{0})}{\max(\rho_{d},||\bar{\omega}-\mathbb{E}( \omega_{0})||)} \tag{15}\]
Fig.1 illustrates the workflow of the proposed approach. Algorithm1 summarizes the overall distributional safe RL framework with DDPG [61] and the learnable worst-case noise \(\omega\) under a distributional shift. In summary, we transformed bi-level programming for distributional safety into a sequential problem: a convex quadratic program followed by a projected gradient ascent learning module for the worst-case noise. Both parts are computationally cheap and differentiable convex programming is employed for end-to-end differentiability to update both policy parameters with constraints and worst-case uncertainty.
## IV Simulations and Results
In this section, we assess the performance of the suggested safe reinforcement learning approach using two cases of Dubin's car, a first-order and second-order system, and a simplified 3D quadcopter. Our goal is to compare the performance of the deterministically learned policy in the presence of dynamics uncertainty, with the distributionally safe learned policy. The comparison is carried out under the condition that all other settings and parameters are kept identical to ensure a consistent assessment. By carrying out this comparison, we aim to show the effectiveness of the distributionally safe approach in dealing with uncertain environments.
#### Iv-1 **First-Order Dubins Car**
The first simulation is carried out using the first-order Dubins car environment with the following kinematics(16).
\[\left(\begin{array}{c}\dot{x}\\ \dot{y}\\ \dot{\theta}\end{array}\right)=\left[\begin{array}{ccc}\cos\theta&-\sin \theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{array}\right]\left(\begin{array}{c}u_{x}\\ u_{y}\\ u_{\theta}\end{array}\right), \tag{16}\]
where \(u_{x},u_{y},u_{\theta}\) are the input velocity along the \(x\) axis, the sideways input velocity, and the angular input velocity, respectively. To reach its final destination \(x_{f}\) from an initial state \(x_{o}\), we use a reward that penalizes the squared distance between the car and the goal state multiplied by a coefficient as \(d\left\|x-x_{f}\right\|_{2}^{2}\), and penalizes every time step by a constant \(s\) for minimum time goal-reaching. Hence, the reward is defined as \(R=-d\left\|x-x_{f}\right\|_{2}^{2}-s\), with \(d>0\) and \(s\geq 0\).
Fig. 2 presents a comparison between trajectories generated by a deterministically learned policy, and distributionally safe policies before and after training converges.
```
1:Require: Environment setting, learning rates \(\alpha,\beta\), discount factor \(\gamma\), target network update rate \(\tau\), and samples \(\omega_{0}\sim p_{0}(\omega)\).
2:Initialize critic network \(Q\left(s,a\mid\theta^{Q}\right)\), actor \(\mu\left(s\mid\theta^{\mu}\right)\) with weights \(\theta^{Q}\) and \(\theta^{\mu}\).
3:Initialize target network \(Q^{\prime}\) and \(\mu^{\prime}\) with weights \(\theta^{Q^{\prime}}\leftarrow\theta^{Q},\theta^{\mu^{\prime}}\leftarrow\theta^{ \mu}\).
4:Initialize replay buffer \(\mathcal{D}\).
5:Initialize a random \(\omega\).
6:forpisode \(=1,\ldots,M\)do
7: Initialize a random noise \(\omega_{0}\sim p_{0}(\omega)\) from samples.
8: Initialize a random process \(\mathcal{N}\) for action exploration.
9: Receive initial observation state \(s_{1}\).
10:for\(t=1,\ldots,T\)do
11: Select action \(a_{t}=\mu\left(s_{t}\mid\theta^{\mu}\right)+\mathcal{N}_{t}\) according to the current policy and exploration noise.
12: Get rectified action \(a_{t_{R}}\) via (11) and samples \(\omega_{0}\).
13: Execute action \(a_{t_{R}}\) and observe reward \(R_{t}\) and new state \(s_{t+1}\).
14: Store transition \((s_{t},a_{t},a_{t_{R}},R_{t},s_{t+1},\omega_{0})\) in \(\mathcal{D}\).
15: Sample a random mini-batch of \(N\) transitions \((s_{t},a_{t},a_{t_{R}},R_{t},s_{t+1},\omega_{0})\) from \(\mathcal{D}\).
16: Update critic using learning rate \(\beta\).
17: Update the actor \(\theta^{\mu}\) using the gradient ascent with the sampled gradient of the return in (12).
18:\(\theta^{\mu}\leftarrow\theta^{\mu}+\alpha\nabla_{\theta^{\mu}}J(\theta)\).
19: Update the target networks with rate \(\tau\).
20:\(\theta^{\prime}\leftarrow\tau\theta+(1-\tau)\theta^{\prime}\).
21:endfor
22: Sample noise \(\omega_{0}\) from all transitions \((s_{t},a_{t},a_{t_{R}},R_{t},s_{t+1},\omega_{0})\) in \(\mathcal{D}\) and evaluate the loss \(\mathcal{L}\) and its gradient \(\frac{\partial\mathcal{L}}{\partial\omega}\).
23: Update \(\omega\) using the projected gradient ascent \(\omega\leftarrow\text{Proj}_{\mathcal{B}}\big{[}\omega+\alpha\frac{\partial \mathcal{L}}{\partial\omega}\big{]}\) with the gradeient evaluation in (13) and the projection in (15).
24:endfor
25:Return:\(\theta^{\mu},\theta^{Q},\omega\).
```
**Algorithm 1** Distributionally Safe Reinforcement Learning
We can see that although the deterministic policy is trying to avoid the obstacle by swerving in the same shape as the obstacle, it still violated the safety constraint because it could not take into account the shift/ noise present in the dynamics of the system. The DSRL policy adapted to the noise after several episodes of training and started avoiding the obstacle
Fig. 1: Overview of the single-level, end-to-end differentiable convex programming learning-based approach for distributionally safe reinforcement learning using a Wasserstein metric. Black (dashed) lines represent data flow and trajectory rollout, while colored solid lines represent gradient back-propagation for updating both policies and worst-case noise.
while keeping a very safe margin.
In Fig. 3 (left), we can see that although the learning of the DSRL policy is fluctuating between episodes, the general trend of the cumulative reward increases over the total number of episodes, demonstrating a promising learning curve.
#### Iv-B2 **Second-Order Dubins Car**
To demonstrate our approach on a higher-order system, we use the second-order Dubin's car with the following dynamics:
\[\left(\begin{array}{c}\ddot{x}\\ \ddot{y}\\ \ddot{\theta}\end{array}\right)=\left[\begin{array}{ccc}\cos\theta&-\sin \theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{array}\right]\left(\begin{array}{c}u_{x}\\ u_{y}\\ \tau_{c}\end{array}\right), \tag{17}\]
We also have the following adjusted reward function to penalize the velocities at the final destination for learning to brake as well: \(R=-d\left\|x-x_{f}\right\|_{2}^{2}-b\left\|v-v_{f}\right\|_{2}^{2}-s\). Fig. 4 and Fig.3(right) illustrate the trajectories of the benchmark and two episodes from the learning episodes, and the cumulative rewards. In the second-order case, we can see that the benchmark misses the finish line by a slight margin due to the influence of noise. All other behaviors are similar to the first-order case.
#### Iv-B3 **Quadcopter**
To inspect the performance of the DSRL on a system with more complex dynamics and safety constraints, we design high-level controllers for a heading-locked quadcopter environment with the following dynamics
\[\left(\begin{array}{c}\dot{x}\\ \dot{y}\\ \dot{z}\end{array}\right)=\left(\begin{array}{c}T\sin\theta\\ T\cos\theta\sin\phi\\ T\cos\theta\cos\phi-g\end{array}\right) \tag{18}\]
where \(x,y,z\) are the inertial displacements of the quadcopter and \(\theta,\phi,T\) are the pitch, roll, and total thrust, respectively. While training, the quadcopter is penalized by the distance between the initial and terminal positions: \(R=-\left\|x-x_{f}\right\|_{2}^{2}\). In this section, we simulate a landing scenario of the quadcopter within a prescribed glide slope defined by a cone with a half angle \(\delta\). The safety constraint can be written concisely as:
\[r_{I}^{T}M_{gs}r_{I}\geqslant 0 \tag{19}\]
where \(M_{gs}=[[-\cot(\delta_{gs})^{2},0,0]^{T},[0,-\cot(\delta_{gs})^{2},0]^{T},[0,0,1]^{T}]\) and \(r_{I}=[x,y,z]\) is the inertial position of the quadcopter.
Fig.5 presents the trajectories generated by the DSRL at the start and end of the learning, and the one generated by the deterministic policy subject to dynamic noise. The learned \(\omega\) outperforms the deterministic policy, as it always keeps a safe margin from the constraint, hence, maintaining safety throughout the whole trajectory.
## V Conclusions
This work addresses the challenges of tractable safe reinforcement learning under distributional shift. We employ differentiable convex programming along with distributinoally robust optimization to enable the safe learning over unpredictable probability distributions of the model's uncertainty in a tractable manner. We evaluate our approach on first and second-order Dubin's car and a simplified quad-copter model and compare our results with the deterministic policies'
Fig. 4: Second-order Dubin’s car trajectories with the presence of model uncertainty for deterministic policy, the DSRL policies before and after convergence.
Fig. 5: Quadcopter trajectories with the presence of model uncertainty for deterministic policy and the DSRL policy after convergence.
Fig. 3: DSRL returns over the learning period for 3 random seeds for the Dubin’s car: Left: first-order; Right: second order.
Fig. 2: First-order Dubin’s car trajectories with the presence of model uncertainty for deterministic policy, the DSRL policies before and after convergence.
performance, where our approach outperforms the benchmark policies regarding safety guarantee under model uncertainty.
|
2307.10492 | Blockchain-Based Federated Learning: Incentivizing Data Sharing and
Penalizing Dishonest Behavior | With the increasing importance of data sharing for collaboration and
innovation, it is becoming more important to ensure that data is managed and
shared in a secure and trustworthy manner. Data governance is a common approach
to managing data, but it faces many challenges such as data silos, data
consistency, privacy, security, and access control. To address these
challenges, this paper proposes a comprehensive framework that integrates data
trust in federated learning with InterPlanetary File System, blockchain, and
smart contracts to facilitate secure and mutually beneficial data sharing while
providing incentives, access control mechanisms, and penalizing any dishonest
behavior. The experimental results demonstrate that the proposed model is
effective in improving the accuracy of federated learning models while ensuring
the security and fairness of the data-sharing process. The research paper also
presents a decentralized federated learning platform that successfully trained
a CNN model on the MNIST dataset using blockchain technology. The platform
enables multiple workers to train the model simultaneously while maintaining
data privacy and security. The decentralized architecture and use of blockchain
technology allow for efficient communication and coordination between workers.
This platform has the potential to facilitate decentralized machine learning
and support privacy-preserving collaboration in various domains. | Amir Jaberzadeh, Ajay Kumar Shrestha, Faijan Ahamad Khan, Mohammed Afaan Shaikh, Bhargav Dave, Jason Geng | 2023-07-19T23:05:49Z | http://arxiv.org/abs/2307.10492v1 | # Blockchain-Based Federated Learning: Incentivizing Data Sharing and Penalizing Dishonest Behavior
###### Abstract
With the increasing importance of data sharing for collaboration and innovation, it is becoming more important to ensure that data is managed and shared in a secure and trustworthy manner. Data governance is a common approach to managing data, but it faces many challenges such as data silos, data consistency, privacy, security, and access control. To address these challenges, this paper proposes a comprehensive framework that integrates data trust in federated learning with InterPlanetary File System, blockchain, and smart contracts to facilitate secure and mutually beneficial data sharing while providing incentives, access control mechanisms, and penalizing any dishonest behavior. The experimental results demonstrate that the proposed model is effective in improving the accuracy of federated learning models while ensuring the security and fairness of the data-sharing process. The research paper also presents a decentralized federated learning platform that successfully trained a CNN model on the MNIST dataset using blockchain technology. The platform enables multiple workers to train the model simultaneously while maintaining data privacy and security. The decentralized architecture and use of blockchain technology allow for efficient communication and coordination between workers. This platform has the potential to facilitate decentralized machine learning and support privacy-preserving collaboration in various domains.
Federated Learning, Blockchain, Data Trust.
## 1 Introduction
In recent years, data sharing has become increasingly important for collaboration and innovation in various fields. The adoption of secure and trustworthy multi-center machine learning poses numerous challenges, including data sharing, training algorithms, storage, incentive mechanisms, and encryption. In this paper, we aim to tackle these challenges and propose a comprehensive solution for collaborative machine-learning applications. However, managing and sharing data in a secure and trustworthy manner poses several challenges, such as data silos, privacy, security, access control, and data consistency. Data governance has been proposed as a common approach to managing
data, but it still faces several challenges, and as a result, data trust has emerged as a nascent sub-area of data management [1].
This research paper proposes a trustworthy and robust framework for federated learning participants. Federated Learning (FL) is a privacy-preserving distributed Machine Learning (ML) paradigm [2]. The proposed comprehensive framework integrates data trust, InterPlanetary File System1 (IPFS), blockchain, and smart contracts to establish a secure and mutually beneficial data-sharing distributed FL platform. The framework is designed to provide incentives, access control mechanisms, and penalties for any dishonest or malicious behavior -- sharing bad data or non-compliance with protocols. The framework aims to foster trust among stakeholders, encourage data sharing for mutual benefit, and discourage actions that may compromise data security and accuracy. Our proposed approach is built on the use of smart contracts that enable monitoring of data sharing, access control, and compensation. To participate in the federated learning process, users must register and contribute their data while being required to provide a collateral deposit to deter dishonest behavior.
Footnote 1: [https://ipfs.tech/](https://ipfs.tech/)
Our proposed framework prioritizes data privacy and security by utilizing the encryption-enabled InterPlanetary File System (IPFS) as a decentralized peer-to-peer file system to store and access data. By using IPFS, federated learning models can be trained on data that is stored on a distributed network of users' devices, reducing the need for centralized storage. The utilization of encryption enabled IPFS ensures that the user's data privacy is safeguarded throughout the learning process. The framework is designed to provide a fair and transparent approach to compensate users for their contributions while ensuring the privacy and security of their data.
The rest of the paper is organized as follows. Section 2 provides a succinct analysis of existing architectures and identifies their shortcomings. Our proposed model for the solution architecture is presented in Section 3. Section 4 contains the experimental results and discussion. Lastly, Section 5 concludes the paper by outlining future directions for further research and improvements to the proposed model.
## 2 Background and Related Works
The primary challenge in data governance is to dismantle data silos [3] and ensure data consistency, compatibility, privacy, security, access control, ownership, and rewards for sharing. It is, therefore, imperative to have data governance frameworks that can evolve with new technologies to address emerging challenges. FL is a learning technique where a central server connects with many clients (e.g., mobile phones, and pads) who keep their data private. Since communication between the central server and clients can be a bottleneck, decentralized federated learning (DFL) [4] is used to connect all clients with an undirected graph, which reduces communication costs and increases privacy by replacing server-client communication with peer-to-peer communication. DFL offers communication efficiency and fast convergence, and the advantages of FL are summarized in [5].
Several variants of Federated Average (FedAvg) [2] exist with theoretical guarantees. In [6], the momentum method is used for local client training, while [7] proposes adaptive FedAvg with an adaptive learning rate. Lazy and quantized gradients are used in [8] to reduce communications, and in [9], the authors propose a Newton-type scheme. Decentralized (sub) gradient descents (DGD) are studied in [10, 11, 12, 13], and DSGD is proposed in [14]. Asynchronous DSGD is analyzed in [15], and quantized DSGD is proposed in [16]. Decentralized FL is popular when edge devices do not trust central servers to protect their privacy [17]. Finally, the authors in [16] propose a novel FL framework without a central server for medical applications. The authors in [18] propose a secure architecture for privacy-preserving in smart healthcare using Blockchain and Federated Learning, where Blockchain-based IoT cloud platforms are used for security and privacy, and Federated Learning technology is adopted for scalable machine learning applications. In [19], authors propose a blockchain-based Federated Learning (FL) scheme for Internet of Vehicles (IoV), addressing security and privacy concerns by leveraging blockchain and a reputation-based incentive mechanism.
Compared to prior research on FedAvg, our paper proposes a decentralized framework to improve FL's resilience to node failures and privacy attacks. Unlike previous decentralized training approaches, our algorithm utilizes IPFS and efficient encryption methods to securely converge model training among many nodes.
## 3 System Model
### Data Trust, Access Control and Incentive Method
Data trust ensures that data is available for data mining with increasing legal protections for privacy and sustains the underlying ownership of the data and digital rights, which is the primary focus of the data management field [20]. Our work emphasizes sharing data, transparency, control, and incentives for users in the federated learning setting. Specifically, it explores a particular type of technical platform, distributed ledgers with smart contracts. The smart contract is designed to oversee data sharing, compensation, and access control. Participants would be allowed to register and contribute their data to the federated learning process, and a collateral deposit would be required from each participant to discourage any dishonest behavior. The collateral deposit serves as a financial penalty for participants who fail to provide quality data or who intentionally provide misleading information. If a participant fails to provide accurate data or engages in any dishonest behavior, the deposit will be forfeited. The forfeited deposit will then be used to compensate other participants who have contributed accurate data to the federated learning process. Through the implementation of a smart contract, the total compensation for data sharing is updated and distributed to participants based on their contribution. The contract would also ensure that each participant can only register once, and that compensation can only be distributed when the total compensation amount is positive. The proposed smart contract system provides a reliable and secure framework for federated learning, user data, and blockchain integration. It offers a fair and transparent way of compensating participants for their contributions while ensuring the privacy and security of the data.
### IPFS Storage
In addition to privacy concerns, data storage is another key challenge in federated learning. Traditional data storage approaches are not well-suited for federated learning, as they often require centralized storage of data. This centralized storage approach can increase the risk of data breaches and raises concerns around data ownership and control. To address this challenge, we have proposed using the InterPlanetary File System (IPFS), a peer-to-peer distributed file system that allows data to be stored and accessed in a decentralized manner. By using IPFS, federated learning models can be trained on data that is stored on users' devices, without the need for centralized storage.
### Confidentiality and Privacy
In the context of IPFS, hashing is not sufficient to ensure the confidentiality and privacy of the stored data, as the content can still be accessed by anyone who has access to the network. To address this, both symmetric key encryption and asymmetric cryptography are applied [21], and we further use smart contracts for access control and to provide confidentiality and privacy to the data stored in IPFS. This approach ensures that even if an attacker gains access to the IPFS network, they would not be able to read the encrypted content without the secret. Although encryption may introduce some overhead in terms of performance and complexity, it is necessary for ensuring the security of data in IPFS.
We used the cryptography Python library to securely share and store machine learning models using IPFS. The library provides various cryptographic primitives and recipes for encryption, digital signatures, key derivation, hashing, and more, adhering to best security practices. The code initially connects to an IPFS daemon, loads a model from IPFS, generates an RSA key pair and an AES key, and encrypts the AES key with the public key using hybrid encryption. The actual data is encrypted using the symmetric key (AES), and the symmetric key is encrypted using an asymmetric key (RSA). This ensures that the data can only be decrypted by the intended recipient who possesses the corresponding private key. Our research further implements a method for encrypted model states to be fetched from a group of workers, decrypted with the AES key, and returned as decrypted model states. This allows multiple parties to share their model states securely. In addition, we have also implemented a method for pushing a model state to IPFS. The model state is encrypted using the AES key and the AES key is encrypted with the public key. This encryption mechanism allows the model state to be stored on the IPFS network in an encrypted form, ensuring that only authorized parties can access it. To optimize memory usage, our code maintains a list of model hashes that have been pushed to IPFS and clears the list once a specified number of models have been pushed. This optimization technique helps prevent system resources from being overwhelmed and causing performance issues. By clearing the list after a certain number of models have been pushed, the code ensures that memory usage remains within reasonable limits.
### Decentralized Network Architecture
Our research proposes a blockchain-based architecture for federated learning, consisting of a smart contract and IPFS. The smart contract coordinates the FL task, distributing rewards and penalizing bad actors. The metric used for rewarding workers based on performance considers factors such as model accuracy, consistency, precision and recall on the unseen test dataset. Our proposed architecture contains essential information such as the participants' details, model evaluations submitted at the end of each round, and the reward to be distributed. The IPFS, on the other hand, stores the models trained by participants at the end of each round.
As shown in Figure 1, blockchain-based federated learning involves two classes of actors: the requester and the workers. The requester initiates the FL task by deploying the smart contract, pushing the initial model to the IPFS, and specifying additional parameters such as the number of rounds and the reward to be distributed. The requester can push any model for any task to IPFS. On the other hand, workers participate in the FL task created by the requester. They train the model through a round-based system on their own data and earn rewards based on their performance.
**Workflow.** To begin the task, the requester deploys the smart contract and sets the number of training rounds (N) and the total reward (D) for workers. The requester also pushes the initial model to the IPFS-based model storage, which will be used as the basis for the trained models. Workers can then join the task by interacting with the smart contract, and once enough workers have joined, the requester triggers the training phase through the smart contract. During the training phase, workers train the model for N rounds on their local data, with each round beginning with workers retrieving the trained models of all other workers from the IPFS-based model storage to evaluate them on their local data. The scores are pushed to the smart contract, which aggregates them to obtain the Top K best-performing workers in the previous round. Rewards are distributed to the workers based on their performance. The trained models are then pushed to the IPFS-based model storage, and the process repeats for N rounds. The requester is not involved in any interaction during the training phase, but some operations such as score aggregation can be offloaded to the requester machine to save on computational resources and transaction costs. Once the training phase concludes, the requester
Figure 1: Blockchain and IPFS-based federated learning.
can retrieve the final global model from the IPFS-based model storage and close the task by calling a function of the smart contract.
**Smart Contracts.** The smart contract contains various functions that enable the task requester to initialize, initiate, and oversee the FL task. It also allows workers to participate in the task, submit evaluations, and exit from it. Below is a brief overview of the different functions within the smart contract.
_initializeTask._ This function is called by the requester to initialize the FL task. It takes two parameters: the URI of the machine learning model and the number of rounds in the FL task. The function requires a deposit to be made in the smart contract.
_startTask._ This function is called by the requester to start the FL task. It changes the status of the task to "running".
_joinTask._ This function is called by the workers to join the FL task. It registers the worker in the smart contract and returns the URI of the machine learning model.
_submitScore._ This function is called by the workers to submit the score of their local model after each round's evaluation phase.
_removeWorker._ This function is called by workers to remove themselves from the task.
_nextRound._ This function is called by the requester to advance the FL task to the next round.
_getSubmissions._ This function is called by the requester to get the submissions from all workers for the current round.
_submitRoundTopK._ This function is used to get the top k rank of the workers who will be rewarded for their performance in a task or job. This information is used to distribute the rewards among the top-performing workers.
_distributeRewards._ This function is used to reward top-performing users in a round by splitting the total reward among them. The first users receive half of the total reward, while the remaining users receive a smaller share.
### Aggregation/Averaging method
As shown in Figure 2, in federated learning, workers train the model on their local data, and their models are stored in IPFS-based model storage. The workers retrieve their own model and the models of other workers from the storage and append them to a dictionary. To improve the model's accuracy, the workers use an averaging function that takes all the stored models as input and returns the average model. The averaging is done by adding up all the models and dividing the result by the number of workers who contributed to their models.
Overall, this process allows for multiple workers to collaborate on model training without relying on centralized data storage. By averaging the models, the final global model can be improved by incorporating the knowledge of all workers in the task. By contrast, FedAvg in centralized fashions leads to massive communication between the central server and clients, which causes channel blocking. Furthermore, the central server is vulnerable to attack, compromising the privacy of the entire system.
## 4 Experimental Results and Discussions
We evaluated the proposed platform using the MNIST dataset2. We used a simple feed-forward Neural Network (CNN) model with N layers to classify 0-9 handwritten numbers. The algorithms were developed using the PyTorch framework on the Ethereum blockchain. The simulation modeling was done on Ganache, a local Ethereum blockchain for testing. The training dataset comprised 60,000 images, while the test dataset consisted of 10,000 images. The training dataset was divided evenly between workers at the start of the training and each worker used the test dataset for their evaluations and scoring. Our implementation employed a decentralized client and server system and ran on a local machine. When the requester initiated the process, each worker trained the model taken from IPFS sequentially and securely saved the trained model to the IPFS until the maximum number of epochs was reached. Alternatively, workers could be spawned in parallel and train the model simultaneously on multiple devices. Here we tested our results with one machine that runs each step sequentially.
Footnote 2: [https://pytorch.org/vision/stable/generated/torchvision.datasets.EMNIST.html#torchvision.datasets.EMNIST](https://pytorch.org/vision/stable/generated/torchvision.datasets.EMNIST.html#torchvision.datasets.EMNIST)
### Performance Analysis
We first trained the classification model with our Decentralized Federated Learning framework and showed a convergence of above 95% accuracy under 90 epochs as shown in Figure 3. We also studied other performance metrics like precision and recall and got 0.973 and 0.97 respectively which shows great performance in the classification task. The total training time by 3 workers was 6525.46 seconds. Each worker takes
Figure 2: Aggregation/Averaging methods in our DFL framework.
about 36 minutes to converge on a Xeon CPU with 8 cores which is a comparable time to convergence in decentralized Federated Learning frameworks. We also compared the impact of double encryption on the convergence time. As shown in the left graph of Figure 3, there is an additional 2 minutes and 34 seconds overhead for all three workers or 51 seconds for each worker. The communication cost for our double encryption and decryption process and secure key pair transfer protocol was only 2% of the time required for convergence with the same accuracy. The accuracy for each worker is plotted sequentially for each round of 3 epochs. As shown in Figure 3, when the models start to train, all three workers start with low accuracies that improve within their first round (i.e., 3 epochs) and followed by the next worker's training.
### Accuracy (Workers vs Epochs) Analysis
The graphs in Figure 4 show the accuracy of a federated learning model trained over multiple epochs, with the left graph showing results for a model trained with 3 workers and the second graph showing results for a model trained with 5 workers. Accuracy is the percentage of correct classifications that a trained machine learning model achieves. By analyzing the graphs, we can see that both models reach an acceptable accuracy over a similar number of epochs.
Figure 4: Accuracy vs Epochs for 3-worker (left) and 5-worker (right) models.
Figure 3: Comparison of Running Time: Encrypted vs Unencrypted Models for 3-Worker Model.
This shows that dividing data between more workers does not have a negative impact on model convergence but can speed up the training process and scale up the training process. It can also reduce the required compute power for each worker which qualifies low-end devices to be used as compute nodes. In the left graph with 3 workers, the model's accuracy has a more stable pattern, which is due to having more data to train on for each worker. In a realistic case by increasing the training dataset, we can improve the stability of models with more workers. Overall, we can conclude that the decentralized federated learning model is performing well and improving over time, but the number of workers should be chosen as a ratio of the training dataset.
## 5 Conclusion
We proposed a decentralized federated learning architecture that leverages blockchain, smart contracts and IPFS for secure and efficient training of a global model with decentralized data. Experimental results showed that our proposed framework achieved above 95% accuracy under 90 epochs with a comparable convergence time to centralized federated learning frameworks. We also compared the impact of double encryption on the convergence time and showed that it only resulted in a minimal overhead cost of 2%. Overall, our proposed approach addresses several challenges associated with managing and sharing data in a secure and trustworthy manner by providing a comprehensive framework that establishes trust among stakeholders, promotes data sharing that benefits all involved parties, and deters any actions that could compromise the security and accuracy of the shared data. Future research can explore the scalability and feasibility of the proposed model in a real-world scenario.
|
2308.00004 | Towards Equitable Privacy | Ensuring equitable privacy experiences remains a challenge, especially for
marginalised and vulnerable populations (MVPs) who often hesitate to
participate or use digital services due to concerns about the privacy of their
sensitive information. In response, security research has emphasised the
importance of inclusive security and privacy practices to facilitate meaningful
engagement of MVPs online. However, research in this area is still in its early
stages, with other MVPs yet to be considered (such as low-income groups, and
refugees), novel engagement methods yet to be explored, and limited support for
software developers in building applications and services for MVPs. In 2022, we
initiated a UK Research Council funded Equitable Privacy project to address
these gaps. Our goal is to prioritise the privacy needs and requirements of
MVPs in the design and development of software applications and services.
We design and implement a new participatory research approach -- community
studybeds -- in collaboration with third-sector organisations that support MVPs
to identify and tackle the challenges these groups encounter. In this paper, we
share the initial reflections and experiences of the Equitable Privacy project,
particularly emphasising the utilisation of our community studybeds. | Kopo M. Ramokapane, Lizzie Coles-Kemp, Nikhil Patnaik, Rui Huan, Nirav Ajmeri, Genevieve Liveley, Awais Rashid | 2023-07-28T20:51:49Z | http://arxiv.org/abs/2308.00004v1 | # Towards Equitable Privacy
###### Abstract
Ensuring equitable privacy experiences remains a challenge, especially for marginalised and vulnerable populations (MVPs) who often hesitate to participate or use digital services due to concerns about the privacy of their sensitive information. In response, security research has emphasised the importance of inclusive security and privacy practices to facilitate meaningful engagement of MVPs online. However, research in this area is still in its early stages, with other MVPs yet to be considered (such as low-income groups, and refugees), novel engagement methods yet to be explored, and limited support for software developers in building applications and services for MVPs. In 2022, we initiated a UK Research Council funded Equitable Privacy project to address these gaps. Our goal is to prioritise the privacy needs and requirements of MVPs in the design and development of software applications and services. We design and implement a new participatory research approach - community studybeds - in collaboration with third-sector organisations that support MVPs to identify and tackle the challenges these groups encounter. In this paper, we share the initial reflections and experiences of the Equitable Privacy project, particularly emphasising the utilisation of our community studybeds.
## 1 Introduction
While the right to privacy is often regarded as a universal entitlement, achieving equitable implementation of privacy principles online remains a significant challenge. Marginalised and vulnerable populations (MVPs) often abstain from online participation or using digital services due to fears surrounding the potential exposure of their sensitive information [25]. Consequently, a growing body of security research has highlighted the importance of developing inclusive security and privacy practices to facilitate the meaningful engagement of MVPs online. For instance, Wang [38] calls for research work that empowers people with "various characteristics, abilities, needs, and values," while Das Chowdhury et al. [8] underscore the necessity of embracing and responding to these diversities when developing PETs using the capability approach. They also argue that the current way of assessing or designing PETs is more utility-based (i.e., focused on technical and usability aspects) and does not consider the realities of MVPs. Sannon and Forte [32] further highlight that MVPs can have unique privacy needs and tend to experience disproportionate harm when their privacy is violated.
Consequently, a body of work has attempted to understand the security and privacy needs of MVPs. For instance, prior works [1, 2, 19, 27] have explored privacy concerns and behaviours of people with visual impairments, emphasising that these issues arise because they have not been included in the design process. Others have focused on issues such as intimate partner violence (IPV), examining how technology is used to abuse [36, 35, 14, 26] and how survivors protect themselves [24, 4, 21]. Some works [22, 18, 23] have addressed the need to support service providers. Others [31, 39] have advocated for integrating accessibility into security and privacy tools. Despite these efforts, most technology continues to prioritise the needs of the masses, often overlooking or unintentionally excluding MVPs from their use cases. However, designing for the groups at the edges can also create solutions that benefit the broader population. It is essential to recognise that the definition of an MVP is nuanced; an individual may not be socio-economically disadvantaged but can still be a victim of IPV or surveillance, or suddenly become a refugee (as seen in recent events in Ukraine [13] and Sudan [11]).
In our pursuit of narrowing the disparity between the general population and MVP in their access to privacy protections, last year, in 2022, we commenced the Equitable Privacy
project. The Equitable Privacy project 1 aims to prioritise the privacy needs of marginalised and vulnerable populations in designing and developing software applications and services, thus bringing MVPs and developers together.
Footnote 1: [https://gow.epsrc.ukri.org/NGBOVieWGrant.aspx?GrantRefEP/W025361/1](https://gow.epsrc.ukri.org/NGBOVieWGrant.aspx?GrantRefEP/W025361/1)
This paper presents our initial reflections and experiences striving for equitable privacy, employing community studybeds as our core participation research methodology. First, we discuss Equitable Privacy, focusing on inclusive technology and addressing the security and privacy risks MVPs face. Next, we share our experiences setting up community studybeds, highlighting the methodology and insights gained from the process.
## 2 Equitable Privacy (EP)
_Equitable privacy_ is a conceptual framework that aims to ensure the just and fair provision of privacy to all individuals, regardless of their social, economic, and demographic backgrounds. The framework recognises that privacy experiences are not uniform and that certain individuals or communities are more vulnerable or disadvantaged, facing unique challenges or vulnerabilities regarding privacy protection. For instance, individuals in abusive and coercive relationships, refugees, or political activists have nuanced privacy and information control needs [30, 3]. While a monitoring app may be supportive in certain settings, such as healthcare, it can become a means of oppression for these user groups [15, 7]. EP also recognises that the design of privacy mechanisms, lack of transparency, accessibility or accountability in how data is utilised, often leads to distrust and disenfranchisement. This can create a perception of privacy mechanisms being turned against them--victims of sexual assault, for example, have expressed a lack of trust in online reporting systems due to fears about privacy, anonymity, and traceability [29].
Another example pertains to individuals and groups that experience barriers to access [5]. There is a growing recognition that many individuals and groups face barriers to digital access such as financial constraints, limited accessibility, capacity limitations, and socio-cultural factors [33, 6]. Consequently, security practitioner communities are increasingly considering how these barriers affect how individuals and groups access and protect information [28, 10], as these barriers can have significant implications for informational privacy. For example, the barriers to access may result in individuals sharing devices to access essential services involving sensitive personal information, such as healthcare, welfare, finance, and victim support. Alternatively, they may rely on assistance from friends and family [10]. While such support is often beneficial, it can also result in fraud and harms [21]. These issues not only heighten insecurity for individuals already experiencing socioeconomic, emotional, physical, or political precarity, but they may also impede digital participation or the adoption of digital and privacy technologies.
The notion of equitable privacy recognises that not only certain dimensions of identity, such as race, ability, ethnicity, gender, age, and socio-economic status, often introduce disparities and inequalities in privacy protections but also the intersections between these dimensions can simultaneously both amplify and hide these disparities and inequalities. The EP framework highlights the pressing need to identify and mitigate the privacy-related risks and harms that may disproportionately affect marginalised or disadvantaged groups.
## 3 Community Studybeds
To better understand the privacy needs and challenges of MVPs, we engage with them through Community studybeds. Such studybeds serve as sites of co-investigation and exploration, building upon established frameworks such as Living Labs and Testbeds. These frameworks involve multiple stakeholders and focus on co-creating innovation in real-world contexts [12, 20]. However, community studybeds differentiate themselves by utilising a participatory design approach [37] that places people and their privacy concerns at the core of the study design. The study contexts are established in consultation with the participant groups, ensuring relevance and alignment with their experiences. Moreover, a community researchbed approach emphasises establishing partnerships with community groups, including third-sector organisations, with a shared emphasis on capacity building. Rather than treating community groups as passive participants, they are considered active partners co-designing research direction as well as actively participating in the research. The timing and pace of the community researchbed activities are also determined in collaboration with the participant groups, allowing for a more inclusive and participatory approach [9]. We have currently established three community studybeds with four different organisations in two locations: one organisation in Sunderland and three organisations in Bristol.
### Sunderland Community Group
At the time of writing, one community researchbed had been established in Sunderland, North East England with a voluntary organisation that takes the role of research partner. The inquiry focus of this researchbed is digitally-enabled scams, and an initial engagement has been completed using the Neighbourhood Ideas Exchange toolkit from public goods lab, Proboscis. This consultation enabled us to discuss how digitally-enabled scams appear in day-to-day life, their impact on participants' daily lives, and the resulting adverse consequences. The participants included representatives from four voluntary and third sector and local government organisations. The principle of equity is core to both the community researchbed design and to the processes of establishing and
carrying out the equitable privacy inquiry.
**Equity in focus and context:** During the initialisation of the community researched, researchers worked with community workers and representatives from participant groups to establish the relevant context for an equitable privacy inquiry. It was agreed that digitally-driven fraud and scams was the most appropriate context because they represent a constant pressure that affects everyday digital interactions.
**Equity in design and process:** Following participatory design principles, the participant groups shaped the subsequent engagements for the inquiry, set out the reciprocity arrangement (i.e., the benefits that the individuals and groups would receive in return for taking part), and the timings of the engagements.
**Equity in outputs and dissemination:** As part of the reciprocity agreement, the participant groups and the research partner organisation takes an active role in the research analysis and in the dissemination process for the outputs. The community researched inquiry will next move to a wider community engagement. The host organisation has designed a community information package and between July and August 2023 will lead scams and fraud awareness and discussion sessions. The data analysis will be co-developed with the research partner and participant groups and be used to shape equitable privacy interventions.
### Bristol Community Groups
We have established two community studybeds hosted by three voluntary organisations that work with different communities in Bristol, South West England. The first commmunity researched in Bristol focuses on energy and the associated risks related to energy management systems. It is hosted by two voluntary organisations. Organisation A utilises technology and the arts to generate creative solutions, ensuring the inclusion of individuals and groups at risk of social and digital exclusion. Organisation B tackles energy issues in Bristol by engaging individuals and community groups with an interest in energy. The second community researched is hosted by an organisation (Organisation C) that is specifically dedicated to working with survivors of sexual abuse.
Regarding the first community researched, our initial engagement with the partner organisations began with meetings to understand the services they offer and the community they serve. During this time, we also shared the goals of our project and what we hope to achieve. In our second meeting with Organisation A, we introduced the community workers to a tabletop game called "Decisions and Disruptions 2." This game, developed by our research group, challenges players to manage the security of a small utility company with a given budget. The game presents various security scenarios, requiring players to consider potential threats, infrastructure vulnerabilities, past and ongoing cyber-attacks, and budget limitations. This activity not only helped build rapport and highlight our potential contribution to the partnership but also raised security awareness among the community workers [17, 34]. Regarding the second community researched, we have only met with partner Organisation C. This engagement established the context of our inquiry and discussed the conduct of research engagements and the responsibilities of each partner.
Footnote 2: [https://www.decisions-disruptions.org/](https://www.decisions-disruptions.org/)
Similar to our work in Sunderland, our goal in the initial engagements was to ensure fairness and equal opportunities for our partner organisations in establishing the community research bed and investigating the issues at hand.
**Equity in focus and context:** Since both Organisation A and Organisation B were already involved in energy projects at various capacities, the researchers met and discussed their respective projects to identify common interests and potential benefits for both parties. With Organisation A, the researchers and community workers explored how community members could be encouraged to share their energy-related data through a community dashboard. On the other hand, the researchers and Organisation B agreed to organise energy awareness clinics, during which the researchers would focus on understanding the community members' concerns regarding energy-related technologies while the community workers would raise awareness about effective energy management.
Our initial engagement with Organisation C followed a similar pattern. The researchers shared information about their ongoing projects on online citizen protection while the community workers described their work with survivors of sexual abuse. Both parties agreed to focus on issues concerning the sharing of digital material as evidence after reporting abuse.
**Equity in design and process:** In collaboration with Organisation A, the researchers organised the first workshop on developing the community dashboard. The community workers took the lead in planning, deciding on the inquiry method, recruitment process, and workshop date. Since Organisation B was already conducting workshops with various groups in Bristol, the community workers shared their event calendar with the researchers, and together they identified which workshops would be utilised as energy clinics for the studies. In the initial meeting with Organisation C, the community workers shared ideas with the researchers on how both parties could collaborate for mutual benefit. Discussions included engagement methods with community members, the duration of these engagements, and the scheduling of activities.
**Equity in outputs and dissemination:** Following the initial workshop, Organisation A collected and took the lead in analysing the workshop materials. The community workers analysed the data and prepared an online board to share the key outputs of the workshop. Prior to releasing the findings, both parties held a debrief meeting to reflect on the workshop and discuss the findings.
### Developer Panel
As part of our Equitable Privacy project, we aim to support developers in designing and developing software applications and services that enable equitable privacy experiences. To achieve this, we are currently working on establishing a developer panel to identify and address technological gaps in developing applications and services for MVPs.
We are currently in the process of assembling a panel by leveraging our connections with industry professionals and software development communities that we have established through our previous projects. Also, we will invite developers who voluntarily engage with MVPs in their own time to join the panel. This diverse panel, comprising developers with varied project experiences and a range of end-users for whom they have developed applications, will offer unique perspectives on privacy, fairness, and the specific needs of MVPs. It will also open up new avenues for research. The panel will also shed light on the challenges developers face as we study them using API features and existing privacy tools. Similar to our approach to the community studybeds, we intend to ensure equity in the context, design of activities, and dissemination of outputs through close collaboration with the developer panel.
## 4 Initial Lessons from Establishing Community Studybeds
**Partnerships.** Enabling equitable privacy experiences requires partnerships between research partners, community workers, and the groups they serve. In setting up community studybeds, engaging community representatives as partners has provided us with a deeper understanding of the issues they address in the community, the existing disparities, and how we can effectively engage different participation groups. It has also helped us contextualise the focus of our studies, design our inquiries to align with the practical needs of running activities with community groups, and makes the process of engagement more accessible for participants.
**A deeper understanding of vulnerability is necessary.** Researchers often approach studies and issues related to MVPs with their understanding of who is considered vulnerable. However, working with our partner organisations has highlighted that while there are commonalities in the concept of "vulnerability" across various groups and organisations, it can have subtle differences in meaning. For example, Organisation B defined _vulnerability_ as anyone struggling to pay their energy bills, whereas Organisation A may have a different perspective. It is crucial for researchers to avoid imposing their definitions and instead work closely with community workers to understand the meaning within each specific context.
**Considerations for interviews.** In typical privacy studies, conducting interviews with participants is often seen as a routine practice without significant concerns. However, our partner organisations have emphasised the importance of considering the comfort levels of community groups during interviews. For instance, participants may feel uncomfortable sharing their experiences with a researcher who resembles their abuser (e.g., a male interviewer interviewing a woman). By working in partnership with organisations, we can identify these nuanced issues that may not be apparent if community workers and groups are merely treated as participants.
**More than just study activities.** We have also learned that to enhance engagement from community groups, it is essential to consider the needs of individuals whose participation may be influenced by the presence of others accompanying them. For example, organising workshops may require arrangements for childminders or providing engaging activities for accompanying individuals. Recognising that some people may have other responsibilities that prevent their participation is crucial in fostering inclusivity, and understanding the diverse circumstances of community members.
## 5 Limitations
An equitable approach does not necessarily result in an equitable outcome. The power imbalances between users of technology and the technology companies are not swept away by this approach. Furthermore, the principles of an equitable are often challenging to fully implement. Whilst the principles of voluntary participation, reciprocity, and context design and selection are intended to be in the hands of research partners and the community resesarchbed participants, the social dynamics of the community researched mean that these ideals are not always fully realised. However, such an approach does offer a step towards making user-centred privacy research fairer and more just.
## 6 Conclusion
We presented our initial reflections and experiences of the Equitable Privacy project, focusing on using community studybeds as a participatory research methodology. Taking this approach, the community researchbed becomes a space in which individuals can voice concerns regarding equity, influence the direction of the inquiry, and guide the selection of interventions. The use of community studybeds highlights the effectiveness of partnership in understanding the privacy needs of MVPs for designing and developing software and services that prioritise equitable privacy experiences.
## Acknowledgments
This work is generously funded the EPSRC (EP/W025361/1). |
2310.05336 | GReAT: A Graph Regularized Adversarial Training Method | This paper presents GReAT (Graph Regularized Adversarial Training), a novel
regularization method designed to enhance the robust classification performance
of deep learning models. Adversarial examples, characterized by subtle
perturbations that can mislead models, pose a significant challenge in machine
learning. Although adversarial training is effective in defending against such
attacks, it often overlooks the underlying data structure. In response, GReAT
integrates graph based regularization into the adversarial training process,
leveraging the data's inherent structure to enhance model robustness. By
incorporating graph information during training, GReAT defends against
adversarial attacks and improves generalization to unseen data. Extensive
evaluations on benchmark datasets demonstrate that GReAT outperforms state of
the art methods in robustness, achieving notable improvements in classification
accuracy. Specifically, compared to the second best methods, GReAT achieves a
performance increase of approximately 4.87% for CIFAR10 against FGSM attack and
10.57% for SVHN against FGSM attack. Additionally, for CIFAR10, GReAT
demonstrates a performance increase of approximately 11.05% against PGD attack,
and for SVHN, a 5.54% increase against PGD attack. This paper provides detailed
insights into the proposed methodology, including numerical results and
comparisons with existing approaches, highlighting the significant impact of
GReAT in advancing the performance of deep learning models. | Samet Bayram, Kenneth Barner | 2023-10-09T01:44:06Z | http://arxiv.org/abs/2310.05336v2 | # GReAT: A Graph Regularized Adversarial Training Method
###### Abstract
This paper proposes a regularization method called GReAT, Graph Regularized Adversarial Training, to improve deep learning models' classification performance. Adversarial examples are a well-known challenge in machine learning, where small, purposeful perturbations to input data can mislead models. Adversarial training, a powerful and one of the most effective defense strategies, involves training models with both regular and adversarial examples. However, it often neglects the underlying structure of the data. In response, we propose GReAT, a method that leverages data graph structure to enhance model robustness. GReAT deploys the graph structure of the data into the adversarial training process, resulting in more robust models that better generalize its testing performance and defend against adversarial attacks. Through extensive evaluation on benchmark datasets, we demonstrate GReAT's effectiveness compared to state-of-the-art classification methods, highlighting its potential in improving deep learning models' classification performance.
keywords: Adversarial learning, graph regularization, semi-supervised learning. +
Footnote †: journal:
## 1 Introduction
Deep learning is a subset of machine learning (ML) that uses artificial neural networks with multiple layers and neurons to analyze and learn from large amounts of data. Deep learning algorithms automatically learn and extract relevant features from the data to make predictions. The feature extraction process allows algorithms to achieve higher levels of accuracy and
perform more complex tasks. In the last decades, deep learning has achieved impressive results in various domains, including image and text classification, speech recognition, image generation, and natural language processing Krizhevsky et al. (2012); Liang and Hu (2015); Goodfellow et al. (2020). Supervised learning methods achieved the most successful results. In this learning technique, the model is trained on a labeled dataset, meaning the input data is accompanied by its corresponding output labels. Supervised learning aims to predict new, unseen data based on the patterns learned from the labeled data. Deep neural networks adjust the weights and biases of the network through back-propagation using output labels and original labels.
Semi-supervised learning combines both supervised and unsupervised learning techniques. In this type of learning, a model is trained on a data set that has labeled and unlabeled instances. The goal is to use the labeled data to make predictions on unlabeled and new, unseen, data. This method is used often when there is a limited amount of labeled data but a larger amount of unlabeled data. There are various algorithms for propagating the labels through the graph, such as label propagation Bengio et al. (2006); Yang et al. (2016), pseudo-labeling Lee (2013), transductive SVMs Joachims (1999), and self-training Amini et al. (2022). Label propagation provides outstanding performance for semi-supervised learning to classify graph nodes. It is based on the idea that a node's labels can be propagated to its neighbors based on the assumption that nodes with similar labels are more likely to be connected.
Despite their significant success, deep learning models are known to be vulnerable to adversarial examples. These examples are created by adding small, carefully chosen perturbations to the input data. The perturbed data remains visually similar to the original data but is misclassified by the model Goodfellow et al. (2015); Carlini and Wagner (2017); Nguyen et al. (2015). The existence of adversarial examples has drawn significant attention to the machine-learning community. Showing the vulnerabilities of machine learning algorithms has opened critical research areas in the attack and robustness areas. Studies have shown that adversarial attacks are highly effective on many existing AI systems, especially on image classification tasks; Szegedy et al. (2014); Biggio and Roli (2018); Carlini and Wagner (2017); Moosavi-Dezfooli et al. (2016); Sharif et al. (2016); Kurakin et al. (2016); Eykholt et al. (2018); Bayram and Barner (2022). For instance, Szegedy et al. (2014) show that even small perturbations in input testing images can significantly change the classification accuracy.
Goodfellow et al. (2015) attempts to explain the existence of adversarial
examples and proposes one of the first efficient attack algorithms in white-box settings. Madry et al. (2019) proposed projected gradient descent (PGD) as a universal first-order adversarial attack. They stated that network architecture and capacity play a significant role in adversarial robustness. Notable other popular adversarial attack methods are the Carlini-Wagner Attack (CW) Carlini and Wagner (2017), Basic Iterative Method (BIM) Kurakin et al. (2016), and Momentum Iterative Attack Papernot et al. (2016). Tramer et al. (2017) show the transferability of black-box attacks among different ML models.
Adversarial defense mechanisms can broadly be classified into two categories. The first category predominantly centers on pre-processing techniques tailored for DL models, to mitigate adversarial perturbations in the adversarial examples. Methods in this category include feature denoising, Xie et al. (2019), Fourier filtering, Bafna et al. (2018), and random resizing coupled with random padding, Xie et al. (2017), among others. The second category targets modifications in the architecture of neural networks, including alterations in activation functions, Wang et al. (2018), adaptations in learning processes like distillation, Papernot et al. (2016), the introduction of novel loss functions, Chen et al. (2019), and adjustments in training procedures, Goodfellow et al. (2015).
The existing literature underscores the effectiveness of unlabeled samples to enhance deep learning performance, Zhou et al. (2005); Zhu et al. (2005); Belkin et al. (2006); Bengio et al. (2006). Additionally, studies show that unlabeled data improve adversarial robustness, Carmon et al. (2019). Motivated by these insights, we propose a Graph-Regularized Adversarial Training method (GReAT) to improve the robustness performance. The proposed method utilizes the structural information from the input data to improve the robustness of deep learning models against adversarial attacks. The main idea within GReAT is to construct a graph representation of clean data with an adversarial neighborhood, where each node represents a data point, and the edges encode the similarity between the nodes. This approach allows us to incorporate the structural information from the data into the training process, which helps create robust classification models. To evaluate the effectiveness of our approach, we conduct experiments on data sets: TensorFlow's flower data set TensorFlow (2019) and CIFAR-10 Krizhevsky et al. (2009). We compare GReAT with several state-of-the-art methods. The results show that the proposed approach consistently outperforms the baselines regarding accuracy and robustness against adversarial attacks. Our
proposed GReAT graph-based semi-supervised learning approach for adversarial training provides a promising direction for improving the robustness of deep learning models against adversarial attacks.
## 2 Background and Related Works
This section covers the relevant background and related works. In particular, we cover deep learning and semi-supervised learning, adversarial learning, and graph-based semi-supervised learning.
### Deep Learning and Semi-supervised Learning
DL models are complex non-linear mapping functions between input and output. They consist of multiple layers and neurons with activation functions. They extract features from input samples and predict labels based on those features. Neural networks are trained using vast amounts of labeled data and can learn and improve their performance over time utilizing backpropagation algorithms.
The following equation represents the prediction process of the classical deep learning paradigm:
\[\mathbf{Y}:f(\mathbf{X},\mathbf{\theta},\mathbf{b}), \tag{1}\]
where \(\mathbf{X}\) is the data fed into the neural network \(f\), \(\mathbf{\theta}\) represent the values assigned to the connections between the neurons in the network, and \(\mathbf{b}\) is the offsets applied to the input data. The output is the result produced by the neural network after processing the input data through its layers of neurons.
Semi-supervised learning uses labeled _and_ unlabeled data to train a model. The following representation shows the prediction process of semi-supervised learning:
\[\mathbf{Y}:f(\mathbf{X_{l}},\mathbf{X_{ul}},\mathbf{\theta},\mathbf{b}), \tag{2}\]
where \(\mathbf{X_{l}}\) is the labeled data and \(\mathbf{X_{ul}}\) is the unlabeled data. The weights and biases are the same as in supervised-learning learning. The output is the result produced by the model after processing the labeled and unlabeled data through its layers of neurons. A label assignment procedure typically exists in semi-supervised learning to annotate the unlabeled data. This procedure employs a smoothing function or similarity metrics to assign the label of the most similar labeled sample to the unlabeled sample, Yang et al. (2016).
### Adversarial Learning
A data instance \(\mathbf{x}^{\prime}\) is considered an adversarial example of a natural instance \(\mathbf{x}\) when \(\mathbf{x}^{\prime}\) is close to \(\mathbf{x}\), under a specific distance metric, while \(f(\mathbf{x}^{\prime})\neq y\), where \(y\) is the label of \(\mathbf{x}\), Ren et al. (2020). Formally, an adversarial example of \(\mathbf{x}\) is can be defined as
\[\mathbf{x}^{\prime}:D(\mathbf{x}^{\prime},\mathbf{x})<\epsilon,f(\mathbf{x}^{\prime})\neq y, \tag{3}\]
where \(D(\cdot,\cdot)\) represents a distance metric, such as the \(\|\cdot\|_{2}\) norm, and \(\epsilon\) is a distance constraint, which limits the amount of allowed perturbations. Since the existence of adversarial examples is a significant threat to DL models, adversarial attack and defense algorithms are intensively investigated to improve the robustness and security of such models.
For instance, FGSM, by Goodfellow et al. (2015), was proposed to generate adversarial samples and attack DL models. The PGD algorithm, an iterative version of the FGSM attack, was proposed to generate adversarial samples by maximizing the loss increment within an \(L_{\infty}\) norm-ball, Madry et al. (2017). Although many defense methods have been proposed, adversarial training is the most efficient approach against adversarial attacks, Goodfellow et al. (2015); Madry et al. (2017). Goodfellow et al. (2015) proposed using adversarial attack samples during training so that the classifier can learn the features of adversarial examples and their perturbations. The classifier's robustness against adversarial attacks is substantially enhanced due to the integration of adversarial examples in the training phase. It effectively empowers the classifier to develop a more robust defense mechanism against adversarial instances. Formally, adversarial training is defined as
\[\mathbf{\theta}^{*}=\operatorname*{arg\,min}_{\theta\in\Theta}\frac{1}{L}\sum_{i= 1}^{L}\max_{\mathsf{D}(\mathbf{x}^{\prime}_{i},\mathbf{x}_{i})<\epsilon}\ell_{adv}( \theta,\mathbf{x}^{\prime}_{i},y_{i}). \tag{4}\]
The above equation states a min-max procedure under the specific distance constraint. In the inner maximization component, the adversarial training seeks an adversarial sample \(\mathbf{x}^{\prime}_{j}\) to maximize the loss \(\ell_{adv}\), under the distance metric \(D(\mathbf{x}^{\prime}_{j},\mathbf{x}_{j})<\epsilon\), given the natural sample \(\mathbf{x}_{j}\). The outer minimization seeks the optimal gradient \(\theta^{*}\) that yields the global minimum empirical loss. In their work, Madry _et al._ iteratively applies the PGD algorithm during training to search for strong adversarial samples to maximize \(\ell_{adv}\). This helps the model yield improved robustness against PGD and FGSM attacks. Adversarial training with PGD is considered one of the strongest defense methods, Madry et al. (2017).
### Graph-based Semi-supervised Learning
Graph-based semi-supervised learning uses labeled and unlabeled data to train DL models, Zhou et al. (2005); Belkin et al. (2006); Weston et al. (2008); Agarwal et al. (2009); Jacob et al. (2014); Bui et al. (2018). This approach uses a small amount of labeled data and a large amount of unlabeled data to learn the graph structure of the given data. A given graph can be represented as \(G=(V,E,W)\), where \(V\) indicates data points as vertices, \(E\) represents edges between data points, and \(W\) is the edge weight matrix. The edges between the vertices are created on the basis of a similarity metric between the data points. Graph-based semi-supervised learning aims to use the graph structure and the labeled data to learn the label for the unlabeled data points. This technique is typically done by propagating the labels from the labeled data points to the unlabeled data points through the similarity graph of the entire data, Zhu et al. (2005); Zhou et al. (2004); Yang et al. (2016); Bui et al. (2018).
In graph-based semi-supervised learning, label propagation is often used to classify nodes in a graph when only a few nodes have been labeled. This method starts with the labeled nodes and propagates their labels to their neighbors. The labels are then iteratively propagated repeatedly until the mapping function converges and the entire graph is labeled. As shown in Yang et al. (2016), the loss function of the graph-based semi-supervised learning can be represented as:
\[\sum_{i=1}^{L}\ell(\theta,\mathbf{x}_{i},y_{i})+\lambda\sum_{i,j}w_{i,j}\|h(x_{i}) -h(x_{j})\|^{2}, \tag{5}\]
where the first term represents the standard supervised loss while the second term represents the penalty of the neighborhood loss. Note that \(w_{ij}\) represents the similarity between different instances and \(\lambda\) controls the contribution of neighborhood regularization. When \(\lambda=0\), the loss term becomes the standard supervised loss. The penalty amount depends on the similarity between instance \(\mathbf{x_{i}}\) and its neighbors. Also, \(h\) represents a lookup table that contains all samples and similarity weights. It can be obtained with a closed-form solution, according to Zhou et al. (2004).
Weston et al. (2008) proposed embedding samples instead of using lookup tables by extending the regularization term in the Eq. 5. The regularization term becomes \(\lambda\sum_{i,j}\alpha_{i,j}\|g_{\theta}(x_{i})-g_{\theta}(x_{j})\|^{2}\), where \(g_{\theta}(\cdot)\) indicates the
embedding of samples generated by a neural network. Transforming the regularization term by transitioning from \(f\) to \(g\) leverages stronger constraints on the neural network, according to Yang et al. (2016).
Here, we extend Eq. 5 by replacing the lookup table term with the embedding term in the regularization component and defining a general neighbor similarity metric. This approach yields
\[\sum_{i=1}^{L}\ell(\theta,\mathbf{x}_{i},y_{i})+\lambda\sum_{i=1}w_{i}\mathsf{D}(g_ {\theta}(x_{i}),\mathsf{N}(g_{\theta}(x_{i}))). \tag{6}\]
In the Eq. 6, \(\mathsf{N}\) represents the neighbors of a given sample \(\mathbf{x_{i}}\), \(\mathbf{w_{i}}\) represents the edge weight between sample \(\mathbf{x_{i}}\) and its neighbors, and \(\mathsf{D}\) represents the distance metric between embeddings.
## 3 Graph Regularized Adversarial Training
In this Section, we integrate the adversarial learning process, Madry et al. (2017), into the graph-based semi-supervised learning framework, Weston et al. (2008); Yang et al. (2016); Bui et al. (2018), to take advantage of both adversarial training and semi-supervised learning. The main framework of GReAT is shown in Fig. 1.
The feature space encompasses both the labeled original training samples and the adversarial examples that are created through adversarial regularization and neighbor similarities. This feature space is crucial for identifying the nearest-neighbor samples. When we feed a batch of input samples to the neural network, it includes not only the original samples but also their corresponding neighbors. In the final layer of the neural network, we derive a sample embedding for each of these samples. The training objective for regularization includes two components: the supervised loss and the label propagation loss, which accounts for neighbor-related loss. In other words, it considers the impact of neighbors on the overall training objective.
\[\mathcal{L}_{GReAT}=\mathcal{L}_{adv}+\lambda\mathcal{L}_{N}, \tag{7}\]
where \(\mathcal{L}_{adv}\) represents the supervised loss from training labels of clean and adversarially perturbed samples, and \(\mathcal{L}_{N}\) represents the neighbor loss, which includes the loss from the clean training samples and adversarially perturbed samples.
We consider similar instances as neighbors of sample \(\mathbf{x}\) in the graph regularized semi-supervised learning case. In our case, we consider an adversarial example, \(\mathbf{x^{\prime}}\), in addition to a neighbor of sample \(\mathbf{x}\). Next, we extend Eq. 6 by including adversarial and adversarial neighbor losses as new regularizer terms. Formally, the unpacked form of Eq. 7 is:
\[\begin{split}\mathcal{L}_{GReAT}(\Theta)=&\sum_{i=1} ^{L}\ell(\theta,\mathbf{x}_{i},y_{i})\\ &+\alpha_{11}\sum_{i=1}^{L}\ell_{N}(y_{i},x_{i},\mathsf{N}(x_{i}) )\\ &+\alpha_{22}\sum_{i=1}^{L}\ell_{N}(y_{i},x_{i}^{\prime},\mathsf{ N}(x_{i}^{\prime}))\\ &+\alpha_{3}\sum_{i=1}^{L}\ell(\theta,\mathsf{N}_{adv}(x_{i}),y_ {i}),\end{split} \tag{8}\]
In the above equation, \(\mathsf{N}(\mathbf{x})\) represents neighbors of sample \(\mathbf{x}\). The neigh
Figure 1: GReAT framework.
bors could be clean or adversarially perturbed samples. Thus, \(\mathsf{N}(\mathbf{x^{\prime}})\) represents the neighbors of adversarial example \(\mathbf{x^{\prime}}\). Its neighbors could be clean samples and adversarial examples. Specifically, \(\mathsf{N}_{adv}(\mathbf{x})\) represents the adversarial neighbor of the sample \(\mathbf{x}\). The adversarial neighbors have the same label as the original sample \(\mathbf{x}\) similar to the standard adversarial training.
We obtain adversarial examples using the FGSM, Goodfellow et al. (2015), and PGD, Madry et al. (2017), methods. Note that the \(\alpha_{11},\alpha_{22},\alpha_{3}\) hyperparameters determine the contributions of different neighborhood types, which are shown in Fig. 6 as sub-graph types. The \(\alpha\) terms can be tuned according to the performance on clean and adversarially perturbed testing inputs. Furthermore, a detailed explanation of the embedding of neighbor nodes and graph construction between clean and adversarial samples is shown in Section 3.2.
### Related Previous Methods
Creating graph embeddings using Deep Neural Networks (DNNs) is a well-known method, Weston et al. (2008). Furthermore, the propagation of unlabeled graph embeddings using transductive methods, Zhu et al. (2005); Yang et al. (2016), are efficient and well studied. Neural Graph Machines (NGMs), Bui et al. (2018), are a commonly used example of label propagation and graph embeddings, along with supervised learning. The proposed training objective takes advantage of these frameworks and provides more robust image classifiers. Therefore, the training objective can be considered a combination of nonlinear label propagation and a graph-regularized version of adversarial training.
### Graph Construction
We use a pre-trained model, DenseNet121, Huang et al. (2016), to generate image embeddings as a feature extractor. The pre-trained model has weights obtained by training on ImageNet. The pre-trained model is more complex than the model we use to train and test the proposed regularization algorithm in our simulations. Numerous studies show that complex DNNs are better feature extractors than shallow networks, Krizhevsky et al. (2009); Mhaskar et al. (2017). Another significant advantage of using larger pre-trained models to obtain embeddings is to reduce computational costs. The process of creating embeddings is illustrated in Fig. 2.
Generating appropriate inputs to the neural network plays a significant role in yielding correct predictions. As noted above, we use a pre-trained DL
model to create node embeddings. We generate embeddings of clean samples and adversarial examples to obtain the neighborhood relationship between clean and adversarially perturbed examples. The overall graph construction process is shown in Fig. 3. Similarly, one-dimensional embedding is a crucial process for measuring sample similarities. Since the size of the embeddings is the same, we can visualize clean and adversarial samples in the embedding space using the van der Maaten and Hinton (2008) t-distributed stochastic neighbor embedding (t-SNE) method.
In Figure 4, we utilize t-SNE (t-Distributed Stochastic Neighbor Embedding) to create a visual representation of the validation data set obtained from TensorFlow's flower dataset. The primary purpose of this visualization is to provide insight into the distribution and relationships among the data points.The left panel of the figure is dedicated to displaying all the samples that constitute the validation data set. It is important to note that this data set encompasses samples belonging to five distinct classes. Each class represents a specific category or type of data within the dataset, and the samples within each class share certain common characteristics or features.
Figure 3: Densenet121 for creating image embeddings.
Figure 2: Densenet121 for creating image embeddings.
By visually representing the data set using t-SNE, we aim to reduce the dimensionality of the data while preserving its inherent structure and relationships. This reduction in dimensionality allows us to plot the data points in a two-dimensional space, making it easier to discern patterns, clusters, and similarities among the samples. Visualization is a valuable tool for gaining a deeper understanding of how the different classes are distributed and how they relate to each other within the validation data set. The figure panel on the right shows how adversarial examples are distributed around clean samples. The visualization of the embeddings highlights a strong connection between individual samples and their respective neighbors, effectively distinguishing between various classes.We use the strong neighborhood connections to learn better and create more robust models. Consequently, we use these node embeddings as input features to the neural network by creating an adjacency embedding matrix, as shown in Fig. 5. In particular, we use the label propagation method, Lee (2013), to propagate the information from the labeled data points to the unlabeled instances, which improves the model's performance on both clean and adversarial examples.
Sample sub-graph of training instances are shown in Fig. 5. These examples might be labeled or unlabeled since we generate embeddings for each sample and create the graph based on the simi
Figure 4: Samples in embedding space. The left figure represents all the samples in the validation data set. The right figure shows some clean samples and their adversarial neighbors.
A visual example of a sub-graph is demonstrated in Fig. 6. Three examples of sub-graph types are shown. The first column of the figure shows labeled samples. The second and third columns show the labeled samples' two most similar neighbors. We associate these samples and their neighbors with the sub-graph examples, as noted in Fig. 5. For instance, the first row of images in Fig. 6 represents Fig. 5-D, since the labeled sample is clean and its first and second neighbors are adversarially perturbed samples. The second row of Fig. 6 represents Fig. 5-F, since the labeled sample is adversarially perturbed and its neighbors are one clean sample and one adversarially perturbed sample. Finally, the third row represents Fig. 5-C, since the labeled instance is clean and its neighbors are clean and adversarially perturbed samples. However, a labeled sample may have one neighbor or none, forinstance, if the similarity measure of embeddings cannot pass the similarity threshold. In that case, the labeled sample goes through the neural network as regular input without graph regularization.
### Optimization
The training process begins with a minibatch of samples and their edges. Instead of using all available data at once, the training process randomly selects a subset of edges for each iteration. This helps introduce randomness
Figure 5: A: A sample with two neighbors showing their sub-graph and feature inputs. Blue nodes represent clean samples, and red nodes represent adversarially perturbed samples. B,C,D,E,F, and G show how clean samples and adversarial examples may link on the graph structure.
and variability into the training process, which benefits the learning process. Additionally, to further improve the training process, selected edges are chosen from a nearby region to increase the likelihood of some edges. This can help reduce noise and speed up the learning process. The Stochastic Gradient Descent (SGD) algorithm updates the network weights utilizing the cross-entropy loss function.
Note that the overall open form of the cost function in the following is equivalent to Eq. 6. The cost function incorporates the cost of supervised
Figure 6: From left to right: labeled sample, the first neighbour and the second neighbour. The samples are taken from Tensorflow’s flowers data set.
loss from labeled clean and labeled adversarial samples and neighbor losses. That is, the cost includes different neighbor types/edges, as shown in Fig. 5. Formally,
\[\mathcal{L}_{GReAT}(\Theta)= \sum_{i=1}^{L}\ell(\theta,\mathbf{x}_{i},y_{i})+\sum_{i=1}^{L}\ell( \theta,\mathbf{x^{\prime}}_{i},y_{i}),\] \[+\lambda\Bigg{[}\alpha_{11}\sum_{i=1}^{L}w_{a}\mathsf{D}(g_{ \theta}(x_{i}),\mathsf{N}(g_{\theta}(x_{i})))\] \[+\alpha_{12}\sum_{i=1}^{L}w_{b}\mathsf{D}(g_{\theta}(x_{i}), \mathsf{N}(g_{\theta}(x^{\prime}_{i}))) \tag{9}\] \[+\alpha_{21}\sum_{i=1}^{L}w_{c}\mathsf{D}(g_{\theta}(x^{\prime}_{ i}),\mathsf{N}(g_{\theta}(x_{i})))\] \[+\alpha_{22}\sum_{i=1}^{L}w_{d}\mathsf{D}(g_{\theta}(x^{\prime}_{ i}),\mathsf{N}(g_{\theta}(x^{\prime}_{i})))\Bigg{]},\]
where \(w_{a},w_{b},w_{c},w_{d}\) represent the similarity weights between the samples and their neighbors calculated by cosine similarity measurement.
The similarity weights are (possibly) unique for each sample and its neighbors, with a range of zero to one. A sample and neighbor candidate are dissimilar if the similarity weight is near zero. For calculating the neighbor loss, we use \(\mathsf{D}\) as it represents the distance between a sample and its neighbor, where we use the norms \(L_{1}\) and \(L_{2}\) as distance metrics for calculating the neighbor distance. The hyperparameters \(\alpha_{11},\alpha_{12},\alpha_{21}\) and \(\alpha_{22}\) control the contributions of the different types of edges. For simulations, we set all \(\alpha\)s as one to include all edges in the training. The new objective function makes SGD possible with clean and adversarial samples and their neighbors in mini-batch training.
### Complexity Analysis
The proposed method incorporates graph regularization into its training process, applying it to both labeled and unlabeled data instances within the graph, which includes benign and adversarial examples. The computational complexity of each training epoch is dependent on the number of edges in the graph, denoted as \(E_{c}\). To elucidate the complexity of the training, we can express it as \(O(count(E_{c}))\). It is important to note that the quantity \(E_{c}\) is
directly proportional to several factors. Firstly, it scales with the number of neighboring data points taken into consideration, signifying that more neighbors will increase the complexity. Second, it is influenced by a parameter that determines the selection of the most similar neighbors, further impacting the computational load. Moreover, the step size used for adversarial regularization is tied to \(E_{c}\).
For instance, if we opt for a single-step adversarial regularization method like FGSM, each clear example will have only one adversarial neighbor. However, when employing a multi-step adversarial regularization approach, such as PGD, the number of edges substantially increases, as adversarial examples are generated at each step. This type of PGD-based adversarial regularization tends to enhance the model's robustness compared to FGSM regularization. Nevertheless, it introduces a trade-off between robustness and training time. Training a model with PGD regularization demands more computational resources because of the increase in the number of edges and samples involved. This trade-off is essential when choosing the appropriate adversarial regularization method for a given application. For our simulations, we used FGSM to create adversarial examples for training and testing stages to reduce computational time.
## 4 Experiments
We conducted experiments to show the performance of the proposed GReAT method. Each experiment is carried out on clean data sets with a fixed number of epochs and training steps. The typical hyperparameters are fixed to ensure fair comparisons with other state-of-the-art methods. The base CNN model is trained and then regularized with the proposed loss function. We use the copy of the base model to obtain the regularized model each time to preserve the original base model. Once the models are trained, we test each model on the same clean and adversarially perturbed test data to measure the generalization and robustness performances.
### Datasets
The CIFAR10, Krizhevsky et al. (2009), and flowers, TensorFlow (2019), datasets are used to evaluate the methods. The Cifar10 dataset consists of 60,000 images with ten classes, and each class contains a fixed size of \(32\times 32\) three-channel RGB images. The flowers dataset contains 3,670 images with five classes, each containing high-resolution RGB images. The image sizes are
not fixed in the flowers dataset. Resizing is, therefore, required as one of the pre-processing steps. The image distributions of each class are balanced for both data sets. We split the dataset 80%-10%-10%, as train-validation-test data sets, respectively. In the simulations, we reduce the training set to 20%, and 50%, to observe the model performances with fewer labeled samples.
### Pre-processing Steps
A few essential pre-processing steps are required to prepare the batches for training. After creating image embeddings, we measure the similarity between each embedding and create training batches based on this similarity metric.
#### 4.2.1 Similarity measure
Identifying the closest neighbors for a given sample requires the measurement of similarity amongst the embeddings. Various metrics are available, including Euclidean distance, cosine similarity, and Structural Similarity Index Measure (SSIM). We have opted for cosine similarity due to its proven effectiveness in quantifying the similarity of image embeddings within a multidimensional space. Formally defined, the cosine similarity of two vectors can be expressed as follows:
\[Cos(x_{i},x_{j})=\frac{x_{i}\cdot x_{j}}{\|x_{i}\|*\|x_{j}\|}. \tag{10}\]
The similarity weights are between 0 and 1, depending on the angle between the two vectors. Two overlapping embeddings have weight 1 when the angle between the two embeddings is zero. Conversely, if two embedding vectors are orthogonal, they are dissimilar, and the similarity weight is zero. Once all similarity weights are calculated, the most similar neighbors are identified as candidates for regularization. We pre-define a similarity threshold to consider those neighbors. Embeddings that fall under the threshold are not considered as neighboring candidates on the graph.
#### 4.2.2 Training batches
Once the graph structure is created with clean samples and adversarial examples, we generate training batches that are fed into the neural network model. Each training batch consists of samples, their neighbors, and adversarial neighbors. The number of neighbors is predetermined, although other strategies can be utilized. In our simulations, we pick the number of neighbors as two.
### Network
The base training model consists of four convolution layers and max-pooling layers. Dropout and batch normalization layers in the base model are deployed to minimize overfitting. We finalize the base model with fully connected and soft-max layers. The Adam optimizer with a 0.001 learning rate is utilized.
### Results
First, we evaluate accuracy against attack strength by adjusting the magnitude of attack perturbation, providing insights into model performance across diverse attack intensities. Subsequently, experiments are conducted on both clean and adversarially perturbed datasets to gauge model generalization and robustness. We set the perturbation magnitude to 0.2 in these experiments and employ the FGSM attack method.
#### 4.4.1 Results on Flowers Data Set
The above-mentioned flower dataset comprises high-resolution images. The performance of the proposed method, along with other state-of-the-art techniques on the clean dataset, is summarized in Table 1. Similarly, results for the adversarially perturbed dataset are presented in Table 2.
Table 1 indicates that the NSL approach, Bui et al. (2018), yields significantly better performance. This is largely due to its training on clean samples and their respective neighbors. For a comprehensive evaluation, we introduced \(GReAT_{adv}\), which trains only adversarial samples and their neighbors to assess the impact of adversarial regularization. Given its exclusive focus on adversarial samples during training, this model faces challenges when tested on clean datasets. However, the proposed GReAT method consistently yields positive results.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{4}{c}{**Model Accuracy**} \\ \hline
**train set(\%)** & **Base** & **NSL** & **AT** & \(GREAT_{adv}\) & **GREAT** \\ \hline
20\% & 0.548 & **0.553** & 0.525 & 0.207 & 0.550 \\
50\% & 0.564 & 0.608 & 0.575 & 0.245 & **0.659** \\
80\% & 0.597 & 0.613 & 0.583 & 0.277 & **0.671** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Clean Accuracy Results for Flowers Data Set
Finally, the models were evaluated using adversarially perturbed test data from the flowers data set, and the results are shown in Table 2. As the table shows, models not trained on adversarial examples, particularly the base model, exhibit diminished performance. Although the NSL model is trained only on clean samples, it still exhibits some robustness to adversarially perturbed test samples. The proposed GREAT model outperforms the other models and provides a balanced result for clean and adversarially perturbed testing data. \(GReAT_{adv}\) gives the highest accuracy for perturbed test samples. This experiment shows how graph regularization with adversarial training is effective on both adversarially perturbed and clean testing samples.
#### 4.4.2 Accuracy vs attack strength
We evaluate the robustness of the proposed methods by adjusting the step size of the perturbations, which provides insights into the model performance under varying attack strengths. As illustrated in Fig. 7, the accuracy of the base model declines sharply with increasing attack intensity. Although the model trained with standard adversarial training also exhibits a notable decrease in confidence, the proposed GREAT model consistently displays significant robustness, retaining its efficacy even under substantial perturbations.
The perturbation sizes for adversarial training samples are 20 for \(L_{2}\) norm and 0.2 for \(L_{\infty}\) constrained models, respectively. As the figure shows, the models trained on adversarial examples exhibit peak performance around these specific training perturbation sizes (\(\epsilon\)). Ideally, we aim to train a model with varying perturbation sizes to enhance its robustness against adversarial
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Attack Norm**} & \multirow{2}{*}{**train set(\%)**} & \multicolumn{5}{c}{**Model Accuracy**} \\ \cline{3-6} & & **Base** & **NSL** & **AT** & \(GReAT_{adv}\) & **GReAT** \\ \hline L2 & 20\% & 0.011 & 0.011 & 0.450 & **0.836** & 0.605 \\ & 50\% & 0.014 & 0.024 & 0.496 & **0.854** & 0.647 \\ & 80\% & 0.016 & 0.063 & 0.526 & **0.891** & 0.668 \\ \hline Linf & 20\% & 0.001 & 0.000 & 0.727 & **0.924** & 0.883 \\ & 50\% & 0.002 & 0.001 & 0.753 & **0.942** & 0.892 \\ & 80\% & 0.005 & 0.005 & 0.819 & **0.968** & 0.931 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Robust Accuracy Results for Flowers Data Set
attacks, given that attack perturbation sizes might differ from the training perturbation size. However, for the sake of simulation simplicity, we utilize only a single-step perturbation size for the model trainings.
#### 4.4.3 Results on Cifar10 Data Set
Next, we evaluate the performance on the Cifar10 dataset, which is made up of images with lower resolution with more classes and quantities. Tables 3 and 4 detail the performance on the clean and adversarially perturbed image datasets, respectively. The proposed GReAT methodology yields balanced results, indicating that GReAT demonstrates both generalization and robust performance. In stark contrast, alternative methodologies are significantly impacted by adversarial attacks. These simulation results underscore that regularizing the deep learning model with both benign and adversarial examples results in improved generalization _and_ robustness.
Compared to other methods, the proposed method shows outstanding performance on the benign testing set. This is because more training data
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{4}{c}{**Model Accuracy**} \\ \hline
**train set(\%)** & **Base** & **NSL** & **AT** & \(GReAT_{adv}\) & **GReAT** \\ \hline
20\% & 0.522 & 0.523 & 0.296 & 0.227 & **0.560** \\
50\% & 0.612 & 0.648 & 0.437 & 0.285 & **0.649** \\
80\% & 0.701 & 0.713 & 0.688 & 0.327 & **0.731** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Clean Accuracy Results for Cifar10 Data Set.
Figure 7: Robustness test with increasing perturbation size.
and classes provide more underlying information between classes with graph regularization.
Table 4 provides the performance of each model on adversarially perturbed testing data. As detailed in the table, the proposed method provides superior results to the NSL and standard adversarial training models. We observe similar results for \(GReAT_{adv}\) in the Cifar10 data set, which shows the learning ability of GReAT over adversarial deceptive perturbations.
## 5 Conclusion
In this paper, we have presented a Graph Regularized Adversarial Training Method (GReAT), designed to enhance the robustness of classifiers. By leveraging classical adversarial training, the graph regularization technique enhances the robustness of deep learning classifiers. This technique employs graph-based constraints to regularize the training process, thereby bolstering the model's capacity to withstand adversarial attacks. Integrating these constraints enables the model to learn more robust features and be less prone to manipulation via adversarial examples. This strategy has demonstrated significant potential in enhancing the robustness and generalization of deep learning classifiers, indicating that it could be a valuable tool in adversarial training.
|
2304.08955 | Well-posedness for moving interfaces in anisotropic plasmas | We study the local-in-time well-posedness for an interface that separates an
anisotropic plasma from a vacuum. The plasma flow is governed by the ideal
Chew-Goldberger-Low (CGL) equations, which are the simplest collisionless fluid
model with anisotropic pressure. The vacuum magnetic and electric fields are
supposed to satisfy the pre-Maxwell equations. The plasma and vacuum magnetic
fields are tangential to the interface. This represents a nonlinear
hyperbolic-elliptic coupled problem with a characteristic free boundary. By a
suitable symmetrization of the linearized CGL equations we reduce the
linearized free boundary problem to a problem analogous to that in isotropic
magnetohydrodynamics (MHD). This enables us to prove the local existence and
uniqueness of solutions to the nonlinear free boundary problem under the same
non-collinearity condition for the plasma and vacuum magnetic fields on the
initial interface required by Secchi and Trakhinin (Nonlinearity 27:105-169,
2014) in isotropic MHD. | Yuri Trakhinin | 2023-04-18T12:46:30Z | http://arxiv.org/abs/2304.08955v1 | # Well-posedness for moving interfaces in anisotropic plasmas
# Well-posedness for moving interfaces in anisotropic plasmas
Yuri Trakhinin
This research was carried out at the Sobolev Institute of Mathematics, under a state contract (project no. FWNF-2022-0008).
**Abstract**: We study the local-in-time well-posedness for an interface that separates an anisotropic plasma from a vacuum. The plasma flow is governed by the ideal Chew-Goldberger-Low (CGL) equations, which are the simplest collisionless fluid model with anisotropic pressure. The vacuum magnetic and electric fields are supposed to satisfy the pre-Maxwell equations. The plasma and vacuum magnetic fields are tangential to the interface. This represents a nonlinear hyperbolic-elliptic coupled problem with a characteristic free boundary. By a suitable symmetrization of the linearized CGL equations we reduce the linearized free boundary problem to a problem analogous to that in isotropic magnetohydrodynamics (MHD). This enables us to prove the local existence and uniqueness of solutions to the nonlinear free boundary problem under the same non-collinearity condition for the plasma and vacuum magnetic fields on the initial interface required by Secchi and Trakhinin (Nonlinearity 27:105-169, 2014) in isotropic MHD.
**Keywords**: ideal CGL equations, pre-Maxwell equations, plasma-vacuum interface, free boundary problem, well-posedness
**Mathematics Subject Classification (2020)**: 76W05, 35L65, 35R35
## 1 Introduction
Let \(\Omega\subset\mathbb{R}^{3}\) be the reference domain occupied by a collisionless plasma and vacuum. In the plasma region \(\Omega^{+}(t)\subset\Omega\), the motion is governed by the following ideal Chew-Goldberger-Low (CGL) equations:
\[\partial_{t}\rho+\nabla\cdot(\rho v)=0, \tag{1.1a}\] \[\partial_{t}(\rho v)+\nabla\cdot(\rho v\otimes v+(\tau-1)H \otimes H)+\nabla q=0,\] (1.1b) \[\partial_{t}H-\nabla\times(v\times H)=0,\] (1.1c) \[\frac{d}{dt}\left(\frac{p_{\parallel}|H|^{2}}{\rho^{3}}\right)=0,\] (1.1d) \[\frac{d}{dt}\left(\frac{p_{\perp}}{\rho|H|}\right)=0, \tag{1.1e}\]
together with the divergence-free equation
\[\nabla\cdot H=0. \tag{1.2}\]
Here density \(\rho\), fluid velocity \(v=(v_{1},v_{2},v_{3})^{\mathsf{T}}\), magnetic field \(H=(H_{1},H_{2},H_{3})^{\mathsf{T}}\), parallel pressure \(p_{\parallel}\), and perpendicular pressure \(p_{\perp}\) are unknown functions of time \(t\) and space variable \(x=(x_{1},x_{2},x_{3})\). We denote by
\[\tau=\frac{p_{\parallel}-p_{\perp}}{|H|^{2}}\quad\text{and}\quad q=p_{\perp} +\tfrac{1}{2}|H|^{2}\]
the anisotropy factor and the (perpendicular) total pressure respectively, and \(d/dt=\partial_{t}+(v\cdot\nabla)\) represents the material derivative. The system of equations (1.1) is closed for the primary unknown \(U:=(\rho,v,H,p_{\parallel},p_{\perp})^{\mathsf{T}}\in\mathbb{R}^{9}\) whereas (1.2) is the divergence constraint on the initial data for \(U\).
The CGL equations named after Chew et al. [6] are the simplest fluid model describing the motion of a collisionless plasma, i.e., plasma in which the mean free path for particle collisions is large
compared to the Larmor radius. As we can see in (1.1b), in the CGI model the pressure tensor has the anisotropic (gyrotropic) form \(\mathfrak{p}=p_{\perp}I+\tau H\otimes H\), where \(I\) is the unit matrix of order 3. The CGL model is a gyrotropic approximation in the sense that in general the pressure tensor has also a non-gyrotropic part \(\varPi\) connected with the finite Larmor radius (FLR) corrections: \(\mathfrak{p}=p_{\perp}I+\tau H\otimes H+\varPi\). The CGL model together with fluid models taking into account effects introduced by FLR corrections are discussed in full details in the nice survey [15]. As was noted in [15], in recent years there has been an increased emphasis on pressure/temperature anisotropy effects, in particular, on the CGL model (see, e.g., [7, 31] and references therein) because the classical magnetohydrodynamic (MHD) fluid description does not satisfy all the needs of astrophysical applications requiring correct modelling of collisionless plasmas.
It is easy to see that in system (1.1) the classical CGL double adiabatic equations (1.1d) and (1.1e) can be equivalently replaced with the conservation laws
\[\begin{cases}\partial_{t}(\rho s_{\parallel})+\nabla\cdot(\rho s_{\parallel} v)=0,\\ \partial_{t}(\rho s_{\perp})+\nabla\cdot(\rho s_{\perp}v)=0,\end{cases} \tag{1.3}\]
where
\[s_{\parallel}=\frac{1}{3}\ln\left(\frac{p_{\parallel}|H|^{2}}{\rho^{3}} \right)\quad\text{and}\quad s_{\perp}=\frac{2}{3}\ln\left(\frac{p_{\perp}}{ \rho|H|}\right)\]
are so-called parallel and perpendicular entropies (see, e.g., [15]). That is, equations (1.1a)-(1.1c), (1.3) form the system of 9 conservation laws. Mathematically, in (1.3) instead of \(s_{\parallel}\) and \(s_{\perp}\) we could, of course, use any smooth functions of \(p_{\parallel}|H|^{2}/\rho^{3}\) and \(p_{\perp}/(\rho|H|)\) respectively.
Using (1.2) and the additional 10th conservation law (energy conservation)
\[\partial_{t}(\rho E+\tfrac{1}{2}|H|^{2})+\nabla\cdot(\rho Ev+p_{\perp}v+H \times(v\times H)+\tau(v\cdot H)H)=0\]
which holds on smooth solutions of system (1.1) (see, e.g., [15]), and following Godunov's symmetrization procedure [8], Blokhin and Krymskikh [2] have symmetrized the conservation laws (1.1a)-(1.1c), (1.3) in terms of a vector of canonical variables \(Q=Q(U)\):
\[A^{0}(Q)\partial_{t}Q+\sum_{j=1}^{3}A^{j}(Q)\partial_{j}Q=0.\]
Here \(E=\mathfrak{\epsilon}+\tfrac{1}{2}|v|^{2}\) is the total energy,
\[\mathfrak{\epsilon}=\frac{p_{\perp}}{\rho}+\frac{p_{\parallel}}{2\rho}\]
is the specific internal energy,
\[Q=\left(\mathfrak{\epsilon}+\tfrac{p_{\parallel}}{\rho}-\tfrac{1}{2}|v|^{2}-T _{\parallel}s_{\parallel}-T_{\perp}s_{\perp}\,,v\,,(1-\tau)H\,,T_{\parallel} \,,T_{\perp}\right)^{\mathsf{T}},\]
\(T_{\parallel}=p_{\parallel}/(\rho R)\) and \(T_{\perp}=p_{\perp}/(\rho R)\) are the parallel and perpendicular temperatures, \(R\) is the gas constant, and the symmetric matrices \(A^{\alpha}(Q)\) (\(\alpha=\overline{0,3}\)) are written in [3].
From \(Q\) we can return to the vector \(U\) of primary unknowns keeping the symmetry property (see [3]), i.e., we rewrite the CGL equations as the symmetric system
\[A^{+}_{0}(U)\partial_{t}U+\sum_{j=1}^{3}A^{+}_{j}(U)\partial_{j}U=0\qquad\text {in }\Omega^{+}(t), \tag{1.4}\]
The symmetric matrices \(A^{+}_{\alpha}(U)\), which are rather cumbersome, are written in [3]. The symmetric system (1.4) is hyperbolic if \(A_{0}>0\). As was shown in [2] (see also [3]), the hyperbolicity condition \(A^{+}_{0}>0\) holds provided that
\[\rho>0,\quad-1/a_{p}<\tau<1, \tag{1.5}\]
where \(a_{p}=T_{\perp}/T_{\parallel}=p_{\perp}/p_{\parallel}\) is the temperature anisotropy ratio [15]. Moreover, we by default assume that \(p_{\parallel}>0\), \(p_{\perp}>0\), and the magnetic field \(H\neq 0\).
In the vacuum region \(\Omega^{-}(t)\subset\Omega\), for the vacuum magnetic field \(h=(h_{1},h_{2},h_{3})^{\mathsf{T}}\) and the vacuum electric field \(e=(e_{1},e_{2},e_{3})^{\mathsf{T}}\), we consider the pre-Maxwell equations
\[\nabla\times h=0, \nabla\cdot h=0, \tag{1.6}\] \[\nabla\times e=-\partial_{t}h, \nabla\cdot e=0, \tag{1.7}\]
where the displacement current is neglected from Maxwell's equations in vacuum as in non-relativistic CGL system. The vacuum electric field \(e\) in (1.6)-(1.7) is a secondary variable, so that the dynamics
in \(\Omega^{-}(t)\) can be described by the elliptic (div-curl) system (1.6), or equivalently,
\[\sum_{j=1}^{3}A_{j}^{-}\partial_{j}h=0\qquad\text{in }\Omega^{-}(t), \tag{1.8}\]
where the constant matrices \(A_{1}^{-}\), \(A_{2}^{-}\), and \(A_{3}^{-}\) are defined by
\[A_{1}^{-}:=\begin{pmatrix}0&0&0\\ 0&0&-1\\ 0&1&0\\ 1&0&0\end{pmatrix},\quad A_{2}^{-}:=\begin{pmatrix}0&0&1\\ 0&0&0\\ -1&0&0\\ 0&1&0\end{pmatrix},\quad A_{3}^{-}:=\begin{pmatrix}0&-1&0\\ 1&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix}.\]
As in [23], for technical simplicity we assume that \(\Omega^{\pm}(t)=\{x\in\Omega:x_{1}\gtrless\varphi(t,x^{\prime})\}\) and the plasma-vacuum interface is given by the form of a graph
\[\Sigma(t):=\{x\in\Omega:x_{1}=\varphi(t,x^{\prime})\}\quad\text{with }\ x^{\prime}=(x_{2},x_{3}),\]
where the interface function \(\varphi\) is to be determined. As in [23], we focus on the case of \(\Omega=(-1,1)\times\mathbb{T}^{2}\) with boundaries \(\Sigma^{\pm}:=\{\pm 1\}\times\mathbb{T}^{2}\), where \(\mathbb{T}^{2}\) denotes the 2-torus and can be thought of as the unit square with periodic boundary conditions. For the plasma-vacuum system the boundary conditions have the same form as in classical (isotropic) MHD [23]:
\[q-\frac{1}{2}|h|^{2}=0,\quad\partial_{t}\varphi=v\cdot N \text{on }\Sigma(t), \tag{1.9a}\] \[H\cdot N=0,\quad h\cdot N=0 \text{on }\Sigma(t),\] (1.9b) \[H_{1}=0,\quad v_{1}=0 \text{on }\Sigma^{+},\] (1.9c) \[h\times\mathbf{e}_{1}=\boldsymbol{j}_{c} \text{on }\Sigma^{-}, \tag{1.9d}\]
where \(N:=(1,-\partial_{2}\varphi,-\partial_{3}\varphi)^{\mathsf{T}}\) is the normal to \(\Sigma(t)\) and \(\mathbf{e}_{j}:=(\delta_{1j},\delta_{2j},\delta_{3j})^{\mathsf{T}}\), \(j=1,2,3\) with \(\delta_{ij}\) being the Kronecker delta. The vector function \(\boldsymbol{j}_{c}\) represents a given surface current that forces oscillations onto the plasma-vacuum system. This model can be exploited for the analysis of waves in astrophysical plasmas, e.g., by mimicking the effects of excitation of MHD waves by an external plasma by means of a localized set of "coils", when the response of the internal plasma is the main issue (e.g., in the problem of sunspot oscillations excited by sound waves in the photosphere; see [9, SS4.6] for a thorough discussion of the condition (1.9d)).
The second condition in (1.9a) means that the interface moves with the velocity of plasma particles. Conditions (1.9b) state that the plasma and vacuum magnetic fields are tangential to the interface. If on both sides of the interface we have plasmas, these conditions hold on a tangential discontinuity (current-vortex sheet). For the tangential discontinuity, the Rankine-Hugoniot jump conditions in anisotropic CGL plasmas [3, 16] imply the jump condition \([q]=q^{+}|_{\Sigma}-g^{-}|_{\Sigma}=0\) for the perpendicular total pressure. That is, the first condition in (1.9a) appears as the limiting case of the jump condition \([q]=0\) when from one side of the discontinuity we have vacuum: \(q^{-}=\frac{1}{2}|h|^{2}\). In other words, as in isotropic MHD, the first condition in (1.9a) comes from the balance of the normal stresses at the interface. At last, conditions (1.9c) are the standard perfectly conducting wall and impermeable conditions.
We supplement (1.4) and (1.8)-(1.9) with the initial conditions
\[\varphi|_{t=0}=\varphi_{0},\qquad U|_{t=0}=U_{0}:=(\rho_{0},v_{0},H_{0},p_{ \hat{1}0},p_{\perp 0})^{\mathsf{T}}, \tag{1.10}\]
where \(\|\varphi_{0}\|_{L^{\infty}(\mathbb{T}^{2})}<1\). Note that the vacuum magnetic field \(h\in\mathbb{R}^{3}\) can be uniquely determined from the elliptic problem consisting of (1.8), the second condition in (1.9b), and (1.9d) when the interface function \(\varphi\) is given. It is worth mentioning that system (1.4), (1.8)-(1.10) is a nonlinear hyperbolic-elliptic coupled problem with a characteristic free boundary.
In [27] two different well-posedness conditions we proposed for the linearized plasma-vacuum interface problem in classical ideal compressible MHD. The first one is the non-collinearity condition, stating that the magnetic fields on either side of the interface are not collinear:
\[|H\times h|\geq\kappa>0\quad\text{on }\Sigma(t). \tag{1.11}\]
The second condition
\[\boldsymbol{n}\cdot\nabla(q-\tfrac{1}{2}|h|^{2})\leq-\kappa<0\quad\text{on }\Sigma(t) \tag{1.12}\]
is the MHD counterpart of the Rayleigh-Taylor sign condition, where \(\boldsymbol{n}\) denotes the outward unit normal to the interface \(\Sigma(t)\).
Based on the linear results in [22, 27], Secchi and Trakhinin [23] proved the first local well-posedness theorem for the (nonlinear) plasma-vacuum interface problem in classical MHD under condition (1.11) satisfied at the first moment. But, the proof of the local well-posedness of this problem under condition (1.12) is still an open problem. At the same time, if the surface current \(\mathbf{j}_{\rm c}\equiv 0\), the elliptic subproblem for \(h\) has only zero solution \(h\equiv 0\). For this case local well-posedness was showed by Trakhinin and Wang [28] under the Rayleigh-Taylor-type sign condition (1.12). In MHD, such case corresponds to the interface between a compressible liquid and vacuum because \(h\equiv 0\) implies \(q|_{\Sigma}=0\) (cf. (1.9a)). We note however that this is prohibited for the CGL model because \(H\neq 0\) and \(p_{\perp}>0\). At last, we refer the reader to [10, 11, 12, 14, 19, 24] and [13] respectively for well-posedness and ill-posedness results for plasma-vacuum interfaces in ideal incompressible MHD.
Our main goal in this paper is to extend the well-posedness result in [23] to the CGL model, i.e., to prove the local well-posedness of problem (1.4), (1.8)-(1.10) provided that the initial data satisfy the non-collinearity condition (1.11). Fortunately, we do not need to repeat and adopt to the CGL model all the long mathematical arguments in [22, 23]. Our main idea is finding a suitable symmetrization of the linearized CGL equations which enables us to reduce the linearized free boundary problem to a problem analogous to that in isotropic MHD. We also hope that this symmetrization could be useful for another boundary value problems in the CGL model, e.g., for current-vortex sheets or contact discontinuities [3].
The plan of the rest of this paper is as follows. In Section 2, we reduce the system (1.4), (1.8)-(1.10) to an equivalent fixed-boundary problem and state for it our main theorem on the local well-posedness under the non-collinearity condition (1.11). In Section 3, we write down the linearized problem associated with the fixed-boundary problem from Section 2. In Section 4, we obtain the announced symmetrization of the linearized CGL equations, and Section 5 is devoted to final remarks on the proof of our well-posedness theorem.
## 2 Equivalent fixed-boundary problem and main result
We reformulate the free boundary problem (1.4), (1.8)-(1.10) into an equivalent fixed-boundary problem by introducing \(U_{\sharp}(t,x):=U(t,\Phi(t,x),x^{\prime})\) and \(h_{\sharp}(t,x):=h(t,\Phi(t,x),x^{\prime})\). We choose the lifting function \(\Phi\) as
\[\Phi(t,x):=x_{1}+\chi(x_{1})\varphi(t,x^{\prime}), \tag{2.1}\]
where \(\chi\in C^{\infty}_{0}(-1,1)\) is the cut-off function that satisfies \(\|\chi^{\prime}\|_{L^{\infty}(\mathbb{R})}<4/(\|\varphi_{0}\|_{L^{\infty}( \mathbb{T}^{2})}+3)\) and equals to \(1\) on a small neighborhood of the origin. See [23] for another change of variables, which can gain one half derivative for \(\varphi\). Here, as in [29] and unlike [23], we use the change of variables (2.1) for more technical simplicity (we refer to [29] for more details, in particular, for the restriction on \(\chi^{\prime}\), etc.).
After the change of variables (2.1) the free boundary problem (1.4), (1.8)-(1.10) is reduced to the following nonlinear fixed boundary problem:
\[\mathbb{L}_{+}(U,\Phi):=L_{+}(U,\Phi)U=0 \text{in }\Omega^{+}:=(0,1)\times\mathbb{T}^{2}, \tag{2.2a}\] \[\mathbb{L}_{-}(h,\Phi):=L_{-}(\Phi)h=0 \text{in }\Omega^{-}:=(-1,0)\times\mathbb{T}^{2},\] (2.2b) \[\mathbb{B}(U,h,\varphi)=0 \text{on }\Sigma^{3}\times\Sigma^{+}\times\Sigma^{-},\] (2.2c) \[U|_{t=0}=U_{0}, \varphi|_{t=0}=\varphi_{0}, \tag{2.2d}\]
where we have dropped the subscript "\(\sharp\)" for convenience, \(\Sigma:=\{0\}\times\mathbb{T}^{2}\), and
\[L_{+}(U,\Phi):=A^{+}_{0}(U)\partial_{t}+\widetilde{A}^{+}_{1}(U, \Phi)\partial_{1}+A^{+}_{2}(U)\partial_{2}+A^{+}_{3}(U)\partial_{3}, \tag{2.3}\] \[L_{-}(\Phi):=\widetilde{A}^{-}_{1}(\Phi)\partial_{1}+A^{-}_{2} \partial_{2}+A^{-}_{3}\partial_{3},\] (2.4) \[\mathbb{B}(U,h,\varphi):=\begin{pmatrix}\partial_{t}\varphi-v \cdot N\\ q-\frac{1}{2}|h|^{2}\\ h\cdot N\\ v_{1}\\ h\times\mathbf{e}_{1}-\mathbf{j}_{\rm c}\end{pmatrix}, \tag{2.5}\]
with \(\widetilde{A}^{-}_{1}(\Phi):=(A^{-}_{1}-\partial_{2}\Phi A^{-}_{2}-\partial_{3 }\Phi A^{-}_{3})/\partial_{1}\Phi\) and
\[\widetilde{A}^{+}_{1}(U,\Phi):=\frac{1}{\partial_{1}\Phi}\big{(}A^{+}_{1}(U)- \partial_{t}\Phi A^{+}_{0}(U)-\partial_{2}\Phi A^{+}_{2}(U)-\partial_{3}\Phi A ^{+}_{3}(U)\big{)}.\]
In (2.2c), we employ the notation \(\Sigma^{3}\times\Sigma^{+}\times\Sigma^{-}\) to denote that the first three components of this vector equation are taken on \(\Sigma\), the fourth one on \(\Sigma^{+}\), and the fifth one on \(\Sigma^{-}\). The equations for
\(H\) contained in (2.2a) can be written as
\[\mathbb{H}(H,v,\Phi):=(\partial_{t}^{\Phi}+v\cdot\nabla^{\Phi})H-(H\cdot\nabla^{ \Phi})v+H\nabla^{\Phi}\cdot v=0\quad\text{in }\Omega^{+}, \tag{2.6}\]
where
\[\partial_{t}^{\Phi}:=\partial_{t}-\frac{\partial_{t}\Phi}{\partial_{1}\Phi} \partial_{1},\ \nabla^{\Phi}:=(\partial_{1}^{\Phi},\partial_{2}^{\Phi},\partial_{3}^{\Phi})^{ \top},\ \partial_{1}^{\Phi}:=\frac{\partial_{1}}{\partial_{1}\Phi},\ \partial_{j}^{\Phi}:= \partial_{j}-\frac{\partial_{j}\Phi}{\partial_{1}\Phi}\partial_{1}\]
for \(j=2,3\). In the new variables, equation (1.2) and first conditions in (1.9b)-(1.9c) become
\[\nabla^{\Phi}\cdot H=0\quad\text{in }\Omega^{+},\qquad H\cdot N=0\quad\text{on }\Sigma,\qquad H_{1}=0\quad\text{on }\Sigma^{+}, \tag{2.7}\]
which can be regarded as initial constraints, meaning that they hold for \(t>0\) as long as they are satisfied initially (see [26] for the detailed proof).
For formulating of the main result of this paper, which is a local existence and uniqueness theorem for problem (2.2), we need to introduce the anisotropic weighted Sobolev spaces [5, 20]. We denote
\[\mathrm{D}_{*}^{\alpha}:=\partial_{t}^{\alpha_{0}}(\sigma\partial_{1})^{ \alpha_{1}}\partial_{2}^{\alpha_{2}}\partial_{3}^{\alpha_{3}}\partial_{1}^{ \alpha_{4}}\qquad\text{for }\alpha:=(\alpha_{0},\dots,\alpha_{4})\in\mathbb{N}^{5},\]
where \(\sigma=\sigma(x_{1})\) is a positive \(C^{\infty}\)-function on \((0,1)\) such that \(\sigma(x_{1})=x_{1}\) in a neighbourhood of the origin and \(\sigma(x_{1})=1-x_{1}\) in a neighbourhood of \(x_{1}=1\). For \(m\in\mathbb{N}\) and \(I\subset\mathbb{R}\), the anisotropic Sobolev space \(H^{m}_{*}(I\times\Omega^{+})\) is defined as
\[H^{m}_{*}(I\times\Omega^{+}):=\{u\in L^{2}(I\times\Omega^{+}):\,\mathrm{D}_{* }^{\alpha}u\in L^{2}(I\times\Omega^{+})\text{ for }\langle\alpha\rangle\leq m\},\]
and equipped with the norm \(\|\cdot\|_{H^{m}_{*}(I\times\Omega^{+})}\), where
\[\langle\alpha\rangle:=\sum_{i=0}^{3}\alpha_{i}+2\alpha_{4},\qquad\|u\|_{H^{m} _{*}(I\times\Omega^{+})}^{2}:=\sum_{\langle\alpha\rangle\leq m}\|\mathrm{D}_{ *}^{\alpha}u\|_{L^{2}(I\times\Omega^{+})}^{2}.\]
By definition, \(H^{m}(I\times\Omega^{+})\hookrightarrow H^{m}_{*}(I\times\Omega^{+}) \hookrightarrow H^{\lfloor m/2\rfloor}(I\times\Omega^{+})\) for all \(m\in\mathbb{N}\) and \(I\subset\mathbb{R}\), where \(\lfloor s\rfloor\) denotes the floor function of \(s\in\mathbb{R}\) that maps \(s\) to the greatest integer less than or equal to \(s\). We refer to [5, 18, 20] and references therein for an extensive study of anisotropic Sobolev spaces.
To present our main theorem, we also need to introduce the compatibility conditions on the initial data. However, the process of introduction of the compatibility conditions for problem (2.2) totally coincides with that described in [23, 29] for the counterpart of (2.2) in isotropic MHD. This is why we can just refer to [23, 29]. For the reader's convenience, we only note here that the initial vacuum magnetic field \(h_{0}\) is not given independently from the other initial data because it is uniquely determined by the div-curl system
\[L_{-}(\Phi_{0})h_{0}=0\ \text{ in }\Omega^{-},\ \ \ \ \ h_{0}\cdot N_{0}=0\ \text{ on }\Sigma,\ \ \ \ \ h_{0}\times\mathbf{e}_{1}=\mathbf{j}_{\mathrm{c}}(0)\ \text{ on }\Sigma^{-},\]
where \(L_{-}\) is the operator given by (2.4) and \(N_{0}:=(1,-\partial_{2}\varphi_{0},-\partial_{3}\varphi_{0})^{\top}\).
We are now in a position to state the main result of this paper.
**Theorem 2.1**.: _Assume that \(\mathbf{j}_{\mathrm{c}}\in H^{m+3/2}([0,T_{0}]\times\Sigma^{-})\) for some \(T_{0}>0\) and \(m\in\mathbb{N}\) with \(m\geq 20\). Assume further that the initial data \((U_{0},\varphi_{0})\in H^{m+3/2}(\Omega^{+})\times H^{m+2}(\mathbb{T}^{2})\) satisfy \(\|\varphi_{0}\|_{L^{m}(\mathbb{T}^{2})}<1\), the constraints (2.7), the hyperbolicity conditions (1.5),_
\[\rho_{0}\geq\delta_{1}>0,\quad\frac{p_{\perp 0}-p_{\parallel 0}}{|H_{0}|^{2}}+1 \geq\delta_{2}>0,\quad p_{\parallel 0}+\frac{p_{\perp 0}(p_{\parallel 0}-p_{ \perp 0})}{|H_{0}|^{2}}\geq\delta_{3}>0, \tag{2.8}\]
_the condition_
\[6p_{\parallel 0}-p_{\perp 0}\geq\delta_{4}>0, \tag{2.9}\]
_the default requirements \(p_{\parallel 0}\geq\delta_{5}>0\), \(p_{\perp 0}\geq\delta_{6}>0\), \(|H_{0}|\geq\delta_{7}>0\), the compatibility conditions up to order \(m\), and the non-collinearity condition_
\[|H_{0}\times h_{0}|\big{|}_{\Sigma}\geq\delta_{0}>0 \tag{2.10}\]
_for some fixed constants \(\delta_{k}\) (\(k=\overline{0,4}\)). Then problem (2.2) admits a unique solution \((U,h,\varphi)\) in \(H^{m-9}_{*}([0,T]\times\Omega^{+})\times H^{m-9}([0,T]\times\Omega^{-})\times H ^{m-9}([0,T]\times\mathbb{T}^{2})\) for some \(0<T\leq T_{0}\)._
**Remark 2.1**.: _According to the local existence results in [17, 30] for general symmetric hyperbolic systems, the Cauchy problem for the CGL equations (1.1) allows smooth solutions within a short time if the initial data satisfy the hyperbolicity conditions (1.5)/(2.8). Clearly, conditions (1.5) satisfied for a background constant solution prevent the ill-posedness of the Cauchy problem for the corresponding constant coefficient linearized CGL equations, in particular, the so-called freehes and mirror instabilities [15] taking place if \(\tau>1\) and \(a_{p}>6(1+1/\beta_{\perp})\) respectively, where \(\beta_{\perp}=2p_{\perp}/|H|^{2}\) is the perpendicular plasma beta (one can easily check that the inequality \(\tau>-1/a_{p}\) in (1.5) prevents mirror instability)._
**Remark 2.2**.: _In view of the inequality \(\tau>-1/a_{p}\) in (1.5), our restriction \(a_{p}<6\) on the initial data in (2.9) is automatically satisfied for \(\beta_{\perp}>2/5\). Otherwise, it is indeed an additional requirement on the initial data. We believe that it could be neglected if we were able to get a priori estimates for the linearization of problem (2.2) without using the symmetrization of the linearized CGL equations in Section 4 alternative to the linearization of the symmetric system (1.4) with really cumbersome matrices. On the other hand, even if condition (2.9) can be dropped, we hope that the symmetrization proposed in Section 4 could be useful for applying the energy method to another boundary value problems for the CGL system._
**Remark 2.3**.: _Since in this paper we use for technical simplicity the change of variables (2.1), which does not gain one half derivative for \(\varphi\) compared to the change in [23], Theorem 2.1 is formulated in the form similar to the theorem proved in [29] for the plasma-vacuum interface problem in isotropic MHD with non-zero surface tension._
## 3 Linearized problem
The proof of the existence and uniqueness of solutions to a nonlinear problem is often relied on the analysis of the linearized problem. Moreover, if for the linearized problem we can only deduce a priori estimates with a loss of derivatives from the source terms and coefficients, then the classical fixed-point argument cannot be applied. This difficulty can be sometimes overcome by using the Nash-Moser method (see, e.g., [21] and references therein). To this end, one needs to perform a "genuine" linearization when we keep all the lower-order terms which are usually dropped while applying the fixed-point argument. Following arguments in [22, 23, 29] for the plasma-vacuum interface problem in classical MHD, we below describe such a linearization of problem (2.2).
Consider the basic state \((\hat{U}(t,x),\hat{h}(t,x),\hat{\varphi}(t,x^{\prime}))\) which is is a sufficiently smooth vector-function defined on \(\Omega_{T}^{+}\times\Omega_{T}^{-}\times\Sigma_{T}\), where \(\Omega_{T}^{\pm}:=(-\infty,T)\times\Omega^{\pm}\), and \(\Sigma_{T}:=(-\infty,T)\times\Sigma\). The assumptions on the basic state are totally analogous to those in [23, 29]. In particular, we assume that it satisfies the hyperbolicity conditions (_cf._ (2.8))
\[\hat{\rho}\geq\frac{\delta_{1}}{2}>0,\quad\frac{\hat{p}_{\perp}-\hat{p}_{ \parallel}}{|\hat{H}|^{2}}+1\geq\frac{\delta_{2}}{2}>0,\quad p_{\parallel}+ \frac{\hat{p}_{\perp}(\hat{p}_{\parallel}-\hat{p}_{\perp})}{|\hat{H}|^{2}} \geq\frac{\delta_{3}}{2}>0 \tag{3.1}\]
(together with \(\hat{p}_{\parallel}\geq\delta_{5}/2>0\), \(\hat{p}_{\perp}\geq\delta_{6}/2>0\), \(|\hat{H}|\geq\delta_{7}/2>0\)), the condition (_cf._ (2.9))
\[6-\frac{\hat{p}_{\perp}}{\hat{p}_{\parallel}}\geq\frac{\delta_{4}}{2}>0, \tag{3.2}\]
some regularity assumptions as in [23, 29] and
\[\mathbb{H}(\hat{H},\hat{v},\hat{\Phi})=0 \text{in }\Omega_{T}^{+}, \tag{3.3}\] \[\partial_{t}\hat{\varphi}=\hat{v}\cdot\hat{N},\quad\hat{h}\cdot \hat{N}=0 \text{on }\Sigma_{T},\] (3.4) \[\partial_{1}\hat{h}\cdot\hat{N}+\partial_{2}\hat{h}_{2}+\partial _{3}\hat{h}_{3}=0 \text{on }\Sigma_{T},\] (3.5) \[\hat{v}_{1}=0 \text{on }\Sigma_{T}^{+},\qquad\hat{h}\times\mathbf{e}_{1}=\mathbf{j}_{ \mathbf{c}}\quad\text{on }\Sigma_{T}^{-}, \tag{3.6}\]
where \(\mathbb{H}\) is the operator defined in (2.6), \(\hat{U}=(\hat{p},\hat{v},\hat{H},\hat{p}_{\parallel},\hat{p}_{\perp})^{\intercal }\in\mathbb{R}^{9}\), \(\hat{h}=(\hat{h}_{1},\hat{h}_{2},\hat{h}_{3})^{\intercal}\in\mathbb{R}^{3}\), \(\hat{v}=(\hat{v}_{1},\hat{v}_{2},\hat{v}_{3})^{\intercal}\in\mathbb{R}^{3}\), \(\hat{\Phi}(t,x):=x_{1}+\hat{\Psi}(t,x)\) with \(\hat{\Psi}(t,x):=\chi(x_{1})\hat{\varphi}(t,x^{\prime})\), \(\hat{N}:=(1,-\partial_{2}\hat{\Phi},-\partial_{3}\hat{\Phi})^{\intercal}\), and \(\Sigma_{T}^{\pm}:=(-\infty,T)\times\Sigma^{\pm}\). It follows from (3.3) that the identities
\[\nabla^{\hat{\Phi}}\cdot\hat{H}\big{|}_{\Omega_{T}^{\pm}}=0\qquad\hat{H}\cdot \hat{N}\big{|}_{\Sigma_{T}}=0,\qquad\hat{H}_{1}\big{|}_{\Sigma_{T}^{\pm}}=0 \tag{3.7}\]
are satisfied if they hold at the initial time (see [26] for the proof). As such, we require that the conditions (3.7) are satisfied at \(t=0\). Moreover, we assume that the non-collinearity condition holds for the basic state (_cf._ (2.10)):
\[\big{|}\hat{H}\times\hat{h}\big{|}\geq\frac{\delta_{0}}{2}>0\qquad\text{on }\Sigma_{T}. \tag{3.8}\]
Introduce the good unknowns of Alinhac [1]:
\[\hat{U}:=U-\frac{\Psi}{\partial_{1}\hat{\Phi}}\partial_{1}\hat{U},\quad\hat{h}: =h-\frac{\Psi}{\partial_{1}\hat{\Phi}}\partial_{1}\hat{h}, \tag{3.9}\]
where \(\Psi(t,x):=\chi(x_{1})\psi(t,x^{\prime})\). Then the linearized operators for equations (2.2a)-(2.2b) around the basic state \((\hat{U},\hat{h},\hat{\varphi})\) read
\[\mathbb{L}^{\prime}_{+}(\hat{U},\hat{\Phi})(U,\Psi) :=\left.\frac{\mathrm{d}}{\mathrm{d}\theta}\mathbb{L}_{+}(\hat{U} +\theta U,\hat{\Phi}+\theta\Psi)\right|_{\theta=0}\] \[=L_{+}(\hat{U},\hat{\Phi})\hat{U}+\mathcal{C}_{+}(\hat{U},\hat{ \Phi})\hat{U}+\frac{\Psi}{\partial_{1}\hat{\Phi}}\partial_{1}\mathbb{L}_{+}( \hat{U},\hat{\Phi}), \tag{3.10}\] \[\mathbb{L}^{\prime}_{-}(\hat{h},\hat{\Phi})(h,\Psi) :=\left.\frac{\mathrm{d}}{\mathrm{d}\theta}\mathbb{L}_{-}(\hat{h }+\theta h,\hat{\Phi}+\theta\Psi)\right|_{\theta=0}=L_{-}\big{(}\hat{\Phi} \big{)}\hat{h}+\frac{\Psi}{\partial_{1}\hat{\Phi}}\partial_{1}\mathbb{L}_{-}( \hat{h},\hat{\Phi}), \tag{3.11}\]
where \(L_{\pm}\) are the operators defined in (2.3)-(2.4) and
\[\mathcal{C}_{+}(U,\Phi)U:=\sum_{k=1}^{8}V_{k}\Bigg{(}\frac{ \partial\widetilde{A}_{1}^{+}}{\partial U_{k}}(U,\Phi)\partial_{1}U+\sum_{i=0,2,3}\frac{\partial A_{i}^{+}}{\partial U_{k}}(U)\partial_{i}U\Bigg{)}.\]
For the boundary operator \(\mathbb{B}\) defined by (2.5), we have
\[\mathbb{B}^{\prime}(\hat{U},\hat{h},\hat{\varphi})(U,h,\psi):= \left.\frac{\mathrm{d}}{\mathrm{d}\theta}\mathbb{B}\big{(}\hat{U}+\theta U,\, \hat{h}+\theta h,\,\hat{\varphi}+\theta\psi\big{)}\right|_{\theta=0}\] \[=\Big{(}(\partial_{t}+\hat{v}^{\prime}\cdot\mathrm{D}_{x^{\prime} })\psi-v\cdot\hat{N},\;p_{\perp}+\hat{H}\cdot H-\hat{h}\cdot h,\;h\cdot\hat{N} -\dot{h}^{\prime}\cdot\mathrm{D}_{x^{\prime}}\psi,\;v_{1},\;h\times\mathbf{e} _{1}\Big{)}^{\mathsf{T}},\]
where we denote \(z^{\prime}:=(z_{2},z_{3})^{\mathsf{T}}\) for any vector \(z:=(z_{1},z_{2},z_{3})^{\mathsf{T}}\).
Dropping the last terms in (3.10)-(3.11), we get the following effective linear problem for the good unknowns (3.9):
\[\mathbb{L}^{\prime}_{e+}(\hat{U},\hat{\Phi})\hat{U}:=L_{+}(\hat{ U},\hat{\Phi})\hat{U}+\mathcal{C}_{+}(\hat{U},\hat{\Phi})\hat{U}=f^{+} \quad\text{in }\Omega_{T}^{+}, \tag{3.12a}\] \[L_{-}(\hat{\Phi})\hat{h}=f^{-} \quad\text{in }\Omega_{T}^{-},\] (3.12b) \[\mathbb{B}^{\prime}_{e}(\hat{U},\hat{h},\hat{\varphi})(\hat{U}, \hat{h},\psi)=g \quad\text{on }\Sigma_{T}^{3}\times\Sigma_{T}^{+}\times\Sigma_{T}^{-},\] (3.12c) \[(\hat{U},\psi)\big{|}_{t<0}=0,\qquad\hat{h}\big{|}_{t<0}=0, \tag{3.12d}\]
where
\[\mathbb{B}^{\prime}_{e}(\hat{U},\hat{h},\hat{\varphi})(\hat{U},\hat{h},\psi): =\begin{pmatrix}(\partial_{t}+\hat{v}^{\prime}\cdot\mathrm{D}_{x^{\prime}}+ \hat{b}_{1})\psi-\hat{v}\cdot\hat{N}\\ \dot{p}_{\perp}+\hat{H}\cdot\hat{H}-\hat{h}\cdot\hat{h}+\hat{b}_{2}\psi\\ \dot{h}\cdot\hat{N}-\mathrm{D}_{x^{\prime}}\cdot(\dot{h}^{\prime}\psi)\\ \dot{v}_{1}\\ \dot{h}\times\mathbf{e}_{1}\end{pmatrix} \tag{3.13}\]
with \(\hat{b}_{1}:=-\partial_{1}\hat{v}\cdot\hat{N}\) and \(\hat{b}_{2}:=\partial_{1}\hat{p}_{\perp}+\hat{H}\cdot\partial_{1}\hat{H}-\hat{ h}\cdot\partial_{1}\hat{h}\), definitions (3.9), and constraint (3.5). The last terms in (3.10)-(3.11) should be considered as error terms at each Nash-Moser iteration step (see [23, 29] for more details). The source terms \(f^{\pm}\) and \(g\) are supposed to vanish in the past. We consider the case of zero initial data because the nonlinear problem can be reduced to it by construction of a so-called approximate solution (see, e.g., [23, 29]).
## 4 Symmetrization of the linearized CGL equations
To apply the energy method to the linear problem (3.12), we need to have symmetric matrices in the operator \(L_{+}\) described in (2.3). At the same time, the symmetric matrices in system (1.4) found in [2] are so cumbersome (see also [3]) that it makes them difficult to be used for deriving a priori estimates for problem (3.12). To avoid this difficulty we propose an elementary (algebraic) symmetrization of the linearized CGL equations, which does not rely on the Godunov's symmetrization procedure.
We forget for a moment about our initial-boundary value problem (3.12) and just consider the linearization of the CGL equation in the whole space \(\mathbb{R}^{3}\) about a basic state \(\hat{U}=(\hat{\rho},\hat{v},\hat{H},\hat{p}_{\hat{1}},\hat{p}_{\perp})^{\mathsf{ T}}\). We first rewrite system (1.1) in the following quasilinear form:
\[\partial_{t}U+\sum_{j=1}^{3}B_{j}(U)\partial_{j}U=0, \tag{4.1}\]
where the matrices \(B_{j}(U)\) are not symmetric and can be written down if necessary. Note that the divergence constraint (1.2) is used while writing down system (4.1). Then the linearization of (4.1)
about \(\hat{U}\) reads:
\[\begin{cases}D(\hat{v})\rho+\hat{\rho}\,\nabla\cdot v+\text{z.o.t.}=0,\\ \hat{\rho}\,D(\hat{v})v+\hat{b}\big{(}\hat{b}\cdot\big{\{}\nabla(p_{\parallel}-p_ {\perp})-2\hat{\tau}(\hat{H}\cdot\nabla)H\big{\}}\big{)}+(\hat{\tau}-1)(\hat{H} \cdot\nabla)H+\nabla q+\text{z.o.t.}=0,\\ D(\hat{v})H-(\hat{H}\cdot\nabla)v+\hat{H}\,\nabla\cdot v+\text{z.o.t.}=0,\\ D(\hat{v})p_{\parallel}+\hat{p}_{\parallel}\,\nabla\cdot v+2\hat{p}_{ \parallel}\big{(}\hat{b}\cdot(\hat{b}\cdot\nabla)v\big{)}+\text{z.o.t.}=0,\\ D(\hat{v})p_{\perp}+2\hat{p}_{\perp}\,\nabla\cdot v-\hat{p}_{\perp}\big{(} \hat{b}\cdot(\hat{b}\cdot\nabla)v\big{)}+\text{z.o.t.}=0,\end{cases} \tag{4.2}\]
where \(U=(\rho,v,H,p_{\parallel},p_{\perp})^{\mathsf{T}}\) is now the vector of perturbations,
\[D(\hat{v})=\partial_{t}+(\hat{v}\cdot\nabla),\quad\hat{b}=\frac{\hat{H}}{| \hat{H}|},\quad\hat{\tau}=\frac{\hat{p}_{\parallel}-\hat{p}_{\perp}}{|\hat{H }|^{2}},\quad q=p_{\perp}+\hat{H}\cdot H,\]
and z.o.t. are zero-order (nondifferential) terms which are of no interest for our current goals.
Let us introduce the new unknown
\[P:=\tfrac{1}{2}p_{\perp}-p_{\parallel}+\hat{\tau}(\hat{H}\cdot H).\]
The introduction of this unknown is prompted by some functions used in [4] for studying by the energy method the 2D stability of rectilinear shock waves in the CGL model for the special cases when the background constant magnetic field is parallel or perpendicular to the shock front. Let the new vector-valued unknown function be
\[V=(p_{\perp},v,H,P,s_{\parallel})^{\mathsf{T}}=J(\hat{U})U, \tag{4.3}\]
where \(s_{\parallel}\) is now the perturbation of the parallel entropy:
\[s_{\parallel}:=\frac{p_{\parallel}}{3\hat{p}_{\parallel}}-\frac{\rho}{\hat{ \rho}}+\frac{2(\hat{H}\cdot H)}{3|\hat{H}|^{2}}\]
(instead of the \(s_{\parallel}\) we could also take the perturbation of the perpendicular entropy), and the nonsingular matrix \(J(\hat{U})\) can be easily written down. We then rewrite (4.2) in terms of the new unknown \(V\) as follows:
\[\begin{cases}\frac{1}{2\hat{p}_{\perp}}D(\hat{v})p_{\perp}+\nabla\cdot v- \frac{1}{2}\big{(}\hat{b}\cdot(\hat{b}\cdot\nabla)v\big{)}+\text{z.o.t.}=0,\\ \hat{\rho}\,D(\hat{v})v-\hat{b}\big{(}\hat{b}\cdot\big{\{}\nabla\big{(}\tfrac {1}{2}p_{\perp}+P\big{)}+\hat{\tau}(\hat{H}\cdot\nabla)H\big{\}}\big{)}+(\hat{ \tau}-1)(\hat{H}\cdot\nabla)H+\nabla q+\text{z.o.t.}=0,\\ (1-\hat{\tau})D(\hat{v})H+\hat{\tau}\hat{b}\big{(}\hat{b}\cdot D(\hat{v})H \big{)}+(\hat{\tau}-1)(\hat{H}\cdot\nabla)v\\ \hskip 28.452756pt-\hat{\tau}\hat{b}\big{(}\hat{b}\cdot(\hat{H}\cdot\nabla)v \big{)}+\hat{H}\,\nabla\cdot v+\text{z.o.t.}=0,\\ \frac{2}{6\hat{p}_{\parallel}-\hat{p}_{\perp}}\,D(\hat{v})P-\hat{b}\cdot(\hat{ b}\cdot\nabla)v+\text{z.o.t.}=0,\\ D(\hat{v})s_{\parallel}+\text{z.o.t.}=0.\end{cases} \tag{4.4}\]
For obtaining equations (4.4) we have, in particular, left multiplied the equation for \(H\) in (4.2) by the symmetric matrix
\[\hat{\mathcal{B}}=(1-\hat{\tau})I+\hat{\tau}\hat{b}\otimes\hat{b}.\]
Equations (4.4) form the symmetric hyperbolic system
\[\mathcal{A}_{0}(\hat{U})\partial_{t}V+\sum_{j=1}^{3}\mathcal{A}_{j}(\hat{U}) \partial_{j}V+\mathcal{A}_{4}(\hat{U})V=0, \tag{4.5}\]
where the block diagonal matrix \(\mathcal{A}_{0}(\hat{U})=\text{diag}\,(1/(2\hat{p}_{\perp}),\hat{\rho}I,\hat{ \mathcal{B}},2/(6\hat{p}_{\parallel}-\hat{p}_{\perp}),1)\) is positive definite thanks to the assumptions \(\hat{\tau}<1\) (cf. (3.1)) and (3.2),
\[\mathcal{A}_{j}(\hat{U})=\begin{pmatrix}\frac{\hat{v}_{j}}{2\hat{p}_{\perp}}& \mathbf{e}_{j}^{\mathsf{T}}-\frac{1}{2}\hat{b}_{j}\hat{b}^{\mathsf{T}}&0&0&0\\ \mathbf{e}_{j}-\frac{1}{2}\hat{b}_{j}\hat{b}&\hat{\rho}\hat{v}_{j}I&\mathbf{e}_{j }\otimes\hat{H}-\hat{H}_{j}\hat{\mathcal{B}}&-\hat{b}_{j}\hat{b}&0\\ 0&\hat{H}\otimes\mathbf{e}_{j}-\hat{H}_{j}\hat{\mathcal{B}}&\hat{v}_{j}\hat{ \mathcal{B}}&0&0\\ 0&-\hat{b}_{j}\hat{b}^{\mathsf{T}}&0&\frac{2\hat{v}_{j}}{6\hat{p}_{\parallel}- \hat{p}_{\perp}}&0\\ 0&0&0&0&\hat{v}_{j}\end{pmatrix},\]
and the concrete form of the matrix \(\mathcal{A}_{4}(\hat{U})\) is of no interest.
## 5 Proof of Theorem 2.1
In view of (4.3), (4.5), for deriving a priori estimates for the linear problem (3.12) we can now use the system
\[\mathcal{A}_{0}(\hat{U})\partial_{\hat{U}}\hat{V}+\widetilde{\mathcal{A}}_{1}( \hat{U},\hat{\Phi})\partial_{\hat{U}}\hat{V}+\mathcal{A}_{2}(\hat{U})\partial_{ 2}\hat{V}+\mathcal{A}_{3}(\hat{U})\partial_{3}\hat{V}+\mathcal{A}_{4}(\hat{U}) \hat{V}=F^{+}\quad\text{in }\Omega_{T}^{+} \tag{5.1}\]
following from (3.12a), where
\[\hat{V}:=J(\hat{U})\hat{U},\qquad\widetilde{\mathcal{A}}_{1}^{+}(\hat{U},\hat {\Phi}):=\frac{1}{\partial_{1}\hat{\Phi}}\big{(}\mathcal{A}_{1}^{+}(\hat{U})- \partial_{\hat{\Phi}}\hat{\Phi}\mathcal{A}_{0}^{+}(\hat{U})-\partial_{2}\hat {\Phi}\mathcal{A}_{2}^{+}(\hat{U})-\partial_{3}\hat{\Phi}\mathcal{A}_{3}^{+}( \hat{U})\big{)},\]
and the new source term \(F^{+}=\mathcal{A}_{0}(\hat{U})\big{(}J(\hat{U})\big{)}^{-1}\big{(}A_{0}(\hat{U })\big{)}^{-1}f^{+}\). The crucial role is then played by the boundary matrix \(\widetilde{\mathcal{A}}_{1}(\hat{U},\hat{\Phi})\) calculated on the boundary \(\Sigma_{T}\):
\[\widetilde{\mathcal{A}}_{1}(\hat{U},\hat{\Phi})=\begin{pmatrix}\frac{\hat{m}} {2\hat{p}_{\perp}}&\hat{N}^{\mathsf{T}}-\frac{1}{2}(\hat{b}\cdot\hat{N})\hat{ \Phi}^{\mathsf{T}}&0&0&0\\ \hat{N}-\frac{1}{2}(\hat{b}\cdot\hat{N})\hat{b}&\hat{\rho}\hat{m}I&\hat{N} \otimes\hat{H}-(\hat{H}\cdot\hat{N})\hat{\mathcal{B}}&-(\hat{b}\cdot\hat{N}) \hat{b}&0\\ 0&\hat{H}\otimes\hat{N}-(\hat{H}\cdot\hat{N})\hat{\mathcal{B}}&\hat{m}\hat{ \mathcal{B}}&0&0\\ 0&-(\hat{b}\cdot\hat{N})\hat{b}^{\mathsf{T}}&0&\frac{2\hat{m}}{6\hat{p}_{ \parallel}-\hat{p}_{\perp}}&0\\ 0&0&0&0&\hat{m}\end{pmatrix}\quad\text{on }\Sigma_{T},\]
where \(\hat{m}:=\hat{v}\cdot\hat{N}-\partial_{t}\hat{\varphi}\). It follows from (3.4) that
\[\widetilde{\mathcal{A}}_{1}(\hat{U},\hat{\Phi})=\begin{pmatrix}0&\hat{N}^{ \mathsf{T}}&0&0&0\\ \hat{N}&0&\hat{N}\otimes\hat{H}&0&0\\ 0&\hat{H}\otimes\hat{N}&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{pmatrix}\quad\text{on }\Sigma_{T}.\]
Up to the additional bottom zero row and right zero column this matrix coincides with the corresponding boundary matrix on \(\Sigma_{T}\) in isotropic MHD, cf. [25].
Then, we easily calculate
\[\big{(}(\widetilde{\mathcal{A}}_{1}(\hat{U},\hat{\Phi})\hat{V}\cdot\hat{V}) \big{|}_{\Sigma_{T}}=2\hat{q}(\hat{v}\cdot\hat{N})|_{\Sigma_{T}},\]
where \(\hat{q}=\hat{p}_{\perp}+\hat{H}\cdot\hat{H}\). Similarly to [22, 23, 27], from the unknowns \(\hat{p}_{\perp}\), \(\hat{v}\) and \(\hat{H}\) we could pass in (5.1) to the unknowns \(\hat{q}\), \((\hat{v}\cdot\hat{N},\hat{v}_{2},\hat{v}_{3})\) and \((\hat{H}\cdot\hat{N},\hat{H}_{2},\hat{H}_{3})\) keeping the symmetry of the matrices. But, this is not really necessary. It is only important that the structure of our boundary matrix on the boundary is the same as in isotropic MHD. Since up to the notation \(\hat{p}:=\hat{p}_{\perp}\) the boundary conditions (3.12c), (3.13) coincide with those in isotropic MHD, the rest arguments are entirely the same as in [22, 23]. This completes the proof of Theorem 2.1.
|
2305.01480 | Exploring the synergistic potential of quantum annealing and gate model
computing for portfolio optimization | Portfolio optimization is one of the most studied problems for demonstrating
the near-term applications of quantum computing. However, large-scale problems
cannot be solved on today's quantum hardware. In this work, we extend upon a
study to use the best of both quantum annealing and gate-based quantum
computing systems to enable solving large-scale optimization problems
efficiently on the available hardware. The existing work uses a method called
Large System Sampling Approximation (LSSA) that involves dividing the large
problem into several smaller problems and then combining the multiple solutions
to approximate the solution to the original problem. This paper introduces a
novel technique to modify the sampling step of LSSA. We divide the portfolio
optimization problem into sub-systems of smaller sizes by selecting a diverse
set of assets that act as representatives of the entire market and capture the
highest correlations among assets. We conduct tests on real-world stock data
from the Indian stock market on up to 64 assets. Our experimentation shows that
the hybrid approach performs at par with the traditional classical optimization
methods with a good approximation ratio. We also demonstrate the effectiveness
of our approach on a range of portfolio optimization problems of different
sizes. We present the effects of different parameters on the proposed method
and compare its performance with the earlier work. Our findings suggest that
hybrid annealer-gate quantum computing can be a valuable tool for portfolio
managers seeking to optimize their investment portfolios in the near future. | Naman Jain, M Girish Chandra | 2023-05-02T15:02:13Z | http://arxiv.org/abs/2305.01480v1 | Exploring the synergistic potential of quantum annealing and gate model computing for portfolio optimization
###### Abstract
Portfolio optimization is one of the most studied problems for demonstrating the near-term applications of quantum computing. However, large-scale problems cannot be solved on today's quantum hardware. In this work, we extend upon a study to use the best of both quantum annealing and gate-based quantum computing systems to enable solving large-scale optimization problems efficiently on the available hardware. The existing work uses a method called Large System Sampling Approximation (LSSA) that involves dividing the large problem into several smaller problems and then combining the multiple solutions to approximate the solution to the original problem. This paper introduces a novel technique to modify the sampling step of LSSA. We divide the portfolio optimization problem into sub-systems of smaller sizes by selecting a diverse set of assets that act as representatives of the entire market and capture the highest correlations among assets. We conduct tests on real-world stock data from the Indian stock market on up to 64 assets. Our experimentation shows that the hybrid approach performs at par with the traditional classical optimization methods with a good approximation ratio. We also demonstrate the effectiveness of our approach on a range of portfolio optimization problems of different sizes. We present the effects of different parameters on the proposed method and compare its performance with the earlier work. Our findings suggest that hybrid annealer-gate quantum computing can be a valuable tool for portfolio managers seeking to optimize their investment portfolios in the near future.
Quantum Annealing Portfolio Optimization Quantum Computing
## 1 Introduction
Portfolio optimization is the problem of selecting the best distribution of assets that optimises a particular objective function [1]. Usually, this objective function attempts to minimize the risk and maximize the expected returns. It is a complicated problem that has been the subject of extensive research in finance, computer science and mathematics. The complexity of this problem arises from several factors such as the large number of assets available for investment, the dynamic nature of market conditions and various constraints that must be considered. Optimizing a portfolio involves balancing the trade-off between the expected returns and the risk by selecting the optimal combination of assets. This is a challenging task because the risks and returns associated with different assets are often interdependent and continuously changing. A well-optimized portfolio can provide significant benefits, including improved returns,
reduced risks and increased diversification to achieve an efficient portfolio that is tailored to the investment goals and the risk tolerance of an investor. In practice, it is well-known that the portfolio optimization problem is generally computationally intractable.
Classical optimization methods such as mean-variance optimization, and Monte Carlo simulations, have been widely used in the finance industry. However, these methods have limitations when dealing with large-scale problems, nonlinear constraints and non-convex objective functions. Quantum computing methods, viz. quantum annealing [2; 3] and gate-based quantum computing can potentially solve complex optimization problems more efficiently than classical methods and may provide better solutions for practical problems with many variables and constraints. It has been understood recently that quantum and quantum-inspired computing can help in tasks such as Monte-Carlo simulations [4; 5] and combinatorial optimization problems [6; 2; 7] in many industries. Applications of quantum optimization to real-world problems have been demonstrated for portfolio optimizations [8; 9], detection of arbitrage cycles [10], Travelling salesman problems [11] and many more. To solve problems on quantum computers, an appropriate mapping of the original problem into a quantum-solvable one is required. A commonly studied class of combinatorial optimization problems are Quadratic Unconstrained Binary Optimization (QUBO) problems. There are multiple approaches to solving QUBO problems on a quantum device.
Quantum annealing is a meta-heuristic utilized by adiabatic quantum computers. The QUBO problem is mapped to an Ising Hamiltonian whose ground state solution is related to the solution of the original problem [12]. It resembles simulated annealing and is applied to determine near-optimal solutions to QUBO problems. There are also variational quantum optimization heuristics such as the Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization Algorithm (QAOA) [6] to solve QUBO problems. VQE operates by using a set of parameterised gates to construct an ansatz (trial) state and uses a classical optimizer to optimise the parameters that best approximate the ground state of the problem Hamiltonian. QAOA on the other hand alternatively applies a series of operators which in infinite depth limit, would recover the adiabatic evolution and converge to the optimal solution. These heuristics are designed for near-term, noisy quantum machines without performance guarantees.
Although several studies show remarkable results in portfolio optimization using the above-described common methods, these approaches require an \(N\)-qubit quantum computer to solve the problem with \(N\) assets. This limits their usage to small problems as the largest available gate-based quantum computer to date is the Osprey processor from IBM with 433 physical qubits [13] and the largest annealer is the DWave 5000-qubit system on the Pegasus chip with 15 connections per qubit [14]. There have been proposals to mitigate this issue by dividing the original problem into smaller sub-problems and then re-combining the solutions to approximate the full problem solution.
In this paper, we extend upon a study by Liu et al. [15], to utilize both quantum annealing and gate-based quantum computing for the case of portfolio optimization. Their proposition is to divide a large Ising problem into smaller sub-systems, solve the sub-systems on available quantum hardware and then recombine the solutions. They propose a hybrid structure that solves the sub-system problem either by annealer or gate-based chips and then combines the solutions with amplitudes optimized using VQE on a gate-based quantum computer. This technique allows to solve much larger problems efficiently on the available hardware. Here we introduce a novel technique to sample sub-systems by selecting assets such that they form groups that capture the maximum dependencies between variables. To do this, we build a market graph of the assets and select a set of diverse assets among them using the Maximum Independent Set (MIS) of the graph. The assets in the MIS represent the entire market intuitively and so are used to build the sub-systems.
We investigate a QUBO formulation of a simplified version of the portfolio optimization problem and test the proposed method on real-world stock datasets. In the process, we also compare the performance of the method against previous techniques for varying numbers of assets and changing different parameters. Overall, our study contributes to the growing body of research on quantum computing in finance and is a step towards practical implications for investors and asset managers.
The rest of the paper is organized as follows. In Section 2, we introduce the portfolio optimization problem and its QUBO formulation. We also discuss traditional classical algorithms and their limitations. In Section 3, we briefly discuss the previous research - LSSA [15], followed by the proposal of our method. In Section 4, we document the experimental parameters and implementation details. Further, in Section 5, we present our findings on actual stock data and also provide a comparison of the performance of different methods. Finally, in Section 6, we conclude our study and provide directions for future research.
## 2 Portfolio Optimization
Portfolio optimization is aimed at creating an investment portfolio that maximizes returns while minimizing risk. Here, we consider the portfolio optimization problem expressed in a QUBO formulation [16; 17]
\[H=-\mu^{T}\omega+\gamma\omega^{T}\Sigma\omega \tag{1}\]
where \(\omega\) is an _N_-dimensional vector of binary decision variables, \(\mu\) is the vector of expected returns and \(\Sigma\) is the covariance matrix of the returns on the assets in the portfolio. The term \(\mu^{T}\omega\) represents the expected return on the portfolio and the term \(\gamma\omega^{T}\Sigma\omega\) denotes the variance of portfolio return. \(\gamma\geq 0\) is the risk-aversion factor and indicates the risk appetite of the investor. In this relaxed formulation, we assume that only long positions are possible. We suppose that the total budget is equally distributed among the selected assets and that the risk is estimated as the volatility, which is the square root of the portfolio variance. We also assume a static nature and do not consider the changing market conditions or investor preferences.
Classical algorithms for solving portfolio optimization problems, include Markowitz mean-variance optimization, which aims to maximize expected returns while minimizing variance, and the Capital Asset Pricing Model (CAPM), which focuses on estimating the expected returns of assets based on their systematic risks. Other popular approaches include the Sharpe ratio and the Black-Litterman model, which incorporate additional factors such as transaction costs and investor preferences. While these methods have been widely used, they have certain limitations which can make them infeasible for large portfolios. There is an exponential increase in the number of computations required as the number of assets in a portfolio increases. These methods can also get stuck in local optima, leading to sub-optimal solutions. Quantum optimization methods can likely overcome these limitations to deliver better portfolio allocations and potentially higher returns.
## 3 Methodology
### Large System Sampling Approximation
LSSA [15] divides a full Ising problem of \(N\) variables into smaller \(N_{s}\) sub-system problems each of size \(N_{g}\) variables (\(N_{g}\leq N\)). The sub-systems are solved independently considering the original problem Hamiltonian either on annealing or gate-based quantum chip. The solutions of these sub-systems are then recombined by optimizing the amplitude contributions of each of them by using a VQE on a gate-based quantum computer. The full problem solution is a statistical mixture of sub-problem solutions. The complete mathematical description of this procedure is described in the following paragraphs.
An Ising problem of the below form is considered
\[H=\sum_{i,j=1}^{N}J_{i,j}z_{i}z_{j}+\sum_{i=1}^{N}h_{i}z_{i} \tag{2}\]
where \(z_{i}\) are the spin variables, \(z_{i}\in\{-1,+1\}\), \(N\) is the total number of variables, and \(h_{i}\) and \(J_{i,j}\) correspond to the bias and coupling strength of the spin system respectively. Ising and QUBO formulations are interchangeable by means of the transformation \(z_{i}=2x_{i}-1\) where \(x_{i}\in\{0,1\}\) is the binary variable and \(z_{i}\in\{-1,+1\}\) is the spin variable.
The sub-systems are created by randomly sampling \(N_{g}\leq N\) sites \(N_{s}\) times. The sampling procedure guarantees that all variables are picked at least \(\lfloor(N_{s}\times N_{g})/N\rfloor\) times. The new sub-Hamiltonians remain in the same form as (2), only containing the variables selected in the corresponding sub-systems.
The eigenvalue problem of the reduced Hamiltonian is solved and the corresponding ground state is labelled as \(|GS^{(i)}_{sub}\rangle\), where \(i\) represents the \(i^{th}\) sub-system. This gives a partial spin-configuration of the full system, that is, a vector of length \(N\) with \(N_{g}\) non-empty sites. Each element in this vector is either \(+1,-1\)_or_\(0\), corresponding to selecting, rejecting and variable not being present in that sub-system conditions. At the end of this procedure, there are \(N_{s}\) such vectors that are the ground state solutions of the \(N_{s}\) sub-systems.
The \(N_{s}\) sub-systems are then aggregated as a weighted sum of the sub-system ground states,
\[|\mathcal{S}^{wc}\rangle=\sum_{i=1}^{N_{s}}C^{(i)}|GS^{(i)}_{sub}\rangle \tag{3}\]
Further, the ground state configuration of the full system is approximated by the sign of each variable in \(|\mathcal{S}^{wc}\rangle\).
\[|sign(S^{wc}\rangle\approx|GS_{full}\rangle \tag{4}\]
It is expected that the above approximation is accurate when the sub-system size approaches the full system size, i.e., as \(N_{g}\longrightarrow N\).
To determine the coefficients \(C^{(i)}\) in (3), VQE is employed to minimize the expectation value of the full-system Hamiltonian \(H\) and determine the ground state of the full-system. The \(N_{s}\) coefficients are encoded into the amplitudes of the quantum state, using \(N_{gb}=\lceil log_{2}N_{s}\rceil\) qubits of a gate-based system, and the cost function is defined as
\[\text{Cost}(\textbf{C}(\overrightarrow{\theta}))=\langle sign(S^{wc})|H|sign (S^{wc})\rangle \tag{5}\]
where \(\overrightarrow{\theta}\) represents a set of tunable parameters of the VQE.
LSSA enables solving large problems on the available quantum hardware. The sub-systems could be solved by an annealer which usually has greater qubits than the gate-based systems. This hybrid structure uses the best of both kinds of quantum systems to solve large Ising problems efficiently. We refer the reader to [15] for a complete description and analysis of LSSA.
One of the possible limitations of LSSA is the number of samples \(N_{s}\). Ideally, to ensure that quadratic interactions between all the variables are captured, \(N_{s}=\binom{N}{N_{g}}\) samples are required. This number grows combinatorially and hence a very delicate trade-off between \(N_{g}\) and \(N_{s}\) is needed to ensure acceptable performance and the problem being accommodatable on the available quantum machines. However, randomly selecting the sub-systems would require a very large number of samples and consequently many expensive calls to the quantum chips. Moreover, choosing fewer samples might lead to sub-optimal results as the strongest couplings between the variables might be disregarded in the random process. Thus a better sampling method is needed to have quality small number of samples.
### Proposed Method
#### 3.2.1 Level 1 LSSA with MIS
The process of selecting the sub-systems in LSSA at random is not perfect. Important and significant couplings between the variables might be omitted simply due to the nature of the sampling. To better select the sub-systems, we propose a coupling-dependent sampling methodology based on the diversification of the portfolio to select assets that are representative of the entire market and capture the highest couplings between all the variables. The complete description of the proposed method is presented in the following paragraphs.
Assets in the market are usually correlated with each other. The strength of the correlation between two assets is an indication of the risk of investing in both assets. Hence, an investment in negatively correlated assets is generally rewarding, while an investment in strongly positively correlated assets is risky. Using the absolute value of correlation as a selection criterion, a graph \(G=(V,E)\) of assets is constructed with the asset symbol set as the vertices, \(V\), and the edge connection, \(E\), determined by the selection criteria. Therefore, an edge is drawn between two vertices if the corresponding pair of assets have an absolute correlation value above some static threshold value \(\alpha\). The MIS of this market graph produces a diversified portfolio [18, Chapter 4], but only considering the correlation measure. The assets present in the MIS are then used to generate the sub-systems. A thorough explanation of the mechanism to accomplish this task is provided subsequently.
MIS is a maximal set of vertices in a graph such that no two of them are adjacent. It is a classic NP-Hard graph problem [19] that has been studied extensively. There is also a well-known QUBO formulation for MIS that facilitates the use of quantum optimization techniques to address the problem.
\[H=-\sum_{i\in V}x_{i}+2\sum_{(i,j)\in E}x_{i}x_{j} \tag{6}\]
The optimization routine to achieve the ground state can be either annealing or gate-based quantum systems. The formulation in (6) requires \(N\) logical qubits for an MIS problem of \(N\) variables. It might seem that there is no decrease whatsoever in the number of qubits. To handle this, we add a second-level division to address the MIS problem. The mechanism to achieve that is explained in Section 3.2.2
Figure 1: Flow of the proposed method. The first step is obtaining the expected returns vector and the covariance (and correlation) matrix from the data. The correlation matrix is thresholded to create the market graph. The MIS of the graph gives assets used to create sub-systems. The MIS problem may be further decomposed as described in Section 3.2.2. The sub-systems are then solved and combined following the LSSA methodology.
The assets selected in the MIS are indicated by the ground state of the Hamiltonian in (6). These assets act as placeholders for dividing the full problem. The market graph contains many assets connected to each of those present in the MIS. We propose to make sub-systems out of each of these groups. Therefore, if there are \(N_{m}\) assets in the MIS, then we create \(N_{s}=N_{m}\) sub-systems such that each of those contains a group of assets connected to the corresponding placeholder asset in the MIS. However, for practical purposes, we limit the sub-system size to \(N_{g}\) (dependent on the available quantum hardware). Eventually, we have \(N_{s}\) sub-systems each containing a maximum of \(N_{g}\) most correlated assets connected to each of the placeholder assets. After the creation of the sub-systems, the LSSA methodology shown in Section 3.1 is followed. The sub-systems are independently solved and the solutions are recombined using a VQE with a circuit structure similar to [15]. The entire process is shown in Fig. 1 for a better understanding.
Since correlation is not a transitive property, it is possible for an asset to be present in many sub-systems, even though their placeholder assets are not connected. Thus, the sub-systems are not mutually exclusive. It facilitates the model to capture dependencies across the different groups in the market. The structure of the market graph, and thereafter the sub-systems are highly dependent on the threshold value. A very low threshold would produce a complete graph and therefore would reduce the sampling procedure to random selection. Thus, the original LSSA becomes a special case of the proposed method under low threshold conditions. A very high threshold would produce an unconnected graph, with single assets in some sub-systems. Accordingly, the right balance of the threshold is needed to ascertain that the resulting market graph is neither too sparse nor too dense.
A suitable trade-off between the sub-system size \(N_{g}\) and the number of samples \(N_{s}\) is also needed. More samples do not necessarily mean better results, although larger sub-systems do indicate improved performance, the latter being dependent on the quantum hardware capability. The MIS-based sampling procedure presented here creates far fewer samples (worst-case \(N\) samples) without compromising the quality of results. It builds problem-specific sub-systems that capture most of the strongest interactions between the variables in a limited number of samples. It not only allows large-scale portfolio optimization problems but also ensures excellent performance.
Figure 2: Closing prices of the 64 assets over the past 5 years shown in log scale
#### 3.2.2 Level 2 LSSA with MIS (random sampling)
The method proposed in Section 3.2.1 requires a \(N\) qubit machine to solve a portfolio optimization problem of \(N\) assets. To accommodate much larger problems on the available hardware, we present a second-level division of the MIS problem itself. We use LSSA in its original form to first solve the MIS of the market graph and then use the final result to extract the placeholder assets, and subsequently create sub-systems for the portfolio optimization problem. To elaborate further, the MIS problem is addressed as follows - a) random sampling to create sub-systems with fewer variables than the original problem, b) solving the smaller problems using quantum optimization methods, c) recombining the solutions using VQE to approximate the ground state of the original MIS problem. Algorithm 1 shows the step-by-step procedure of solving a portfolio optimization problem of \(N\) assets using the proposed methodology.
```
0:\(\mu,\Sigma,\alpha,N,N_{g},N_{s}\)
0: Optimal asset investment for maximum returns and minimum risk Create a market graph \(G=(V,E)\) of N assets by thresholding the correlation matrix at \(\alpha\); Create \(N_{s}\) sub-systems each of size \(N_{g}\) by random sampling; \(i\gets 1\) \(\texttt{results}\leftarrow\left[...\right]_{N_{s}\times N}\) while\(i\leq N_{s}\)do \(\texttt{results[i]}\gets x:\operatorname*{argmin}_{x}H=-\sum_{j\in V}x_{j}+2 \sum_{(j,k)\in E}x_{j}x_{k}\) \(i\gets i+1\) endwhile \(\mathcal{S}^{wc}=\sum_{i=1}^{N_{s}}C^{(i)}\cdot\texttt{results[i]}\) Apply VQE to optimize the coefficients \(C^{(i)}\) \(\texttt{MIS\_assets}\gets sign(\mathcal{S}^{wc})\) \(\triangleright\) End of MIS portfolio\(\texttt{subsystems}\leftarrow[...]\) for asset in MIS\(\_\)assets do \(\texttt{sub\_system}\gets N_{g}\) highest correlated vertices to asset in \(G\); add sub\(\_\)system to portfolio\(\_\)subsystems endfor \(i\gets 1\) \(k\gets count(\texttt{MIS\_assets})\) portfolio\(\texttt{results}\leftarrow\left[...\right]_{k\times N}\) while\(i\leq k\)do portfolio\(\texttt{results[i]}\gets x:\operatorname*{argmin}_{x}H=-\mu^{T}x+\gamma x^{T}\Sigma x\) \(i\gets i+1\) endwhile \(\mathcal{S}^{wc}=\sum_{i=1}^{k}C^{(i)}\cdot\texttt{portfolio\_results[i]}\) Apply VQE to optimize the coefficients \(C^{(i)}\) \(\texttt{portfolio\_assets}\gets sign(\mathcal{S}^{wc})\) \(\triangleright\) End of Portfolio Optimization
```
**Algorithm 1** MIS based portfolio optimization
Unlike the conventional portfolio optimization problem, the MIS formulation in (6) shows that quadratic interactions between all the variables are equally valued. This means that any selection of assets for the sub-systems will be mathematically similar to any other in approximating the full system objective. It is, therefore, justified that random sampling to construct sub-systems is as good as any other method in this case. However, there is now a need for more samples. Yet, the number of samples \(N_{s}\) is not substantial because the market graph is not usually dense. The density of the graph depends precisely on the threshold value, which can be chosen to control the number of edge connections as \(\mathcal{O}(N)\). With a little tuning, a reasonable estimation can be drawn for both the threshold value and the number of samples.
## 4 Implementation
The proposed method is tested on real data from the Indian stock market [20] with the D-Wave Advantage_system4.1 [14] for sub-system size \(N_{g}=N/2\) for \(N\in\{8,16,32,40,64\}\) and \(N_{g}=N/4\) for \(N\in\{40,64\}\). The number of samples drawn is limited to a maximum of \(N/2\) after testing for higher values without significant improvement. The VQE amplitude optimization is carried out on a local shot-based Qiskit [21] simulator of \(N_{gb}=\lceil log_{2}N_{s}\rceil\) qubits with 2048 shots. We use data over the past 5 years to compute the expected returns and co-variances of the assets. The Indian stock market data for \(N=64\) assets is shown in Fig. 2. The closing price of the assets is shown in a _log_ scale for better visibility. The risk-aversion factor \(\gamma\) in (1) is chosen as 0.5 in all the experiments. The classical optimizer to optimize the parameters of VQE is COBYLA. Experiments showed no substantial differences in using other optimizers. The threshold value, \(\alpha\) for the construction of the market graph is taken as 0.25 (meaning all asset connections with absolute strength less than 0.25 are removed) after careful assessment of the sparsity of the resulting graph. The python library PyQUBO [22; 23] was used to build the QUBO models. In this implementation, we limit to simulated and quantum annealing to solve the QUBOs of the individual sub-systems and use a local simulator to apply the VQE for amplitude estimation.
Figure 3: Approximation ratios of different methods. The labels indicate the different techniques used to solve the problem. LSSA_random represents the original LSSA approach with random sampling, LSSA_MIS represents the approach described in Section 3.2.1 and LSSA_MIS_random is the process explained in Section 3.2.2
Results and Discussion
To establish a comparison of performance between the different methods, we define an approximation ratio
\[R_{ar}=\frac{\texttt{Method GSE}}{\texttt{Classical GSE}} \tag{7}\]
where GSE is the Ground State Energy, Method refers to the different methods, and Classical GSE is the Exact solution energy for \(N<20\) and D-Wave-Tabu solver energy for \(N\geq 20\) assets. Overall, an \(N\) asset portfolio optimization problem is solved by first deriving \(N_{s}\) samples and solving MIS of the market graph on a \(N_{g}\) qubit quantum annealer and \(\lceil log_{2}N_{s}\rceil\) qubit gate-based machine. The results are then used to solve the original portfolio problem on a \(N_{g}\) qubit quantum annealer and \(\lceil log_{2}|MIS|\rceil\) qubit gate-based machine (\(|MIS|\ll N\) is the cardinality of the Maximum independent set). The total calls made to the annealer are \(N_{s}+|MIS|\).
The approximation ratios are shown in Fig 3. On the horizontal axis, the ticks represent the two values \(N\) (_problem size_) and \(N_{g}\) (_sub-system size_) separated by a hyphen (\(-\)). So, 40-20 corresponds to a portfolio optimization problem of 40 assets solved by creating samples each of size 20. The ratios shown here are derived by experimenting with different values of \(N_{s}\) and choosing the one that is best among them. The results indicate that the proposed method performs at par with the Tabu solver with much fewer samples than the original method. Specifically, for \(N=64\) assets and \(N_{g}=32\), LSSA_random requires \(N_{s}=32\) samples, while LSSA_MIS and LSSA_MIS_random use only \(N_{s}=12\) and \(N_{s}=13\) samples respectively. Table 1 shows the experimental data for the least number of sub-systems required by different methods for the performance indicated in Fig. 3.
Fig. 4 shows the efficient frontier and represents the expected returns at various levels of volatility (risk) for a problem with 64 assets and a sub-system size of 32 variables. The individual assets are shown in orange dots (for better visualization, the axes are clipped). The assets chosen for investment by the LSSA_MIS_random procedure are shown in red. The return and volatility of the portfolio formed by choosing these assets are shown in a blue plus symbol which is substantially better than random picking of stocks. To put things into perspective, random combinations of assets are picked and their corresponding values are shown as scattered dots in blue. The proposed method only
Figure 4: Return and volatility of the portfolio optimization problem with \(N=64\) assets. In Fig. (a), the scattered dots in blue represent results from a random sampling of 5000 different portfolios. The orange and red dots represent individual assets and the stocks chosen by LSSA_MIS_random, respectively. The axes are clipped for better visibility. In Fig. (b), the results obtained from classical solver D-Wave Tabu are shown by the triangle, the LSSA_random by the star, the LSSA_MIS by the square and LSSA_MIS_random by the plus. Fig. (b) is a zoomed-in version of Fig. (a)
uses 13 samples for portfolio optimization and 16 samples for solving MIS, while the baseline LSSA_random uses 32 samples.
Although the results shown here are stochastic in nature, the data from a number of experiments indicate that the proposed method matches the baseline method in terms of results and thus provides a practical and feasible approach to portfolio optimization using quantum computing and quantum annealing.
## 6 Conclusion and Future Work
In this paper, we introduced an efficient way to create sub-systems for the LSSA technique. We demonstrated our algorithm on a simple portfolio optimization problem on a real dataset. Our approach significantly reduces the number of samples required compared to the earlier method, i.e. it lowers the requirements to apply the technique on real quantum hardware for practically relevant problems. We recognise that the MIS-based method is applicable when the assets are from different sectors of the market and thus have different degrees of correlation in general. LSSA becomes a specific case of the proposed method for a small set of strongly correlated assets. Therefore, the proposed method is best suited for problem instances where there are grades of diversity, which is usually true in a real setting.
In this study, we have relied on solving the sub-systems using Quantum Annealing. However, even gate-based quantum chips can be used. Algorithms like the Grover Adaptive Search [24] could be better methods to solve the individual sub-systems. The VQE step to estimate the coefficients of the ground state combinations might be replaced with the Variational non-orthogonal optimization strategy [25] to allow much larger problem sizes. We aim to explore these ideas in the future. There are still several practical considerations to the problem of portfolio optimization. Besides being able to solve large-scale problems efficiently, the solutions must adhere to practical constraints and be flexible to accommodate changes as per market conditions. Overall, our findings suggest that a hybrid of annealing and gate-based quantum computing can be a promising tool for portfolio optimization, and we look forward for further exploration of this exciting field.
## Acknowledgements
The authors express their sincere gratitude to Mr. Ankit Khandelwal, Mr. Manoj Nambiar and Dr. Gautam Shroff from TCS Research for their invaluable insights, unwavering support and encouragement.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \(N\) & \(N_{g}\) & \multicolumn{3}{c}{\(N_{s}\)} \\ \hline & & LSSA\_random & LSSA\_MIS & LSSA\_MIS\_random \\
8 & 4 & 4 & 4 & 4 - MIS, 4 - PO \\
16 & 8 & 4 & 4 & 4 - MIS, 4 - PO \\
32 & 16 & 16 & 8 & 8 - MIS, 8 - PO \\
40 & 20 & 16 & 7 & 8 - MIS, 8 - PO \\
40 & 10 & 8 & 8 & 8 - MIS, 8 - PO \\
64 & 32 & 32 & 12 & 16 - MIS, 13 - PO \\
64 & 16 & 16 & 16 & 16 - MIS, 16 - PO \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental data showing the proposed method using fewer sub-systems. MIS refers to the number of samples derived for solving MIS problem and PO refers to the number of samples used for Portfolio Optimization problem. |
2302.05641 | Emulator-based Bayesian Inference on Non-Proportional Scintillation
Models by Compton-Edge Probing | Scintillator detector response modelling has become an essential tool in
various research fields such as particle and nuclear physics, astronomy or
geophysics. Yet, due to the system complexity and the requirement for accurate
electron response measurements, model inference and calibration remains a
challenge. Here, we propose Compton edge probing to perform non-proportional
scintillation model (NPSM) inference for inorganic scintillators. We use
laboratory-based gamma-ray radiation measurements with a NaI(Tl) scintillator
to perform Bayesian inference on a NPSM. Further, we apply machine learning to
emulate the detector response obtained by Monte Carlo simulations. We show that
the proposed methodology successfully constrains the NPSM and hereby quantifies
the intrinsic resolution. Moreover, using the trained emulators, we can predict
the spectral Compton edge dynamics as a function of the parameterized
scintillation mechanisms. The presented framework offers a novel way to infer
NPSMs for any inorganic scintillator without the need for additional electron
response measurements. | David Breitenmoser, Francesco Cerutti, Gernot Butterweck, Malgorzata Magdalena Kasprzak, Sabine Mayer | 2023-02-11T09:59:53Z | http://arxiv.org/abs/2302.05641v3 | # Emulator-based Bayesian Inference on Non-Proportional Scintillation Models by Compton-Edge Probing
###### Abstract
Scintillator detector response modelling has become an essential tool in various research fields such as particle and nuclear physics, astronomy or geophysics. Yet, due to the system complexity and the requirement for accurate electron response measurements, model inference and calibration remains a challenge. Here, we propose Compton edge probing to perform non-proportional scintillation model (NPSM) inference for inorganic scintillators. We use laboratory-based gamma-ray radiation measurements with a NaI(Tl) scintillator to perform Bayesian inference on a NPSM. Further, we apply machine learning to emulate the detector response obtained by Monte Carlo simulations. We show that the proposed methodology successfully constrains the NPSM and hereby quantifies the intrinsic resolution. Moreover, using the trained emulators, we can predict the spectral Compton edge dynamics as a function of the parameterized scintillation mechanisms. The presented framework offers a novel way to infer NPSMs for any inorganic scintillator without the need for additional electron response measurements.
**Keywords:** Bayesian inversion, Gamma-ray spectrometry, Inorganic scintillator, Machine learning, Monte Carlo, Surrogate modelling
## Introduction
Inorganic scintillation detectors are a prevalent tool to measure ionizing radiation in various research fields such as nuclear and particle physics, astronomy or planetary science [1, 2, 3, 4, 5, 6, 7]. Other applications include radiation protection, medical diagnostics and homeland security [8, 9]. In almost all applications, the measured signal needs to be deconvolved to infer the properties of interest, e.g. the flux from a gamma-ray burst or the elemental composition on a comet. This deconvolution requires accurate detector response models and consequently detailed knowledge about the scintillation mechanisms themselves.
Detector response models can either be derived empirically by radiation measurements or numerically using Monte Carlo simulations [10]. Regarding the numerical derivation, the most common approach to simulate the detector response is to use a proportional energy deposition model. In this model, the scintillation light yield \(L\) is assumed to be proportional to the deposited energy \(E\)[6, 11]. Consequently, the detector response characterization is reduced to a comparably simple energy deposition problem, which can be solved by any standard multi-purpose Monte Carlo code.
However, thanks to the development of the Compton coincidence measurement technique [12], recent studies could conclusively confirm the conjecture reported in earlier investigations [13, 14, 15] that not only organic but also inorganic scintillators exhibit a pronounced non-proportional relation between the deposited energy and the scintillation light yield [16, 17, 18]. The origin of this scintillation non-proportionality seems to be linked to the intrinsic scintillation response to electrons and the different mechanisms associated with the creation and transport of excitation carriers in the scintillation crystal [19, 20]. Nonetheless, our understanding about these phenomena is still far from complete and, thanks to the advent of novel experimental techniques and the development of new scintillator materials, interest in scintillation physics has steadily grown over the past years [16, 17, 18, 19, 20, 21, 22, 23, 24].
Regarding the detector response modelling, the scintillation non-proportionality has two major implications. First, it leads to an intrinsic spectral broadening and thereby sets a lower limit on the spectral resolution achievable with the corresponding scintillator [1, 25, 26, 27, 28]. Second, various studies stated the conjecture that specific spectral features such as the Compton edges are shifted and distorted as a result of the non-proportional scintillation response [1, 14, 15, 29, 30]. Furthermore, additional studies revealed a complex dependence of the scintillation non-proportionality on various scintillator properties including the activator concentration, the temperature and the crystal size, among others [1, 21, 22, 25, 28, 31, 32, 33, 34].
Based on these findings, we conclude that non-proportional scintillation models (NPSM) should be included in the detector response simulations to prevent systematic errors in the predicted spectral response. Non-proportional effects are known to increase with increasing crystal size [25, 28, 31]. NPSMs are therefore particularly relevant for scintillators with large crystal volumes, e.g. in dark matter research, total absorption spectroscopy or remote sensing [1, 2, 3, 4, 5, 6, 7, 30]. In addition, especially due to the sensitivity on the activator concentration and impurities [34], NPSMs need to be calibrated for each individual detector system. In case the scintillator properties change after detector deployment, e.g. due to radiation damage or temperature changes in space, this calibration should be repeated regularly.
Currently, K-dip spectroscopy, the already mentioned Compton coincidence technique as well as electron beam measurements are the only available methods to calibrate NPSM [35, 36, 37, 12, 38]. Moreover, only a very limited number of laboratories are able to perform these measurements. Therefore, these methods are not readily available for extensive calibration campaigns of custom detectors, e.g. large satellite probes or scintillators for dark matter research. Additionally, they can not be applied during detector deployment, which, as discussed above, might be important for certain applications such as deep space missions.
In this study, we propose Compton edge probing together with Bayesian inversion to infer and calibrate NPSMs. This approach is motivated by the already mentioned conjecture, that the Compton edge shifts as a result of the scintillation non-proportionality [1, 14, 15, 29, 30]. We obtained the spectral Compton edge data by gamma-ray spectrometry using a NaI(Tl) scintillator and calibrated radionuclide sources for photon irradiations under laboratory conditions. We applied Bayesian inversion with state-of-the-art Markov-Chain Monte Carlo algorithms [39] to perform the NPSM inference with the gamma-ray spectral data. In contrast to traditional frequentist methods or simple data-driven optimization algorithms, a Bayesian approach offers a natural, consistent and transparent way of combining prior information with empirical data to infer scientific model properties using a solid decision theory framework [40, 41, 42]. We simulated the detector response using a multi-purpose Monte Carlo radiation transport code in combination with parallel computing. To meet the required evaluation speed for the Bayesian inversion solver, we used machine learning trained polynomial chaos expansion (PCE) surrogate models to emulate the simulated detector response [43, 44]. This new approach offers not only a novel way to calibrate NPSMs with minimal effort--especially during the detector deployment--but it also allows new insights into the non-proportional scintillation physics without the need for additional electron response measurements.
## Results
### Compton edge probing
To obtain the spectral Compton edge data, we performed gamma-ray spectrometry under controlled laboratory conditions [30]. The adopted spectrometer consisted of four individual 10.2 cm \(\times\) 10.2 cm \(\times\) 40.6 cm prismatic NaI(Tl) scintillation crystals. We used seven different calibrated radionuclide sources (\({}^{57}\)Co, \({}^{60}\)Co, \({}^{88}\)Y, \({}^{109}\)Cd, \({}^{133}\)Ba, \({}^{137}\)Cs and \({}^{152}\)Eu) for the radiation measurements. However, only the \({}^{60}\)Co, \({}^{88}\)Y and \({}^{137}\)Cs measurements could be used for Compton edge probing. For the remaining sources, the Compton edges were obscured by additional full energy peaks and associated Compton continua. We used those remaining sources for energy and resolution calibrations. A schematic depiction of the measurement setup is shown in Fig. 1a.
### Forward modelling
We simulated the detector response for the performed radiation measurements using the multi-purpose Monte Carlo code FLUKA [46]. The performed simulations feature fully coupled photon, electron and positron radiation transport for our source-detector configuration with a lower kinetic energy threshold of 1 keV. As shown in Fig. 1a, the applied mass model includes all relevant detector and source components in high detail. On the other hand, the laboratory room together with additional instruments and equipment are modelled in less detail. For this simplifications, care was taken to preserve the overall opacity as well as the mass density.
We used a mechanistic model recently published by Payne and his co-workers to include the non-proportional scintillation physics in our simulations [17, 18, 22]. In general, the sequence of scintillation processes in inorganic scintillators can be qualitatively divided in five steps [48, 20, 49]. After interaction of the ionizing radiation with the scintillator, the emitted high-energetic electrons are relaxed by the production of numerous secondary electrons, phonons and plasmons. The low energetic secondary electrons are then thermalized by a phonon coupling mechanism producing excitation carriers, i.e. electron-hole pairs (\(e^{-}/h\)) and excitons. These excitation carriers are then transferred to the luminescent centers within the scintillator crystal, where they recombine and induce radiative relaxation of the excited luminescent centers producing scintillation photons. The first two processes, i.e. the interaction of the ionizing radiation with the scintillator as well as the \(e^{-}\)-\(e^{-}\) relaxation, are explicitly simulated by the Monte Carlo code. The creation and migration of the excitation carriers on the other hand is accounted for by Payne's mechanistic model.
In this mechanistic model it is assumed that only excitons are capable to radiatively recombine at the luminescent centers. Consequently, \(e^{-}/h\) pairs need to convert to excitons by the classic Onsager mechanism [50] in order to contribute to the scintillation emission. In addition, creation and migration of the excitation carriers compete with several quenching phenomena. The quenching mechanisms considered in Payne's model are the trapping of \(e^{-}/h\) pairs at point defects [20, 22] as well as exciton-exciton annihilation described by the Birks mechanism [51].
Using this NPSM, the non-proportional light yield \(L\) as a function of the differential energy loss \(dE\) per differential path length \(ds\) for electrons is given by [22]:
\[L\left(dE/ds\right)\propto\frac{1-\eta_{e/h}\exp\left[-\frac{dE/ds}{dE/ds|_{ \mathrm{Ons}}}\exp\left(-\frac{dE/ds|_{\mathrm{Tap}}}{dE/ds}\right)\right]}{1+ \frac{dE/ds}{dE/ds|_{\mathrm{Birks}}}} \tag{1}\]
where \(\eta_{e/h}\), \(dE/ds\mid_{\mathrm{Ons}}\), \(dE/ds\mid_{\mathrm{Trap}}\) and \(dE/ds\mid_{\mathrm{Birks}}\) are the model parameters characterizing the fraction of excitation carriers, which are created as \(e^{-}/h\) pairs at the thermalization phase, as well as the stopping power related to the Onsager, trapping and Birks mechanisms, respectively. A scheme
Figure 1: **Compton edge probing to perform Bayesian inference on non-proportional scintillation models.****a** Monte Carlo mass model of the experimental setup to perform Compton edge probing with an inorganic gamma-ray scintillation spectrometer under laboratory conditions. The spectrometer consists of four individual 10.2 cm \(\times\) 10.2 cm \(\times\) 40.6 cm prismatic NaI(Tl) scintillation crystals with the associated photomultiplier tubes (PMT), the electronic components, e.g. the multi-channel analyzers (MCA), embedded in a thermal-insulating and vibration-damping polyethylene (PE) foam protected by a rugged aluminum detector box. We inserted radiation sources consisting of a radionuclide carrying ion exchange sphere (diameter 1 mm) embedded in a 25 mm \(\times\) 3 mm solid plastic disc into a custom low absorption source holder made out of a polylactide polymer (PLA) and placed this holder on a tripod in a fixed distance of 1 m to the detector front on the central detector \(x\)-axis. The mass model figures were created using the graphical interface FLAIR [45]. For better visibility and interpretability, we applied false colors. **b** Overview of the Bayesian inference framework highlighting the gamma-ray spectrometry based Compton edge probing measurements, the Monte Carlo simulations using the multi-purpose code FLUKA [46] combined with the machine learning trained polynomial chaos expansion (PCE) emulator models supported by principal component analysis (PCA) as well as the Bayesian inference by Markov Chain Monte Carlo (MCMC) itself using UQLab [47]. **c** Radiation transport mechanisms inside the inorganic scintillation crystal, which is surrounded by a thin reflector layer and a rugged aluminum crystal casing. **d** Schematic representation of an inorganic scintillation crystal lattice including the activator atoms and point defects. **e** Mechanistic depictions of the various scintillation and quenching pathways for \(e^{-}/h\) pairs as well as excitons within the inorganic scintillation crystal lattice.
highlighting the individual scintillation processes included in the present study is presented in the Figs. 1c-e.
### Bayesian inversion
We applied Bayesian inversion using Markov Chain Monte Carlo [39] to infer the NPSM parameters as well as to predict spectral and resolution scintillator properties from the measured Compton edge spectra and our forward model. Following the principle of maximum entropy [52], we used uniform priors with the support defined by the available empirical data from previous studies [17, 18, 22]. In addition, we fixed the Onsager related stopping power parameter \(dE/ds\mid_{\rm Ons}\) to 36.4 MeV cm\({}^{-1}\) as suggested by previous investigators [18, 22]. Because the high-fidelity radiation transport simulations described in the previous section are computationally very intense, we emulated the detector response as a function of the NPSM parameters using a machine learning trained vector-valued PCE surrogate
Figure 2: **Posterior distribution estimate.** The off-diagonal subfigures present samples from the multivariate posterior marginals given the experimental data set \(\mathbf{y}\) for the model parameters \(\mathbf{x}\coloneqq(dE/ds\mid_{\rm Birk},\ \eta_{e/h},\ dE/ds\mid_{\rm map})\)[7]. We colored these samples by the corresponding normalized multivariate log-likelihood function values \(\log\pi^{\prime}\left(\mathbf{y}\mid/\mathbf{x}\right)\). In addition, the Spearman’s rank correlation coefficient \(r_{s}\) is provided for the model parameters in the corresponding off-diagonal subfigures. The subfigures on the diagonal axis highlight the normalized univariate marginal likelihood \(\pi^{\prime}\left(x\mid\mathbf{y}\right)\) for the model parameter \(x\). Both, the univariate and multivariate likelihood values, were normalized by their corresponding global maxima. Derived posterior point estimators, i.e. the maximum a posteriori probability estimate \(\mathbf{x}_{\rm MAP}\), the posterior mean \(\mathbf{x}_{\rm Mean}\) and the posterior median \(\mathbf{x}_{\rm Median}\), are indicated as well in each subfigure.
model [43]. We performed the Bayesian inversion on the \({}^{60}\)Co (activity \(A=3.08(5)\times 10^{5}\) Bq) spectral dataset [30] leaving the remaining measurements for validation.
Using the Bayesian framework, we present the solution to our inversion problem as a multivariate posterior distribution estimate in Fig. 2. We find a unimodal solution with a maximum a posteriori (MAP) probability estimate given by \(\eta_{e/h}=5.96^{+0.08}_{-0.14}\times 10^{-1}\), \(dE/ds\mid_{\rm Trap}=1.46^{+0.02}_{-0.17}\times 10^{1}\) MeV cm\({}^{-1}\) and \(dE/ds\mid_{\rm Birks}=3.22^{+0.38}_{-0.36}\times 10^{2}\) MeV cm\({}^{-1}\), where we used the central credible intervals with a probability mass of 95% to estimate the associated uncertainties. Considering these uncertainty estimates, we observe only minor differences between the different posterior point estimators reported in Fig. 2.
### Compton edge predictions
We can use the trained PCE surrogate model to predict the spectral Compton edge as a function of the NPSM parameters and consequently the parameterized scintillation and quenching phenomena. In the Figs. 3a-c, we present the spectral response of the PCE surrogate model as a function of the Birks related stopping power parameter \(dE/ds\mid_{\rm Birks}\), the free carrier fraction \(\eta_{e/h}\) and the trapping related stopping power parameter \(dE/ds\mid_{\rm Trap}\). We observe a shift of the Compton edge toward smaller spectral energies for an increase in \(dE/ds\mid_{\rm Birks}\) and \(\eta_{e/h}\) as well as a decrease in \(dE/ds\mid_{\rm Trap}\).
We leveraged the analytical relation between the polynomial chaos expansion and the Hoeffding-Sobol decomposition [53] to perform a global sensitivity analysis of the NPSM. In Fig. 3e, we present total Sobol indices \(S^{T}\) for the model parameters \(dE/ds\mid_{\rm Birks}\), \(\eta_{e/h}\) and \(dE/ds\mid_{\rm Trap}\). We find that the total Sobol indices can be ordered as \(S^{T}(\eta_{e/h})>S^{T}(dE/ds\mid_{\rm Birks})>S^{T}(dE/ds\mid_{\rm Trap})\) over the entire spectral Compton edge domain indicating a corresponding contribution to the total model response variance.
In addition, we can also predict the spectral Compton edge using the prior and posterior predictive density estimates shown in Fig. 3d. A comparison of these densities indicates that our methodology successfully constrains the adopted NPSM. However, we find also some model discrepancies, especially around the Compton continuum at the very low end of the investigated spectral range (\(<920\) keV). From a modelling perspective, it is interesting to note that we observe no significant difference for Compton edge predictions using the various point estimators discussed in the previous section.
### Intrinsic resolution
With the Bayesian calibrated NPSM, we are able to quantify the intrinsic spectral resolution of our detector system using our numerical forward model. We adopted a set of multiple monoenergetic Monte Carlo simulations to characterize the intrinsic resolution for different spectral energies. Using this dataset, we then trained a Gaussian process (GP) regression model to predict the intrinsic resolution characterized by the standard deviation \(\sigma\) for a given spectral energy \(E\). The resulting GP model predictions together with the intrinsic data are highlighted in Fig. 3f. In the same graph, we include also the empirical resolution model as well as the corresponding empirical data, both published in [30].
Comparing the intrinsic and empirical spectral resolution, we find an almost constant ratio \(\sigma_{\rm intr}/\sigma_{\rm tot}\approx 0.6\) for \(E\gtrsim 1500\) keV. Around \(E\approx 420\) keV, there is a pronounced peak with \(\sigma_{\rm intr}/\sigma_{\rm tot}\approx 0.65\) and for \(E\lesssim 420\) keV, we observe a significant decrease in \(\sigma_{\rm intr}/\sigma_{\rm tot}\) with decreasing spectral energy \(E\). Moreover, we find a more complex behaviour in \(\sigma_{\rm intr}\) for \(E\lesssim 110\) keV. For \(28\) keV \(\lesssim E\lesssim 60\) keV, the K-absorption edge for iodine K[I] at \(E=33.1694(4)\) keV [54] alters the resolution significantly. On the other hand, at even smaller spectral energies, there is again a pronounced increase in \(\sigma_{\rm intr}\) with decreasing spectral energy compared to the mere moderate increase for \(60\) keV \(\lesssim E\lesssim 110\) keV.
### Bayesian calibrated NPSM simulations
In addition to the insights into the Compton edge dynamics as well as the intrinsic resolution, the Bayesian inferred NPSM in combination with our forward model offers also the possibility to predict
Figure 3: **Compton edge and intrinsic resolution predictions.****a–c** Compton edge dynamics characterized by the trained polynomial chaos expansion emulator as a function of the individual non-proportional scintillation model parameters, i.e. the Birks related stopping power parameter \(dE/ds\left|{}_{\text{Birks}}\right.\), the free carrier fraction \(\eta_{c/h}\) as well as the trapping related stopping power parameter \(dE/ds\left|{}_{\text{Trap}}\right.\), for the corresponding prior range given in Table 1. We fixed the remaining parameters at the corresponding maximum a posterior probability estimate values \(\mathbf{x}_{\text{MAP}}\). The experimental data \(\mathbf{y}\) from the measurement with a \({}^{60}\)Co source (activity \(A=3.08(5)\times 10^{5}\) Bq) is indicated as well [30]. \(d\) In this graph, we show the prior and posterior predictive distributions using the \(99\%\) central credible interval. In addition, the experimental data \(\mathbf{y}\) together with the derived posterior point estimators, i.e. the maximum a posteriori probability estimate \(\mathbf{x}_{\text{MAP}}\), the posterior mean \(\mathbf{x}_{\text{Mean}}\) and the posterior median \(\mathbf{x}_{\text{Median}}\), are indicated. **e** We show the total Sobol indices \(S^{T}\) computed by the polynomial chaos expansion emulator [53] as a function of the spectral energy for the individual model parameters. **f** This graph presents the empirical (\(\sigma_{\text{tot}}\)) and the intrinsic (\(\sigma_{\text{int}}\)) spectral resolution for the adopted detector system characterized by the standard deviation \(\sigma\) as a function of the spectral energy \(E\). The empirical resolution data as well as the corresponding empirical resolution model were presented already elsewhere [30]. For the zoomed inset with \(E<110\) keV, the K-absorption edge for iodine K[I] is highlighted [54]. For all graphs presented in this figure, uncertainties are provided as 1 standard deviation (SD) values (coverage factor \(k=1\)).
the full spectral detector response for new radiation sources accounting for non-proportional scintillation effects. We used the \({}^{88}\)Y (\(A=6.83(14)\times 10^{5}\) Bq) and \({}^{137}\)Cs (\(A=2.266(34)\times 10^{5}\) Bq) radiation measurements to validate our calibrated NPSM. For the Monte Carlo simulations, we applied the posterior point estimators \(\mathbf{x}_{\rm MAP}\) in combination with the intrinsic and empirical resolution models discussed in the previous sections.
In Fig. 4, we present the measured and simulated spectral detector response for \({}^{88}\)Y and \({}^{137}\)Cs together with \({}^{60}\)Co, whose Compton edge domain was used to perform the Bayesian inversion. For the simulations, we adopted a standard proportional scintillation model as well as the Bayesian inferred NPSM presented in this study. We quantify the Compton edge shift between the prediction \(E_{\rm CE}\) according to the Compton scattering theory and the measured detector responses to be \(\approx 20\) keV for all measurements highlighted in Fig. 4. For all three measurements, we observe a significant improvement in the Compton edge prediction for the NPSM simulations compared to the standard proportional approach. However, there are still some discrepancies at the lower end of the Compton edge domain. Moreover, we find also some deviations between the Compton edge and the full energy peak for \({}^{88}\)Y and \({}^{137}\)Cs. It is important to note that these discrepancies are smaller or at least of similar size for the NPSM simulations compared to the proportional approach indicating that the former performs statistically significantly better over the entire spectral domain. Additional validation results for \({}^{57}\)Co, \({}^{109}\)Cd, \({}^{133}\)Ba and \({}^{152}\)Eu together with a detailed uncertainty analysis for each source are attached in the supplementary materials for this study.
Figure 4: **Simulated spectral detector response using a Bayesian calibrated non-proportional model.** The measured and simulated spectral detector responses are shown for three different calibrated radionuclide sources: **a**\({}^{60}\)Co (\(A=3.08(5)\times 10^{5}\) Bq). **b**\({}^{88}\)Y (\(A=6.83(14)\times 10^{5}\) Bq). **c**\({}^{137}\)Cs (\(A=2.266(34)\times 10^{5}\) Bq). The zoomed-in subfigures highlight the Compton edge region and include also the Compton edge \(E_{\rm CE}\) predicted by the Compton scattering theory [10]. The measured net count rate \(c_{\rm exp}\) as well as the simulated net count rate adopting a proportional scintillation model \(c_{\rm sim}\) were presented already elsewhere [30]. We obtained the simulated net count rate \(c_{\rm sim}^{\rm corr}\) the same way as \(c_{\rm sim}\) but accounted for the non-proportional scintillation effects by the Bayesian calibrated NPSM presented in this study. For the calibration, we used the \({}^{60}\)Co dataset. For all graphs presented in this figure, uncertainties are provided as 1 standard deviation (SD) shaded areas (coverage factor \(k=1\)). These uncertainties are only visible for \(c_{\rm exp}\).
## Discussion
Here we demonstrated that Compton edge probing combined with Monte Carlo simulations and Bayesian inversion can successfully infer NPSMs for NaI(Tl) inorganic scintillators. A detailed Bayesian data analysis revealed no significant differences between standard posterior point estimators and the related spectral detector response predictions. Consequently, the Bayesian inversion results indicate that our methodology successfully constrained the NPSM parameters to a unique solution.
Various studies reported a distortion of the Compton edge in gamma-ray spectrometry with inorganic scintillators [1, 14, 15, 29, 30]. In this study, we presented conclusive evidence that this shift is, at least partly, the result of the scintillation non-proportionality. Moreover, using our numerical models, we can predict the Compton edge shift as a function of the NPSM parameters. We observed a Compton edge shift toward smaller spectral energies for an increase in \(dE/ds\mid_{\rm Birks}\) and \(\eta_{e/h}\) as well as a decrease in \(dE/ds\mid_{\rm Trap}\). These results imply that an enhanced scintillation non-proportionality promotes a Compton edge shift toward smaller spectral energies. In line with these observations, the non-proportionality is enhanced by a large \(e^{-}/h\) fraction, an increased Birks mechanism as well as a reduction in the \(e^{-}/h\) trapping rate [20, 24, 49].
Further, we quantified the sensitivity of the NPSM on the individual NPSM parameters using a PCE-based Sobol decomposition approach. The sensitivity results indicate that \(e^{-}/h\) has the highest sensitivity on the Compton edge, followed by \(dE/ds\mid_{\rm Birks}\) and \(dE/ds\mid_{\rm Trap}\). However, previous studies showed a pronounced dependence of \(dE/ds\mid_{\rm Trap}\) on the ambient temperature [22, 33]. In addition, we expect also a substantial change of the crystal structure by radiation damage, i.e. the creation of new point defects in harsh radiation environments [55, 10]. Therefore, the obtained sensitivity results should be interpreted with care. \(dE/ds\mid_{\rm Trap}\) might be of significant importance to model the dynamics in the detector response with changing temperature or increase in radiation damage to the crystals, e.g. in deep space missions.
Using the Bayesian calibrated NPSM, we are also able to numerically characterize the intrinsic resolution of our detector system. At higher spectral energies (\(E\gtrsim 400\) keV), we observed a significant contribution in the order of 60% to the total spectral resolution. At lower energies (\(10\) keV \(\lesssim E\lesssim 400\) keV), the intrinsic contribution is reduced and shows substantial distortions around the K-absorption edge for iodine at \(\approx 33\) keV. These observations are in good agreement with previous results [56, 57, 58, 59, 60, 28] and thereby substantiate the predictive power of our numerical model.
Most of the theoretical studies focused on the prediction of NPSMs themselves. In contrast, available numerical models to predict the full detector response are scarce, computational intense and complex due to the adopted multi-step approaches with offline convolution computations [57, 58, 61]. In this study, we present an alternative way to implement NPSMs and simulate the full spectral detector response to gamma-ray fields by directly evaluating the NPSM online during the Monte Carlo simulations. This approach saves considerable computation time and has the additional advantage of not having to store and analyze large files with secondary particle data. We have used this implementation to predict the full spectral detector response for additional radiation fields accounting for non-proportional scintillation effects. Validation measurements revealed a significant improvement in the simulated detector response compared to proportional scintillation models. However, there are still some model discrepancies, especially at the lower and higher end of the Compton edge domain. These discrepancies might be attributed to systematic uncertainties in the Monte Carlo mass model or deficiencies in the adopted NPSM. Sensitivity analysis performed in a previous study in conjunction with the prior prediction density results might indicate the latter [30].
While we focused our work on NaI(Tl) in electron and gamma-ray fields, the presented methodology can easily be extended to a much broader range of applications. First, it is general consensus that the light yield \(L\) as a function of the stopping power \(-dE/ds\) is, at least to a first approximation, independent of the ionizing particle type [31, 16]. Second, the adopted NPSM was validated with an extensive database of measured scintillation light yields for inorganic scintillators, i.e. BGO,
CaF\({}_{2}\)(Eu), CeBr\({}_{3}\), Cs(Tl), Cs(Na), LaBr\({}_{3}\)(Ce), LSO(Ce), NaI(Tl), SrI\({}_{2}\), SrI\({}_{2}\)(Eu), YAP(Ce) and YAG(Ce), among others [17, 18, 22]. From this it follows that, given a gamma-ray field with resolvable Compton edges can be provided, our methodology may in principle be applied to any combination of inorganic scintillator and ionizing radiation field, including protons, \(\alpha\)-particles and heavy ions.
In summary, we conclude that NPSMs are essential for accurate detector response simulations, especially for scintillators with large crystal volumes [25, 28, 31], e.g. in dark matter research, total absorption spectroscopy or remote sensing [1, 2, 3, 4, 5, 6, 7, 30]. The novel methodology presented in this study offers a reliable and cost-effective alternative to existing experimental methods to investigate non-proportional scintillation physics phenomena and perform accurate full detector response predictions with Bayesian calibrated NPSM. Moreover, this new technique does not require any additional measurement equipment and can therefore be applied for any inorganic scintillator spectrometer, also during detector deployment. This is especially attractive for applications, where the scintillator properties change in operation, e.g. due to radiation damage or temperature changes, but also for detector design and the development of novel scintillator materials. Last but not least, we can use the derived numerical models not only for NPSM inference but also to investigate and predict various scintillator properties, e.g. intrinsic resolution or Compton edge dynamics, and thereby contribute to a better understanding of the complex scintillation physics in inorganic scintillators.
## Methods
### Gamma-ray spectrometry
We performed gamma-ray spectrometric measurements in the calibration laboratory at the Paul Scherrer Institute (PSI) (inner room dimensions: 5.3 m \(\times\) 4.5 m \(\times\) 3 m). The adopted spectrometer consisted of four individual 10.2 cm \(\times\) 10.2 cm \(\times\) 40.6 cm prismatic NaI(Tl) scintillation crystals with the associated photomultiplier tubes and the electronic components embedded in a thermal-insulating and vibration-damping polyethylene foam protected by a rugged aluminum detector box (outer dimensions: 86 cm \(\times\) 60 cm \(\times\) 30 cm). We used seven different calibrated radionuclide sources (\({}^{57}\)Co, \({}^{60}\)Co, \({}^{88}\)Y, \({}^{109}\)Cd, \({}^{133}\)Ba, \({}^{137}\)Cs and \({}^{152}\)Eu) from the Eckert & Ziegler Nuclitec GmbH. We inserted these sources consisting of a radionuclide carrying ion exchange sphere (diameter 1 mm) embedded in a 25 mm \(\times\) 3 mm solid plastic disc into a custom low absorption source holder made out of a polylactide polymer (PLA) and placed this holder on a tripod in a fixed distance of 1 m to the detector front on the central detector \(x\)-axis. To measure the source-detector distances and to position the sources accurately, distance as well as positioning laser systems were used. A schematic depiction of the measurement setup is shown in Fig. 0(a).
Between radiation measurements, background measurements were performed regularly for background correction and gain stability checks. For all measurements, the air temperature as well as the air humidity in the calibration laboratory was controlled by an air conditioning unit and logged by an external sensor. The air temperature was set at 18.8(4) \({}^{\circ}\)C and the relative air humidity at 42(3)%. The ambient air pressure, which was also logged by the external sensor, fluctuated around 982(5) hPa.
During measurements, additional instruments and laboratory equipment were located in the calibration laboratory, e.g. shelves, a workbench, a source scanner or a boiler as shown in Fig. 0(a). The effect of these features on the detector response was carefully assessed in [30].
After postprocessing the spectral raw data according to the data reduction pipelines described in [30], we extracted the Compton edge spectral data from the net count rate spectra. The spectral domain of the Compton edge \(\mathcal{D}_{E}\) was defined as \(\mathcal{D}_{E}\coloneqq\{E:E_{\text{CE}}-3\cdot\sigma_{\text{tot}}\left(E_{ \text{CE}}\right)\leq E\leq E_{\text{FEP}}-2\cdot\sigma_{\text{tot}}\left(E_{ \text{FEP}}\right)\}\), where \(E\) is the spectral energy, \(\sigma_{\text{tot}}\) the energy dependent empirical resolution characterized by the standard deviation [30] and \(E_{\text{FEP}}\) the full energy peak associated with the Compton edge \(E_{\text{CE}}\). We compute \(E_{\text{CE}}\) according to the Compton scattering theory [10]:
\[E_{\text{CE}}\coloneqq E_{\text{FEP}}\left(1-\frac{1}{1+\frac{2E_{\text{FEP}}}{m_{ e}c^{2}}}\right) \tag{2}\]
where \(m_{e}c^{2}\) is defined as the energy equivalent electron mass. In this study, we consulted the ENDF/B-VIII.0 nuclear data file library [62] for nuclear decay related data as well as the Particle Data Group library [63] for fundamental particle properties.
To investigate the sensitivity of the selected Compton edge domain \(\mathcal{D}_{E}\) on the Bayesian inversion results, we performed a sensitivity analysis on \(\mathcal{D}_{E}\). Within the uncertainty bounds, the inversion results have proven to be insensitive to small alterations in \(\mathcal{D}_{E}\). All results of this sensitivity analysis are provided in Table S2 in the supplementary materials for this study.
It is important to note that, if not otherwise stated, uncertainties are provided as 1 standard deviation (SD) values in this study (coverage factor \(k=1\)). For more information about the radiation measurements and adopted data reduction pipelines, e.g. the energy and the empirical resolution calibration or the uncertainty estimations, the reader is referred to the attached supplementary materials and to [30].
### Monte Carlo simulations
We performed all simulations with the multi-purpose Monte Carlo code FLUKA version 4.2.1 [46] together with the graphical interface FLAIR version 3.1-15.1 [45]. We used the most accurate physics settings (precisio) featuring a high-fidelity fully coupled photon, electron and positron radiation transport for our source-detector configuration. In addition, this module accounts for secondary electron production and transport, Landau fluctuations as well as X-ray fluorescence, all of which are essential for an accurate description of non-proportional scintillation effects [16, 18, 23, 58]. Motivated by the range of the transported particles, lower kinetic energy transport thresholds were set to 1 keV for the scintillation crystals as well as the closest objects to the crystals, e.g. reflector, optical window and aluminum casing for the crystals. For the remaining model parts, the transport threshold was set to 10 keV to decrease the computational load while maintaining the high-fidelity transport simulation in the scintillation crystals. All simulations were performed on a computer cluster at the Paul Scherrer Institute utilizing parallel computing.
We scored the energy deposition events in the scintillation crystals individually on an event-by-event basis using the custom user routine _usreou_ together with the _detect_ card. The number of primaries was set to \(10^{7}\) for all simulations, which guaranteed a maximum relative statistical standard deviation \(\sigma_{\text{stat,sim,}k}/c_{\text{sim,}k}~{}<~{}1\%\) and a maximum relative variance of the sample variance \(\text{VOV}_{k}~{}<~{}0.01\%\) for all detector channels \(k\). More details on the simulation settings as well as on the postprocessing of the energy deposition data can be found in [30].
To implement the NPSM described by 1, we developed an additional user routine _comscw_. Similar to [1, 64], we weight each individual energy deposition event in the scintillator, point-like or along the charged particle track, by the scintillation light yield given in 1. The resulting simulated response is then rescaled to match the energy calibration models derived in [30].
### Surrogate modelling
We applied a custom machine learning trained vector-valued polynomial chaos expansion (PCE) surrogate model to emulate the spectral Compton edge detector response over \(\mathcal{D}_{E}\). PCE models are ideal candidates to emulate expensive-to-evaluate vector-valued computational models [43, 44]. As shown by [65, 66, 67], any function \(\mathbf{Y}=\mathcal{M}\left(\mathbf{X}\right)\) with the random input vector \(\mathbf{X}\in\mathbb{R}^{M\times 1}\) and random response vector \(\mathbf{Y}\in\mathbb{R}^{N\times 1}\) can be expanded as a so-called polynomial chaos expansion provided that \(\mathbb{E}[\|\mathbf{Y}\|^{2}]<\infty\):
\[\mathbf{Y}=\mathcal{M}\left(\mathbf{X}\right)=\sum_{\mathbf{\alpha}\in\mathbb{N}^{M}}\mathbf{a}_{ \mathbf{\alpha}}\Psi_{\mathbf{\alpha}}\left(\mathbf{X}\right) \tag{3}\]
where \(\mathbf{a}_{\mathbf{\alpha}}\coloneqq(a_{1,\mathbf{\alpha}},\ldots,a_{N,\mathbf{\alpha}})^{ \intercal}\in\mathbb{R}^{N\times 1}\) are the deterministic expansion coefficients, \(\mathbf{\alpha}\coloneqq(\alpha_{1},\ldots,\alpha_{M})^{\intercal}\in\mathbb{N}^ {M\times 1}\) the multi-indices storing the degrees of the univariate polynomials \(\psi_{\alpha}\) and \(\Psi_{\mathbf{\alpha}}\left(\mathbf{X}\right)\coloneqq\prod_{i=1}^{M}\psi_{\alpha_{i} }^{i}\left(X_{i}\right)\) the multivariate polynomial basis functions, which are orthonormal with respect to the joint probability density function \(f_{\mathbf{X}}\) of \(\mathbf{X}\), i.e. \(\left\langle\Psi_{\mathbf{\alpha}},\Psi_{\mathbf{\beta}}\right\rangle_{f_{\mathbf{X}}}= \delta_{\mathbf{\alpha},\mathbf{\beta}}\).
To reduce the computational burden, we combined the PCE model with principal component analysis (PCA) allowing us to characterize the main spectral Compton edge features of the response by means of a small number \(N^{\prime}\) of output variables compared to the original number \(N\) of spectral variables, i.e. \(N^{\prime}\ll N\)[43]. Similar to [68], we computed the emulated computational model response \(\hat{\mathcal{M}}_{\text{PCE}}\left(\mathbf{X}\right)\) in matrix form as:
\[\mathbf{Y}\approx\hat{\mathcal{M}}_{\text{PCE}}\left(\mathbf{X}\right)=\mathbf{\mu}_{\bm {Y}}+\text{diag}\left(\mathbf{\sigma}_{\mathbf{Y}}\right)\mathbf{\Phi}^{\prime}\mathbf{A} \mathbf{\Psi}\left(\mathbf{X}\right) \tag{4}\]
with \(\mathbf{\mu}_{\mathbf{Y}}\) and \(\mathbf{\sigma}_{\mathbf{Y}}\) being the mean and standard deviation of the random vector \(\mathbf{Y}\) and \(\mathbf{\Phi}^{\prime}\) the matrix containing the retained eigenvectors \(\mathbf{\phi}\) from the PCA, i.e. \(\mathbf{\Phi}^{\prime}\coloneqq(\mathbf{\phi}_{1},\ldots,\mathbf{\phi}_{N^{\prime}})\in \mathbb{R}^{N\times N^{\prime}}\). On the other hand, the vector \(\mathbf{\Psi}\left(\mathbf{X}\right)\in\mathbb{R}^{\text{card}(\mathcal{A}^{\star}) \times 1}\) and matrix \(\mathbf{A}\in\mathbb{R}^{N^{\prime}\times\text{card}(\mathcal{A}^{\star})}\) store the multivariate orthonormal polynomials and corresponding PCE coefficients, respectively. The union set \(\mathcal{A}^{\star}\coloneqq\bigcup_{j=1}^{N^{\prime}}\mathcal{A}_{j}\) includes the finite sets of multi indices \(\mathcal{A}_{j}\) for the \(N^{\prime}\) output variables following a specific truncation scheme.
We used a Latin hypercube experimental design \(\mathbf{\mathcal{X}}\in\mathbb{R}^{M\times K}\)[69, 70] with \(K=200\) instances sampled from a probabilistic model, which itself is defined by the model parameter priors described in the next subsection. The model response \(\mathbf{\mathcal{Y}}\in\mathbb{R}^{N\times K}\) for this design was then evaluated using the forward model described in the previous subsection. We adopted a hyperbolic truncation scheme \(\mathcal{A}_{j}\coloneqq\{\mathbf{\alpha}\in\mathbb{N}^{M}:(\sum_{i=1}^{M}\alpha_ {i}^{q})^{1/q}\leq p\}\) with \(p\) and \(q\) being hyperparameters defining the maximum degree for the associated polynomial and the q-norm, respectively. To compute the PCE coefficient matrix \(\mathbf{A}\), we applied adaptive least angle regression [71] and optimized the hyperparameters \(p\coloneqq\{1,2,\ldots,7\}\) and \(q\coloneqq\{0.5,0.6,\ldots,1\}\) using machine learning with a holdout partition of \(80\%\) and \(20\%\) for the training and test set, respectively. For the PCA truncation, we adopted a relative PCA-induced error \(\varepsilon_{\text{PCA}}\) of \(0.1\%\), i.e. \(N^{\prime}\coloneqq\min\{S\in\{1,\ldots,N\}:\sum_{j=1}^{S}\lambda_{j}/\sum_{j= 1}^{N}\lambda_{j}\geq 1-\varepsilon_{\text{PCA}}\}\) with \(\lambda\) being the eigenvalues from the PCA. The resulting generalization error of the surrogate model, characterized by the relative mean squared error over the test set, is \(<1\%\). All PCE computations were performed with the UQLab code [47] in combination with custom scripts to perform the PCA. More information about the PCE-PCA models as well as the PCE-PCA-based Sobol indices including detailed derivations are included in the supplementary materials attached to this study.
### Bayesian inference
Following the Bayesian framework [40], we approximate the measured spectral detector response \(\mathbf{y}\in\mathbb{R}^{N\times 1}\) with a probabilistic model combining the forward model \(\mathcal{M}(\mathbf{x}_{\mathcal{M}})\) and model parameters \(\mathbf{x}_{\mathcal{M}}\in\mathbb{R}^{M_{\mathcal{M}}\times 1}\) with an additive discrepancy term \(\mathbf{\varepsilon}\), i.e. \(\mathbf{y}\coloneqq\mathcal{M}(\mathbf{x}_{\mathcal{M}})+\mathbf{\varepsilon}\). For the discrepancy term \(\mathbf{\varepsilon}\), which characterizes the measurement noise and prediction error, we assume a Gaussian model \(\pi(\mathbf{\varepsilon}\mid\sigma_{\varepsilon}^{2})=\mathcal{N}(\mathbf{\varepsilon} \mid\mathbf{0},\sigma_{\varepsilon}^{2}\mathbb{I}_{N})\) with unknown discrepancy variance \(\sigma_{\varepsilon}^{2}\). On the other hand, as discussed in the previous subsection, we emulate the forward model \(\mathcal{M}(\mathbf{x}_{\mathcal{M}})\) with a PCE surrogate model \(\hat{\mathcal{M}}_{\text{PCE}}(\mathbf{x}_{\mathcal{M}})\). Consequently, we can compute the likelihood function as follows:
\[\pi\left(\mathbf{y}\mid\mathbf{x}\right)=\mathcal{N}\left(\mathbf{y}\mid\hat{\mathcal{M}} _{\text{PCE}}\left(\mathbf{x}_{\mathcal{M}}\right),\sigma_{\varepsilon}^{2} \mathbb{I}_{N}\right) \tag{5}\]
with \(\mathbf{x}\coloneqq[\,\mathbf{x}_{\mathcal{M}}\,,\,\sigma_{\varepsilon}^{2}]^{\intercal}\) and \(\mathbf{x}_{\mathcal{M}}\coloneqq[\,dE/ds\mid_{\text{Birks}},\,\eta_{e/h}\,,\, dE/ds\mid_{\text{Trap}}]^{\intercal}\). In combination with the prior density \(\pi\left(\mathbf{x}\right)\), we can then compute the posterior distribution using Bayes' theorem [42]:
\[\pi\left(\mathbf{x}\mid\mathbf{y}\right)=\frac{\pi\left(\mathbf{y}\mid\mathbf{x}\right)\pi\left( \mathbf{x}\right)}{\int_{\mathcal{D_{\mathbf{X}}}}\pi\left(\mathbf{y}\mid\mathbf{x}\right)\pi \left(\mathbf{x}\right)\,\mathrm{d}\mathbf{x}} \tag{6}\]
where we assume independent marginal priors, i.e. \(\pi\left(\mathbf{x}\right)=\prod_{i=1}^{M}\pi\left(x_{i}\right)\) with \(M=M_{\mathcal{M}}+1\). Following the principle of maximum entropy [52], we applied uniform marginal priors with the support defined by the available empirical data from previous studies [17, 18, 22]. A full list of these priors together with consulted studies is given in Table 1. Using the prior and posterior distributions, we can then also make predictions on future model response measurements \(\mathbf{y}^{*}\) leveraging the prior and posterior predictive densities:
\[\pi\left(\mathbf{y}^{*}\right) =\int_{\mathcal{D_{\mathbf{x}}}}\pi\left(\mathbf{y}^{*}\mid\mathbf{x}\right) \pi\left(\mathbf{x}\right)\,\mathrm{d}\mathbf{x} \tag{7a}\] \[\pi\left(\mathbf{y}^{*}\mid\mathbf{y}\right) =\int_{\mathcal{D_{\mathbf{x}}}}\pi\left(\mathbf{y}^{*}\mid\mathbf{x}\right) \pi\left(\mathbf{x}\mid\mathbf{y}\right)\,\mathrm{d}\mathbf{x} \tag{7b}\]
All Bayesian computations were performed with the UQLab code [47]. We applied an affine invariant ensemble algorithm [39] to perform Markov Chain Monte Carlo (MCMC) and thereby estimate the posterior distribution \(\pi\left(\mathbf{x}\mid\mathbf{y}\right)\). We used 10 parallel chains with \(2\times 10^{4}\) MCMC iterations per chain together with a 50% burn-in. The convergence and precision of the MCMC simulations were carefully assessed using standard diagnostics tools [42, 72]. We report a potential scale reduction factor \(\hat{R}<1.04\) and an effective sample size ESS \(\gg 400\) for all performed MCMC simulations. Additional trace and convergence plots for the individual parameters \(\mathbf{x}\) and point estimators, a full list of the Bayesian inversion results as well as a sensitivity analysis on the adopted Compton edge domain can be found in the attached supplementary materials.
### Intrinsic resolution modelling
We performed additional Monte Carlo simulations with different isotropic monoenergetic gamma-ray sources and included the NPSM with MAP point estimators to characterize the intrinsic resolution of our detector system for spectral energies 10 keV \(\leq E\leq\) 3200 keV. To account for the different spectral scales, we applied a non-uniform experimental design with a 2 keV spacing below 110 keV and 100 keV spacing above. We used then the extracted \(\sigma_{\text{intr}}\) from the individual full energy peaks to train a Gaussian Process (GP) regression model with [73]:
\[\sigma_{\text{intr}}\left(E\right)\sim\mathcal{GP}\left(\mathbf{f}\left(E \right)^{\intercal}\mathbf{\beta},\kappa\left(E,E^{\prime}\right)+\sigma_{ \mathcal{GP}}^{2}\delta_{E,E^{\prime}}\right) \tag{8}\]
where we applied a polynomial trend function of the second order, i.e. \(\mathbf{f}\left(E\right)\coloneqq\left(1,E,E^{2}\right)^{\intercal}\) and \(\mathbf{\beta}\coloneqq\left(\beta_{0},\beta_{1},\beta_{2}\right)^{\intercal}\), a homoscedastic noise model with the noise variance \(\sigma_{\mathcal{GP}}^{2}\) and Kronecker delta \(\delta_{E,E^{\prime}}\) as well as a Matern-3/2 covariance function \(\kappa\left(E,E^{\prime}\right)\coloneqq\left(1+\sqrt{3}\mid E-E^{\prime} \mid/\theta\right)\exp\left(-\sqrt{3}\mid E-E^{\prime}\mid/\theta\right)\)
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Prior & Unit & References \\ \hline \(dE/ds\mid_{\text{Birks}}\) & \(\mathcal{U}\left(150,450\right)\) & MeV cm\({}^{-1}\) & [17, 18, 22] \\ \(dE/ds\mid_{\text{Trap}}\) & \(\mathcal{U}\left(10,15\right)\) & MeV cm\({}^{-1}\) & [22] \\ \(\eta_{e/h}\) & \(\mathcal{U}\left(0.45,0.65\right)\) & – & [17, 18, 22] \\ \(\sigma_{z}^{2}\) & \(\mathcal{U}\left(0,550\right)^{1}\) & cps\({}^{2}\) Bq\({}^{-2}\) & – \\ \hline \hline \end{tabular} \({}^{1}\)Upper limit is defined as \(\langle c_{\text{exp}}^{2}\rangle\) with \(c_{\text{exp}}\) being the measured net count rate over \(\mathcal{D}_{E}\).
\end{table}
Table 1: Summary of the prior distribution. This table summarizes the adopted prior distributions and lists the consulted studies, which motivated the individual priors.
with the kernel scale \(\theta\). With the \(N\)-dimensional intrinsic data set \(\{\mathbf{E},\mathbf{\sigma}_{\text{intr}}\}\), we can then predict the intrinsic resolution \(\mathbf{\sigma}_{\text{intr}}^{*}\) for a new set of \(N^{*}\) spectral energies \(\mathbf{E}^{*}\) using the GP posterior predictive density as follows [73]:
\[\pi\left(\mathbf{\sigma}_{\text{intr}}^{*}\mid\mathbf{E}^{*},\mathbf{E},\mathbf{ \sigma}_{\text{intr}}\right) =\mathcal{N}\left(\mathbf{\sigma}_{\text{intr}}^{*}\mid\mathbf{\mu}_{ \mathcal{GP}},\mathbf{\Sigma}_{\mathcal{GP}}\right) \tag{9a}\] \[\mathbf{\mu}_{\mathcal{GP}} =\mathbf{F}_{*}^{\intercal}\hat{\mathbf{\beta}}+\mathbf{K}_{*}^{\intercal} \mathbf{K}^{-1}\left(\mathbf{\sigma}_{\text{intr}}-\mathbf{F}^{\intercal}\hat{ \mathbf{\beta}}\right)\] (9b) \[\mathbf{\Sigma}_{\mathcal{GP}} =\mathbf{K}_{**}-\mathbf{K}_{*}^{\intercal}\mathbf{K}^{-1} \mathbf{K}_{*}+\mathbf{U}^{\intercal}\left(\mathbf{F}\mathbf{K}^{-1}\mathbf{F }^{\intercal}\right)^{-1}\mathbf{U}\] (9c) \[\hat{\mathbf{\beta}} =\left(\mathbf{F}\mathbf{K}^{-1}\mathbf{F}^{\intercal}\right)^{ -1}\mathbf{F}\mathbf{K}^{-1}\mathbf{\sigma}_{\text{intr}}\] (9d) \[\mathbf{U} =\mathbf{F}_{*}-\mathbf{F}\mathbf{K}^{-1}\mathbf{K}_{*} \tag{9e}\]
with the matrices \(\mathbf{F}=\mathbf{f}\left(\mathbf{E}\right)\in\mathbb{R}^{3\times N}\), \(\mathbf{F}_{*}=\mathbf{f}\left(\mathbf{E}^{*}\right)\in\mathbb{R}^{3\times N^{*}}\), \(\mathbf{K}=\kappa\left(\mathbf{E},\mathbf{E}\right)+\sigma_{\mathcal{GP}}^{2}\mathbb{ I}_{N}\in\mathbb{R}^{N\times N}\), \(\mathbf{K}_{*}=\kappa\left(\mathbf{E},\mathbf{E}^{*}\right)\in\mathbb{R}^{N\times N^{*}}\) and \(\mathbf{K}_{**}=\kappa\left(\mathbf{E}^{*},\mathbf{E}^{*}\right)\in\mathbb{R}^{N^{*} \times N^{*}}\).
To account for the different spectral scales, we trained two GP models, one for 10 keV \(\leq E\leq\) 90 keV and the other one for 90 keV \(\leq E\leq\) 3200 keV, using the MATLAB(r) code. For both models, we applied 5-fold cross-validation in combination with Bayesian optimization to determine the GP hyperparameters \(\sigma_{\mathcal{GP}}^{2}\) and \(\theta\).
Supplementary information.The online version contains supplementary materials.
Acknowledgments.We gratefully acknowledge the technical support by Dominik Werthmuller for the execution of the Monte Carlo simulations on the computer cluster at the Paul Scherrer Institute. We also thank Eduardo Gardenali Yukihara for helpful discussions and advices. Further, we would like to express our gratitude to the Swiss Armed Forces and the National Emergency Operations Centre (NEOC) for providing the detector system. This research has been supported in part by the Swiss Federal Nuclear Safety Inspectorate (grant no. CTR00491).
Competing interests.The authors declare no competing interests.
Author contributions.D.B. designed the study, supervised the project, performed the measurements, simulations, data postprocessing and wrote the manuscript. F.C. significantly contributed to the implementation of the NPSM in FLUKA. G.B. supervised the project. S.M. acquired the project funding. All authors contributed to the completion of the manuscript.
Data availability.The radiation measurement raw data presented herein are deposited on the ETH Research Collection repository: [https://doi.org/10.3929/ethz-b-000528920](https://doi.org/10.3929/ethz-b-000528920) [74]. Additional data sets related to this study are available from the corresponding author upon reasonable request.
Code availability.The FLUKA code [46] used for Monte Carlo radiation transport and detector response simulations is available at [https://fluka.cern/](https://fluka.cern/). We adopted the graphical user interphase FLAIR [45], freely available at [https://flair.web.cern.ch/flair/](https://flair.web.cern.ch/flair/), to setup the FLUKA input files and create the mass model figures. The custom FLUKA user routines adopted in the Monte Carlo simulations are deposited on the ETH Research Collection repository: [https://doi.org/10.3929/ethz-b-000595727](https://doi.org/10.3929/ethz-b-000595727) [75]. Data processing, machine learning computation and figure creation was performed by the MATLAB(r) code in combination with the open-source toolbox UQLab [47] available at [https://www.uqlab.com/](https://www.uqlab.com/).
## References
* [1] Cano-Ott, D. _et al._ Monte Carlo simulation of the response of a large NaI(Tl)total absorption spectrometer for \(\beta\)-decay studies. _Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**430** (2-3), 333-347 (1999). [https://doi.org/10.1016/S0168-9002](https://doi.org/10.1016/S0168-9002)(99)00217-X.
* [2] Bernabei, R. _et al._ First results from DAMA/LIBRA and the combined results with DAMA/NaI. _The European Physical Journal C_**56** (3), 333-355 (2008). [https://doi.org/10.1140/EPJC/S10052-008-0662-Y](https://doi.org/10.1140/EPJC/S10052-008-0662-Y).
* [3] Adhikari, G. _et al._ An experiment to search for dark-matter interactions using sodium iodide detectors. _Nature_**564** (7734), 83-86 (2018). [https://doi.org/10.1038/s41586-018-0739-1](https://doi.org/10.1038/s41586-018-0739-1).
* [4] Paynter, J., Webster, R. & Thrane, E. Evidence for an intermediate-mass black hole from a gravitationally lensed gamma-ray burst. _Nature Astronomy_**5** (6), 560-568 (2021). [https://doi.org/10.1038/s41550-021-01307-1](https://doi.org/10.1038/s41550-021-01307-1).
* [5] Yang, J. _et al._ A long-duration gamma-ray burst with a peculiar origin. _Nature_**612** (7939), 232-235 (2022). [https://doi.org/10.1038/s41586-022-05403-8](https://doi.org/10.1038/s41586-022-05403-8).
* [6] Lawrence, D. J. _et al._ Global elemental maps of the moon: The Lunar Prospector gamma-ray spectrometer. _Science_**281** (5382), 1484-1489 (1998). [https://doi.org/10.1126/science.281.5382.1484](https://doi.org/10.1126/science.281.5382.1484).
* [7] Trombka, J. I. _et al._ The elemental composition of asteroid 433 Eros: Results of the NEAR-Shoemaker x-ray spectrometer. _Science_**289** (5487), 2101-2105 (2000). [https://doi.org/10.1126/science.289.5487.2101](https://doi.org/10.1126/science.289.5487.2101).
* [8] Bashkirov, V. A. _et al._ Novel scintillation detector design and performance for proton radiography and computed tomography. _Medical Physics_**43** (2), 664-674 (2016). [https://doi.org/10.1118/1.4939255](https://doi.org/10.1118/1.4939255).
* [9] Curtis, J. C. _et al._ Simulation and validation of the Mobile Urban Radiation Search (MURS) gamma-ray detector response. _Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**954**, 161128 (2020). [https://doi.org/10.1016/j.nima.2018.08.087](https://doi.org/10.1016/j.nima.2018.08.087).
* [10] Knoll, G. F. _Radiation Detection and Measurement_ 4th edn (John Wiley & Sons, New York, USA, 2010).
* [11] Prettyman, T. H. _et al._ Dawn's gamma ray and neutron detector. _Space Science Reviews_**163** (1-4), 371-459 (2011). [https://doi.org/10.1007/s11214-011-9862-0](https://doi.org/10.1007/s11214-011-9862-0).
* [12] Valentine, J. D. & Rooney, B. D. Design of a Compton spectrometer experiment for studying scintillator non-linearity and intrinsic energy resolution. _Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**353** (1-3), 37-40 (1994). [https://doi.org/10.1016/0168-9002](https://doi.org/10.1016/0168-9002)(94)91597-0.
* [13] Engelkemeir, D. Nonlinear Response of NaI(Tl) to Photons. _Review of Scientific Instruments_**27** (8), 589-591 (1956). [https://doi.org/10.1063/1.1715643](https://doi.org/10.1063/1.1715643).
* [14] Saito, K. & Moriuchi, S. Monte Carlo calculation of accurate response functions for a NaI(Tl) detector for gamma rays. _Nuclear Instruments and Methods_**185** (1-3), 299-308 (1981). [https://doi.org/10.1016/0029-554X](https://doi.org/10.1016/0029-554X)(81)91225-8.
* [15] Gardner, R. P. & Sood, A. A Monte Carlo simulation approach for generating NaI detector response functions (DRFs) that accounts for non-linearity and variable flat continua. _Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms_**213**, 87-99 (2004). [https://doi.org/10.1016/S0168-583X](https://doi.org/10.1016/S0168-583X)(03)01539-8.
* [16] Moses, W. W., Payne, S. A., Choong, W. S., Hull, G. & Reutter, B. W. Scintillator non-proportionality: Present understanding and future challenges. _IEEE Transactions on Nuclear Science_**55** (3), 1049-1053 (2008). [https://doi.org/10.1109/TNS.2008.922802](https://doi.org/10.1109/TNS.2008.922802).
* [17] Payne, S. A. _et al._ Nonproportionality of scintillator detectors: Theory and experiment. _IEEE Transactions on Nuclear Science_**56** (4), 2506-2512 (2009). [https://doi.org/10.1109/TNS.2009.2023657](https://doi.org/10.1109/TNS.2009.2023657).
* [18] Payne, S. A. _et al._ Nonproportionality of scintillator detectors: Theory and experiment. II. _IEEE Transactions on Nuclear Science_**58** (6), 3392-3402 (2011). [https://doi.org/10.1109/TNS.2011.2167687](https://doi.org/10.1109/TNS.2011.2167687).
* [19] Moses, W. W. _et al._ The origins of scintillator non-proportionality. _IEEE Transactions on Nuclear Science_**59** (5), 2038-2044 (2012). [https://doi.org/10.1109/TNS.2012.2186463](https://doi.org/10.1109/TNS.2012.2186463).
* [20] Vasil'Ev, A. N. & Gektin, A. V. Multiscale approach to estimation of scintillation characteristics. _IEEE Transactions on Nuclear Science_**61** (1), 235-245 (2014). [https://doi.org/10.1109/TNS.2013.2282117](https://doi.org/10.1109/TNS.2013.2282117).
* [21] Khodyuk, I. V. & Dorenbos, P. Trends and patterns of scintillator nonproportionality. _IEEE Transactions on Nuclear Science_**59** (6), 3320-3331 (2012). [https://doi.org/10.1109/TNS.2012.2221094](https://doi.org/10.1109/TNS.2012.2221094).
* [22] Payne, S. A., Hunter, S., Ahle, L., Cherepy, N. J. & Swanberg, E. Nonproportionality of scintillator detectors. III. Temperature dependence studies. _IEEE Transactions on Nuclear Science_**61** (5), 2771-2777 (2014). [https://doi.org/10.1109/TNS.2014.2343572](https://doi.org/10.1109/TNS.2014.2343572).
* [23] Payne, S. A. Nonproportionality of scintillator detectors. IV. Resolution contribution from delta-rays. _IEEE Transactions on Nuclear Science_**62** (1), 372-380 (2015). [https://doi.org/10.1109/TNS.2014.2387256](https://doi.org/10.1109/TNS.2014.2387256).
* [24] Beck, P. R. _et al._ Nonproportionality of Scintillator Detectors. V. Comparing the Gamma and Electron Response. _IEEE Transactions on Nuclear Science_**62** (3), 1429-1436 (2015). [https://doi.org/10.1109/TNS.2015.2414357](https://doi.org/10.1109/TNS.2015.2414357).
* [25] Zerby, C. D., Meyer, A. & Murray, R. B. Intrinsic line broadening in NaI(Tl) gamma-ray spectrometers. _Nuclear Instruments and Methods_**12** (C), 115-123 (1961). [https://doi.org/10.1016/0029-554X](https://doi.org/10.1016/0029-554X)(61)90119-7.
* [26] Hill, R. & Collinson, A. J. The relationships between light output and energy resolution in thallium activated sodium iodide crystals. _Nuclear Instruments and Methods_**44** (2), 245-252 (1966). [https://doi.org/10.1016/0029-554X](https://doi.org/10.1016/0029-554X)(66)90157-1.
* [27] Prescott, J. R. & Narayan, G. H. Electron responses and intrinsic line-widths in NaI(Tl). _Nuclear Instruments and Methods_**75** (1), 51-55 (1969). [https://doi.org/10.1016/0029-554X](https://doi.org/10.1016/0029-554X)(69)90648-X.
* [28] Valentine, J. D. The light yield nonproportionality component of scintillator energy resolution. _IEEE Transactions on Nuclear Science_**45** (3), 512-517 (1998). [https://doi.org/10.1109/23.682438](https://doi.org/10.1109/23.682438).
* [29] Shi, H. X., Chen, B. X., Li, T. Z. & Yun, D. Precise Monte Carlo simulation of gamma-ray response functions for an NaI(Tl) detector. _Applied Radiation and Isotopes_**57** (4), 517-524 (2002). [https://doi.org/10.1016/S0969-8043](https://doi.org/10.1016/S0969-8043)(02)00140-9.
* [30] Breitenmoser, D., Butterweck, G., Kasprzak, M. M., Yukihara, E. G. & Mayer, S. Experimental and Simulated Spectral Gamma-Ray Response of a NaI(Tl) Scintillation Detector used in Airborne Gamma-Ray Spectrometry. _Advances in Geosciences_**57**, 89-107 (2022). [https://doi.org/10.5194/ADGEO-57-89-2022](https://doi.org/10.5194/ADGEO-57-89-2022).
* [31] Murray, R. B. & Meyer, A. Scintillation Response of Activated Inorganic Crystals to Various Charged Particles. _Physical Review_**122** (3), 815-826 (1961). [https://doi.org/10.1103/PhysRev.122.815](https://doi.org/10.1103/PhysRev.122.815).
* [32] Hill, R. & Collinson, A. J. L. The effect on the scintillation efficiency of NaI(Tl) of changes in the thallium concentration and strain: I. Experimental. _British Journal of Applied Physics_**17** (11), 1377-1383 (1966). [https://doi.org/10.1088/0508-3443/17/11/301](https://doi.org/10.1088/0508-3443/17/11/301).
* [33] Swiderski, Moszynski, M., Czarnacki, W., Syntfeld-Kazuch, A. & Gierlik, M. Non-proportionality and energy resolution of NaI(Tl) at wide temperature range (-40\({}^{\circ}\)C to +23\({}^{\circ}\)C). _IEEE Nuclear Science Symposium Conference Record_**2**, 1122-1128 (2006). [https://doi.org/10.1109/NSSMIC.2006.356043](https://doi.org/10.1109/NSSMIC.2006.356043).
* [34] Hull, G. _et al._ Measurements of NaI(Tl) electron response: Comparison of different samples. _IEEE Transactions on Nuclear Science_**56** (1), 331-336 (2009). [https://doi.org/10.1109/TNS.2008.2009876](https://doi.org/10.1109/TNS.2008.2009876).
* [35] Porter, F. T., Freedman, M. S., Wagner, F. & Sherman, I. S. Response of NaI, anthracene and plastic scintillators to electrons and the problems of detecting low energy electrons with scintillation counters. _Nuclear Instruments and Methods_**39** (1), 35-44 (1966). [https://doi.org/10.1016/0029-554X](https://doi.org/10.1016/0029-554X)(66)90041-3.
* [36] Wayne, L. R., Heindl, W. A., Hink, P. L. & Rothschild, R. E. Response of NaI(Tl) to X-rays and electrons. _Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**411** (2-3), 351-364 (1998). [https://doi.org/10.1016/S0168-9002](https://doi.org/10.1016/S0168-9002)(98)00193-4.
* [37] Choong, W. S. _et al._ Design of a facility for measuring scintillator non-proportionality. _IEEE Transactions on Nuclear Science_**55** (3), 1753-1758 (2008). [https://doi.org/10.1109/TNS.2008.921491](https://doi.org/10.1109/TNS.2008.921491).
* [38] Khodyuk, I. V., Rodnyi, P. A. & Dorenbos, P. Nonproportional scintillation response of NaI:Tl to low energy x-ray photons and electrons. _Journal of Applied Physics_**107** (11), 113513 (2010). [https://doi.org/10.1063/1.3431009](https://doi.org/10.1063/1.3431009).
* [39] Goodman, J. & Weare, J. Ensemble samplers with affine invariance. _Communications in Applied Mathematics and Computational Science_**5** (1), 65-80 (2010). [https://doi.org/10.2140/CAMCOS.2010.5.65](https://doi.org/10.2140/CAMCOS.2010.5.65).
* [40] Kennedy, M. C. & O'Hagan, A. Bayesian calibration of computer models. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_**63** (3), 425-464 (2001). [https://doi.org/10.1111/1467-9868.00294](https://doi.org/10.1111/1467-9868.00294).
* [41] Trotta, R. Bayes in the sky: Bayesian inference and model selection in cosmology. _Contemporary Physics_**49** (2), 71-104 (2008). [https://doi.org/10.1080/00107510802066753](https://doi.org/10.1080/00107510802066753).
* [42] Gelman, A. _et al._ _Bayesian Data Analysis_ 3rd edn (Chapman and Hall/CRC, New York, USA, 2013).
* [43] Blatman, G. & Sudret, B. Sparse polynomial chaos expansions of vector-valued response quantities. _11th International Conference on Structural Safety and Reliability (ICOSSAR 2013)_ (2013). [https://doi.org/10.3929/ethz-a-010057918](https://doi.org/10.3929/ethz-a-010057918).
* [44] Torre, E., Marelli, S., Embrechts, P. & Sudret, B. Data-driven polynomial chaos expansion for machine learning regression. _Journal of Computational Physics_**388**, 601-623 (2019). [https://doi.org/10.1016/j.jcp.2019.03.039](https://doi.org/10.1016/j.jcp.2019.03.039).
* [45] Vlachoudis, V. Flair: A powerful but user friendly graphical interface for FLUKA. _International Conference on Mathematics, Computational Methods & Reactor Physics (M&C 2009)_ (2009).
* [46] Ahdida, C. _et al._ New Capabilities of the FLUKA Multi-Purpose Code. _Frontiers in Physics_**9**, 788253 (2022). [https://doi.org/10.3389/fphy.2021.788253](https://doi.org/10.3389/fphy.2021.788253).
* [47] Marelli, S. & Sudret, B. UQLab: A Framework for Uncertainty Quantification in Matlab. _Proceedings of the 2nd International Conference on Vulnerability and Risk Analysis and Management, ICVRAM 2014 and the 6th International Symposium on Uncertainty Modeling_ 2554-2563 (2014). [https://doi.org/10.1061/9780784413609.257](https://doi.org/10.1061/9780784413609.257).
* [48] Rodnyi, P. A. _Physical Processes in Inorganic Scintillators_ 1st edn (CRC Press, New York, USA, 1997).
* [49] Lecoq, P., Gektin, A. & Korzhik, M. _Inorganic Scintillators for Detector Systems_ 2nd edn. Particle Acceleration and Detection (Springer International Publishing, Cham, Switzerland, 2017).
* [50] Onsager, L. Initial Recombination of Ions. _Physical Review_**54** (8), 554-557 (1938). [https://doi.org/10.1103/PhysRev.54.554](https://doi.org/10.1103/PhysRev.54.554).
* [51] Birks, J. B. Scintillations from Organic Crystals: Specific Fluorescence and Relative Response to Different Radiations. _Proceedings of the Physical Society. Section A_**64** (10), 874-877 (1951). [https://doi.org/10.1088/0370-1298/64/10/303](https://doi.org/10.1088/0370-1298/64/10/303).
* [52] Jaynes, E. T. Information theory and statistical mechanics. _Physical Review_**106** (4), 620-630 (1957). [https://doi.org/10.1103/PhysRev.106.620](https://doi.org/10.1103/PhysRev.106.620).
* [53] Sudret, B. Global sensitivity analysis using polynomial chaos expansions. _Reliability Engineering & System Safety_**93** (7), 964-979 (2008). [https://doi.org/10.1016/J.RESS.2007.04.002](https://doi.org/10.1016/J.RESS.2007.04.002).
* [54] Bearden, J. A. & Burr, A. F. Reevaluation of X-Ray Atomic Energy Levels. _Reviews of Modern Physics_**39** (1), 125-142 (1967). [https://doi.org/10.1103/RevModPhys.39.125](https://doi.org/10.1103/RevModPhys.39.125).
* [55] Zhu, R. Y. Radiation damage in scintillating crystals. _Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**413** (2-3), 297-311 (1998). [https://doi.org/10.1016/S0168-9002](https://doi.org/10.1016/S0168-9002)(98)00498-7.
* [56] Narayan, G. H. & Prescott, J. R. The Contribution of the NaI(Tl) Crystal to the Total Linewidth of NaI(Tl) Scintillation Counters. _IEEE Transactions on Nuclear Science_**15** (3), 162-166 (1968). [https://doi.org/10.1109/TNS.1968.4324933](https://doi.org/10.1109/TNS.1968.4324933).
* [57] Mengesha, W. & Valentine, J. D. Benchmarking NaI(Tl) electron energy resolution measurements. _IEEE Transactions on Nuclear Science_**49** (5), 2420-2426 (2002). [https://doi.org/10.1109/TNS.2002.803890](https://doi.org/10.1109/TNS.2002.803890).
* [58] Moszynski, M. _et al._ Intrinsic energy resolution of NaI(Tl). _Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**484** (1-3), 259-269 (2002). [https://doi.org/10.1016/S0168-9002](https://doi.org/10.1016/S0168-9002)(01)01964-7.
* [59] Swiderski, L. _et al._ Response of doped alkali iodides measured with gamma-ray absorption and Compton electrons. _Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**705**, 42-46 (2013). [https://doi.org/10.1016/J.NIMA.2012.11.188](https://doi.org/10.1016/J.NIMA.2012.11.188).
* [60] Moszynski, M. _et al._ Energy resolution of scintillation detectors. _Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**805**, 25-35 (2016). [https://doi.org/10.1016/j.nima.2015.07.059](https://doi.org/10.1016/j.nima.2015.07.059).
* [61] Rooney, B. D. & Valentine, J. D. Scintillator light yield nonproportionality: Calculating photon response using measured electron response. _IEEE Transactions on Nuclear Science_**44** (3), 509-516 (1997). [https://doi.org/10.1109/23.603702](https://doi.org/10.1109/23.603702).
* [62] Brown, D. A. _et al._ ENDF/B-VIII.0: The 8th Major Release of the Nuclear Reaction Data Library with CIELO-project Cross Sections, New Standards and Thermal Scattering Data. _Nuclear Data Sheets_**148**, 1-142 (2018). [https://doi.org/10.1016/j.nds.2018.02.001](https://doi.org/10.1016/j.nds.2018.02.001).
* [63] Workman, R. L. _et al._ Review of Particle Physics. _Progress of Theoretical and Experimental Physics_**2022** (8) (2022). [https://doi.org/10.1093/PTEP/PTAC097](https://doi.org/10.1093/PTEP/PTAC097).
* [64] Rasco, B. C. _et al._ The nonlinear light output of NaI(Tl) detectors in the Modular Total Absorption Spectrometer. _Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_**788**, 137-145 (2015). [https://doi.org/10.1016/J.NIMA.2015.03.087](https://doi.org/10.1016/J.NIMA.2015.03.087).
* [65] Xiu, D. & Em Karniadakis, G. The Wiener-Askey polynomial chaos for stochastic differential equations. _SIAM Journal on Scientific Computing_**24** (2), 619-644 (2002). [https://doi.org/10.1137/S1064827501387826](https://doi.org/10.1137/S1064827501387826).
* [66] Soize, C. & Ghanem, R. Physical systems with random uncertainties: Chaos representations with arbitrary probability measure. _SIAM Journal on Scientific Computing_**26** (2), 395-410 (2005). [https://doi.org/10.1137/S1064827503424505](https://doi.org/10.1137/S1064827503424505).
* [67] Ernst, O. G., Mugler, A., Starkloff, H. J. & Ullmann, E. On the convergence of generalized polynomial chaos expansions. _ESAIM: Mathematical Modelling and Numerical Analysis_**46** (2), 317-339 (2012). [https://doi.org/10.1051/m2an/2011045](https://doi.org/10.1051/m2an/2011045).
* [68] Wagner, P. R., Fahrni, R., Klippel, M., Frangi, A. & Sudret, B. Bayesian calibration and sensitivity analysis of heat transfer models for fire insulation panels. _Engineering Structures_**205**, 110063 (2020). [https://doi.org/10.1016/j.engstruct.2019.110063](https://doi.org/10.1016/j.engstruct.2019.110063).
* [69] McKay, M. D., Beckman, R. J. & Conover, W. J. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. _Technometrics_**21** (2), 239-245 (1979). [https://doi.org/10.2307/1268522](https://doi.org/10.2307/1268522).
* [70] Choi, S. K., Grandhi, R. V., Canfield, R. A. & Pettit, C. L. Polynomial chaos expansion with latin hypercube sampling for estimating response variability. _AIAA Journal_**42** (6), 1191-1198 (2004). [https://doi.org/10.2514/1.2220](https://doi.org/10.2514/1.2220).
* [71] Blatman, G. & Sudret, B. Adaptive sparse polynomial chaos expansion based on least angle regression. _Journal of Computational Physics_**230** (6), 2345-2367 (2011). [https://doi.org/10.1016/j.jcp.2010.12.021](https://doi.org/10.1016/j.jcp.2010.12.021).
* [72] Brooks, S. P. & Gelman, A. General Methods for Monitoring Convergence of Iterative Simulations. _Journal of Computational and Graphical Statistics_**7** (4), 434-455 (1998). [https://doi.org/10.2307/1390675](https://doi.org/10.2307/1390675).
* [73] Rasmussen, C. E. & Williams, C. K. I. _Gaussian Processes for Machine Learning_ (MIT Press, Cambridge, Massachusetts, 2006).
* [74] Breitenmoser, D., Butterweck, G., Kasprzak, M. M., Yukihara, E. G. & Mayer, S. Laboratory based Spectral Measurement Data of the Swiss Airborne Gamma-ray Spectrometer RLL. _ETH Research Collection_ (2022). [https://doi.org/10.3929/ethz-b-000528920](https://doi.org/10.3929/ethz-b-000528920).
* [75] Breitenmoser, D., Cerutti, F., Butterweck, G., Kasprzak, M. M. & Mayer, S. FLUKA user routines for non-proportional scintillation simulations. _ETH Research Collection_ (2023). [https://doi.org/10.3929/ETHZ-B-000595727](https://doi.org/10.3929/ETHZ-B-000595727).
**Supplementary Materials for**
**Emulator-based Bayesian Inference on Non-Proportional Scintillation Models**
**by Compton-Edge Probing**
David Breitenmoser\({}^{1,2,*}\), Francesco Cerutti\({}^{3}\), Gernot Butterweck\({}^{1}\), Malgorzata Magdalena Kasprzak\({}^{1}\), Sabine Mayer\({}^{1}\)
\({}^{1}\)Department of Radiation Safety and Security, Paul Scherrer Institute (PSI), Forschungstrasse 111, Villigen PSI, 5232, Switzerland
\({}^{2}\)Department of Physics, Swiss Federal Institute of Technology (ETH), Otto-Stern-Weg 5, Zurich, 8093, Switzerland
\({}^{3}\)European Organization for Nuclear Research (CERN), Esplanade des Particules 1, Geneva, 1211, Switzerland
\({}^{*}\)Corresponding author: David Breitenmoser ([email protected], ORCID: 0000-0003-0339-6592)
**The PDF includes:**
Materials and Methods
Figs. S1-S10
Tables S1-S3
References
## Materials and Methods
### Adaptive sparse PCE-PCA surrogate model
Here, based on previous work [1, 2, 3], we derive our custom vector-valued adaptive sparse polynomial chaos expansion surrogate model (PCE), which we combine with principal component analysis (PCA).
We start with the PCA model part. Consider our vector-valued model response as a random vector \(\mathbf{Y}\in\mathbb{R}^{N\times 1}\) with mean \(\mathbf{\mu_{Y}}\), standard deviation \(\mathbf{\sigma_{Y}}\) and correlation matrix \(\mathbf{\Sigma_{Y}}\coloneqq\mathrm{corr}\left(\mathbf{Y}\right)=\mathbb{E}\left[\mathbf{ Y}^{*}\left(\mathbf{Y}^{*}\right)^{\intercal}\right]\). Note that, in contrast to previous studies [1, 2, 3], we standardize our model response \(\mathbf{Y}\) with \(\mathbf{Y}^{*}\coloneqq\mathrm{diag}\left(\mathbf{\sigma_{Y}}\right)^{-1}\left(\mathbf{Y} -\mathbf{\mu_{Y}}\right)\) to account for the differences in the variance of the individual response variables. We can then perform an eigenvalue decomposition of the correlation matrix \(\mathbf{\Sigma_{Y}}\) with the eigenvalues \(\lambda_{j}\) and eigenvectors \(\mathbf{\phi}_{j}\coloneqq(\phi_{1},\ldots,\phi_{N})^{\intercal}\) satisfying \(\mathbf{\Sigma_{Y}}\mathbf{\phi}_{j}=\lambda_{j}\mathbf{\phi}_{j}\) for \(j=1,\ldots,N\). Since \(\mathbf{\Sigma_{Y}}\) is symmetric and positive definite, the eigenvectors define an orthonormal basis \(\mathbb{R}^{N}=\mathrm{span}(\{\mathbf{\phi}_{j}\}_{j=1}^{N})\) and we can perform an orthogonal transformation of our random vectors \(\mathbf{Y}^{*}\) as follows:
\[\mathbf{Z}=\mathbf{\Phi}^{\intercal}\mathbf{Y}^{*}\] (S1)
with the orthonormal matrix \(\mathbf{\Phi}\coloneqq(\mathbf{\phi}_{1},\ldots,\mathbf{\phi}_{N})\in\mathbb{R}^{N\times N}\), where \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{N}\). We call the transformed vectors \(\mathbf{Z}\coloneqq(Z_{1},\ldots,Z_{N})^{\intercal}\) the principal components of \(\mathbf{Y}^{*}\). Once we get the principal components, we can transform them back to the original response variable space with
\[\mathbf{Y}=\mathbf{\mu_{Y}}+\mathrm{diag}\left(\mathbf{\sigma_{Y}}\right)\sum_{j=1}^{N}Z_ {j}\mathbf{\phi}_{j}\] (S2)
To reduce the dimensions of our problem, we retain only \(N^{\prime}\) principal components with the highest variance and thereby approximate our random vector \(\mathbf{Y}\) as
\[\mathbf{Y}\approx\mathbf{\mu_{Y}}+\mathrm{diag}\left(\mathbf{\sigma_{Y}}\right)\sum_{j=1 }^{N^{\prime}}Z_{j}\mathbf{\phi}_{j}\] (S3)
where we choose \(N^{\prime}\coloneqq\min\{S\in\{1,\ldots,N\}:\sum_{j=1}^{S}\lambda_{j}/\sum_{ j=1}^{N}\lambda_{j}\geq 1-\varepsilon_{PCA}\}\) with a prescribed approximation error \(\varepsilon_{\mathrm{PCA}}\).
For the PCE model part, we start again with the polynomial chaos expansion of the model response \(\mathcal{M}\left(\mathbf{X}\right)\) with the random input vector \(\mathbf{X}\in\mathbb{R}^{M\times 1}\) as described in Eq. 3 in the main study:
\[\mathbf{Y}=\sum_{\mathbf{\alpha}\in\mathbb{N}^{M}}\mathbf{a_{\alpha}}\Psi_{\mathbf{\alpha}} \left(\mathbf{X}\right)\] (S4)
where \(\mathbf{a_{\alpha}}\coloneqq(a_{1,\mathbf{\alpha}},\ldots,a_{N,\mathbf{\alpha}})^{ \intercal}\in\mathbb{R}^{N\times 1}\) are the deterministic expansion coefficients, \(\mathbf{\alpha}\coloneqq(\alpha_{1},\ldots,\alpha_{M})^{\intercal}\in\mathbb{N} ^{M\times 1}\) the multi-indices storing the degrees of the univariate polynomials \(\psi_{\alpha}\) and \(\Psi_{\mathbf{\alpha}}\left(\mathbf{X}\right)\coloneqq\prod_{i=1}^{M}\psi_{\alpha_{i} }^{i}\left(X_{i}\right)\) the multivariate polynomial basis functions, which are orthonormal with respect to the joint probability density function \(f_{\mathbf{X}}\) of \(\mathbf{X}\), i.e. \(\left\langle\Psi_{\mathbf{\alpha}},\Psi_{\mathbf{\beta}}\right\rangle_{f_{\mathbf{X}}}= \delta_{\mathbf{\alpha},\mathbf{\beta}}\). For computational purposes, we truncate the PCE series by adopting a truncation set \(\mathcal{A}_{j}\) for the multi-index \(\mathbf{\alpha}\) of each individual response variable \(j=1,\ldots,N\) resulting in:
\[Y_{j}\approx\sum_{\mathbf{\alpha}\in\mathcal{A}_{j}}a_{j,\mathbf{\alpha}}\Psi_{\mathbf{ \alpha}}\left(\mathbf{X}\right)\] (S5)
For the truncation, we can use a hyperbolic truncation scheme defining the multi-index set as \(\mathcal{A}_{j}\coloneqq\{\mathbf{\alpha}\in\mathbb{N}^{M}:(\sum_{j=1}^{M}\alpha_{ j}^{q})^{1/q}\leq p\}\) with \(p\) and \(q\) defining the maximum degree for the associated polynomial and the q-norm, respectively.
To reduce the computational burden, we can now combine these results and perform the PCE not in the original response variable space but in the truncated principal component space. For that, we insert Eq. S5 in Eq. S3:
\[\mathbf{Y}\approx\hat{\mathcal{M}}\left(\mathbf{X}\right)=\mathbf{\mu}_{\mathbf{Y}}+\mathrm{diag} \left(\mathbf{\sigma_{Y}}\right)\sum_{j=1}^{N^{\prime}}\left(\sum_{\mathbf{\alpha}\in \mathcal{A}_{j}}a_{j,\mathbf{\alpha}}\Psi_{\mathbf{\alpha}}\left(\mathbf{X}\right)\right) \mathbf{\phi}_{j}\] (S6)
which we can rearrange by introducing the union set \(\mathcal{A}^{\star}\coloneqq\bigcup_{j=1}^{N^{\prime}}\mathcal{A}_{j}\) to:
\[\mathbf{Y}\approx\hat{\mathcal{M}}\left(\mathbf{X}\right)=\mathbf{\mu}_{\mathbf{Y}}+\mathrm{ diag}\left(\mathbf{\sigma_{Y}}\right)\sum_{\mathbf{\alpha}\in\mathcal{A}^{\star}}\sum_{j=1} ^{N^{\prime}}a_{j,\mathbf{\alpha}}\Psi_{\mathbf{\alpha}}\left(\mathbf{X}\right)\mathbf{\phi}_ {j}\] (S7)
or expressed in a more compact matrix form:
\[\mathbf{Y}\approx\hat{\mathcal{M}}\left(\mathbf{X}\right)=\mathbf{\mu}_{\mathbf{Y}}+\mathrm{ diag}\left(\mathbf{\sigma_{Y}}\right)\mathbf{\Phi}^{\prime}\mathbf{A}\mathbf{\Psi}\left(\mathbf{X}\right)\] (S8)
with the vector \(\mathbf{\Psi}\left(\mathbf{X}\right)\in\mathbb{R}^{\mathrm{card}(\mathcal{A}^{\star}) \times 1}\) as well as the two matrices \(\mathbf{\Phi}^{\prime}\in\mathbb{R}^{N\times N^{\prime}}\) and \(\mathbf{A}\in\mathbb{R}^{N^{\prime}\times\mathrm{card}(\mathcal{A}^{\star})}\) storing the multivariate orthonormal polynomials \(\Psi_{\mathbf{\alpha}}\), the retained eigenvectors \(\mathbf{\phi}_{j}\) and the PCE coefficients \(a_{j,\mathbf{\alpha}}\), respectively.
For model training, we introduce an experimental design with the input matrix \(\mathbf{\mathcal{X}}\in\mathbb{R}^{M\times K}\) and response matrix \(\mathbf{\mathcal{Y}}\in\mathbb{R}^{N\times K}\) for \(K\) instances, \(M\) input variables and \(N\) response variables. For the PCA model, we can use the response matrix \(\mathbf{\mathcal{Y}}\) to estimate \(\mathbf{\mu}_{\mathbf{Y}}\), \(\mathbf{\sigma_{Y}}\) as well as \(\mathbf{\Sigma}_{\mathbf{Y}}\):
\[\hat{\mathbf{\mu}}_{\mathbf{Y}} =\frac{1}{K}\sum_{k=1}^{K}\mathbf{y}^{(k)}\] (S9a) \[\hat{\mathbf{\sigma}}_{\mathbf{Y}} =\sqrt{\frac{1}{K-1}\sum_{k=1}^{K}\left(\mathbf{y}^{(k)}-\hat{\mathbf{ \mu}}_{\mathbf{Y}}\right)^{2}}\] (S9b) \[\hat{\mathbf{\Sigma}}_{\mathbf{Y}} =\frac{1}{K-1}\mathbf{\mathcal{Y}}^{\ast}\left(\mathbf{\mathcal{Y}}^{\ast }\right)^{\intercal}\] (S9c)
with \(\mathbf{\mathcal{Y}}^{\ast}\) denoting the standardized response matrix storing the standardized response variables \(\mathbf{y}^{\ast}\coloneqq\mathrm{diag}\left(\hat{\mathbf{\sigma}}_{\mathbf{Y}}\right) \left(\mathbf{y}-\hat{\mathbf{\mu}}_{\mathbf{Y}}\right)\), i.e. \(\mathbf{\mathcal{Y}}^{\ast}\coloneqq\left(\mathbf{y}^{\ast(k)},\dots,\mathbf{y}^{\ast(k) },\dots,\mathbf{y}^{\ast(K)}\right)^{\intercal}\in\mathbb{R}^{N\times K}\). On the other hand, a rich variety of non-intrusive and sparse methods exist to estimate the PCE coefficient matrix \(\mathbf{A}\) using both, the input matrix \(\mathbf{\mathcal{X}}\in\mathbb{R}^{M\times K}\) and response matrix \(\mathbf{\mathcal{Y}}\)[4]. In the main study, we chose the least angle regression algorithm [5] due to its high evaluation speed and its high accuracy even for very small experimental designs.
#### PCA-PCE based Hoeffding-Sobol decomposition & Sobol indices
One of the major advantages to use PCE emulators for computational intense simulations is the relation between PCE and the Hoeffding-Sobol decomposition and thereby Sobol indices [6]. For completeness, we repeat here some of the theory already discussed elsewhere [3, 6, 7, 8, 9] and derive the PCA-PCE based Sobol indices accounting for the standardization in the PCA discussed in the previous subsection.
We start with the global variance decomposition theory derived by Sobol in 1993 [7]. It can be shown that for any univariate integrable function \(\mathcal{M}\left(\mathbf{X}\right)\) with \(M\) mutually independent random input variables \(X_{i}\) in \(\mathcal{D}_{\mathbf{X}}\) and \(i=\{1,2,\dots,M\}\), there exists a unique functional decomposition, which is often referred to as Hoeffding-Sobol decomposition [7, 9]:
\[\mathcal{M}\left(\mathbf{X}\right)\ =\ \mathcal{M}_{0}\,+\,\sum_{i=1}^{M} \mathcal{M}_{i}\left(X_{i}\right)\,+\,\sum_{1\leq i<j\leq M}\mathcal{M}_{i,j} \left(X_{i},X_{j}\right)\,+\,\dots\,+\,\mathcal{M}_{1,2,\dots,M}\left(X_{1}, \dots,X_{M}\right)\] (S10)
where the following two conditions hold:
1. The first term \(\mathcal{M}_{0}\) is constant and equal to the expected value of \(\mathcal{M}\left(\mathbf{x}\right)\): \[\mathcal{M}_{0}=\mathbb{E}\left[\mathcal{M}\left(\mathbf{X}\right)\right]=\int_{ \mathcal{D}_{\mathbf{X}}}\mathcal{M}\left(\mathbf{x}\right)\,\mathrm{d}\mathbf{x}\] (S11)
2. All the terms in the functional decomposition are orthogonal: \[\int_{\mathcal{D}_{\mathbf{X}_{u}}}\mathcal{M}_{u}\left(\mathbf{x}_{u}\right)\,dx_{i_{ k}}=0\ \,\ \ 1\leq k\leq s\] (S12)
with \(u\) being defined as a subset of indices, i.e. \(u\coloneqq\{i_{1},\ldots,i_{s}\}\subset\{1,\ldots,M\}\)
Further assuming that the function \(\mathcal{M}\left(\mathbf{X}\right)\) is square-integrable, the functional decomposition in Eq. S10 may be squared and integrated to provide the variance decomposition:
\[V=\sum_{i=1}^{M}V_{i}+\sum_{1\leq i<j\leq M}V_{i,j}+\ldots+V_{1,2,\ldots,M}\] (S13)
with the total variance \(V\) and the partial variances \(V_{u}\) defined as:
\[V =\mathrm{Var}\left[\mathcal{M}\left(\mathbf{X}\right)\right]=\int_{ \mathcal{D}_{\mathbf{X}}}\mathcal{M}^{2}\left(\mathbf{x}\right)\,\mathrm{d}\mathbf{x}- \mathcal{M}_{0}^{2}\] (S14a) \[V_{u} =\mathrm{Var}\left[\mathcal{M}_{u}\left(\mathbf{X}_{u}\right)\right]= \int_{\mathcal{D}_{\mathbf{X}_{u}}}\mathcal{M}_{u}^{2}\left(\mathbf{x}_{u}\right)\, \mathrm{d}\mathbf{x}_{u}\] (S14b)
Based on these results, Sobol indices \(S_{u}\) can be defined as a natural global sensitivity measure of \(\mathcal{M}\left(\mathbf{X}\right)\) on the input variables \(\mathbf{X}_{u}\):
\[S_{u}\coloneqq\frac{V_{u}}{V}\] (S15)
Consequently, \(S_{u}\) represents the relative contribution of the set of variables \(u\) to the total variance \(V\). First order indices \(S_{i}\) indicate the influence of \(X_{i}\) alone, whereas the higher order indices quantify possible interactions or mixed influences between multiple variables. In addition, we can also define the total Sobol \(S_{i}^{T}\) index to evaluate the total effect of an input parameter \(X_{i}\) on \(\mathcal{M}\left(\mathbf{X}\right)\):
\[S_{i}^{T}\coloneqq\frac{1}{V}\sum_{u\supset i}V_{u}\] (S16)
As shown by [6], \(S_{i}^{T}\) can also be computed as:
\[S_{i}^{T} =1-S_{\sim i}\] (S17a) \[=1-\frac{\mathrm{Var}_{X_{\sim i}}\left[\mathbb{E}_{X_{i}}\left[ \mathcal{M}\left(\mathbf{X}\right)\right]\right]}{\mathrm{Var}\left[\mathcal{M} \left(\mathbf{X}\right)\right]}\] (S17b)
where we use \(\sim\)\(i\) to denote a set of indices, which do not include \(i\), i.e. \(S_{\sim i}=S_{v}\) with \(v=\{1,\ldots,i-1,i+1,\ldots,M\}\).
Suppose now that we have a PCA-PCE surrogate model to emulate the vector-valued model response \(\mathbf{Y}=\mathcal{M}\left(\mathbf{X}\right)\) with the random input vector \(\mathbf{X}\in\mathbb{R}^{M\times 1}\) and random response vector
\(\mathbb{R}^{N\times 1}\). To derive the \(S_{i,k}^{T}\) for each response variable \(k\in\{1,2,\ldots,N\}\), we start with \(\text{Var}_{X_{\sim i}}\left[\mathbb{E}_{X_{i}}\left[Y_{k}\right]\right]\) from Eq. S17b by replacing \(Y_{k}\) with the \(k^{\text{th}}\) component of Eq. S8:
\[\text{Var}_{X_{\sim i}}\left[\mathbb{E}_{X_{i}}\left[Y_{k}\right]\right] =\mathbb{E}_{X_{\sim i}}\left[\left(\mathbb{E}_{X_{i}}\left[Y_{k} \right]\right)^{2}\right]-\left(\mathbb{E}_{X}\left[Y_{k}\right]\right)^{2}\] (S18a) \[=\mathbb{E}_{X_{\sim i}}\left[\left(\mathbb{E}_{X_{i}}\left[\mu_{Y _{k}}+\sigma_{Y_{k}}\boldsymbol{\phi}_{k}^{\text{row}}\mathbf{A}\boldsymbol{ \Psi}\left(\boldsymbol{X}\right)\right]\right)^{2}\right]-\mu_{Y_{k}}^{2}\] (S18b)
where we used \(\boldsymbol{\phi}_{k}^{\text{row}}\coloneqq\left(\phi_{k1},\ldots,\phi_{kN^{ \prime}}\right)\). We can simplify this expression by expanding the first term and considering that the expectation vanishes for all principal components, i.e. \(\mathbb{E}\left[\mathbf{A}\boldsymbol{\Psi}\left(\boldsymbol{X}\right)\right]=0\):
\[\text{Var}_{X_{\sim i}}\left[\mathbb{E}_{X_{i}}\left[Y_{k}\right]\right] =\mathbb{E}_{X_{\sim i}}\left[\left(\sigma_{Y_{k}}\boldsymbol{\phi }_{k}^{\text{row}}\mathbf{A}\mathbb{E}\left[\boldsymbol{\Psi}\left( \boldsymbol{X}\right)\right]\right)^{2}\right]\] (S19a) \[=\mathbb{E}_{X_{\sim i}}\left[\left(\sum_{\boldsymbol{\alpha} \in\mathcal{A}^{\star}}\sum_{j=1}^{N^{\prime}}\sigma_{Y_{k}}\phi_{kj}a_{j, \boldsymbol{\alpha}}\mathbb{E}\left[\boldsymbol{\Psi}\left(\boldsymbol{X} \right)\right]\right)^{2}\right]\] (S19b)
As shown by [3], due to the orthonormality of the polynomial basis \(\left\{\Psi_{\boldsymbol{\alpha}}\right\}_{\boldsymbol{\alpha}\in\mathcal{A} ^{\star}}\), we can further simplify Eq. S19b resulting in:
\[\text{Var}_{X_{\sim i}}\left[\mathbb{E}_{X_{i}}\left[Y_{k}\right]\right]= \sigma_{Y_{k}}^{2}\sum_{\boldsymbol{\alpha}\in\mathcal{A}_{i=0}^{\star}}\left( \sum_{j=1}^{N^{\prime}}\phi_{kj}a_{j,\boldsymbol{\alpha}}\right)^{2}\] (S20)
with the subset \(\mathcal{A}_{i=0}^{\star}\coloneqq\left\{\boldsymbol{\alpha}\in\mathcal{A}^{ \star}\mid\alpha_{i}=0\right\}\). Using these results, we can compute the total variance with:
\[\text{Var}\left[Y_{k}\right]=\sigma_{Y_{k}}^{2}\sum_{\boldsymbol{\alpha}\in \mathcal{A}^{\star}}\left(\sum_{j=1}^{N^{\prime}}\phi_{kj}a_{j,\boldsymbol{ \alpha}}\right)^{2}\] (S21)
In the end, we get the total PCE-PCA based Sobol index \(S_{i,k}^{T}\) for the input variable \(i\) and the response variable \(k\) by inserting Eq. S20 and Eq. S21 into Eq. S17b:
\[S_{i,k}^{T}=1-\frac{\sum_{\boldsymbol{\alpha}\in\mathcal{A}_{i=0}^{\star}} \left(\sum_{j=1}^{N^{\prime}}\phi_{kj}\,a_{j,\boldsymbol{\alpha}}\right)^{2}}{ \sum_{\boldsymbol{\alpha}\in\mathcal{A}^{\star}}\left(\sum_{j=1}^{N^{\prime}} \phi_{kj}\,a_{j,\boldsymbol{\alpha}}\right)^{2}}\] (S22)
### Uncertainty analysis
For completeness, we repeat here the uncertainty analysis pipeline adopted for the measured and simulated pulse-height spectra and highlight some changes to [10].
For the radiation measurements, the statistical uncertainty of the net count rate spectra \(c_{\text{exp},k}\) characterized by the standard deviation was computed adopting a probabilistic Poisson model [11]:
\[\sigma_{\text{pois},\text{exp},k}=\sqrt{\frac{C_{\text{gr},k}}{t_{\text{gr}}^{ 2}}+\frac{C_{\text{bg},k}}{t_{\text{bg}}^{2}}}\] (S23)
where \(C_{\text{gr},k}\) and \(C_{\text{bg},k}\) are the gross and background counts in channel \(k\) together with the gross and background measurement live times \(t_{\text{gr}}\) and \(t_{\text{bg}}\), respectively. The small statistical uncertainty in the live time measurement is neglected. To compute the source activity \(A\) as a function of the measurement date \(t\), we use the fundamental exponential law of decay, i.e. \(A=A_{0}\cdot 2^{-\Delta t/t_{1/2}}\)[11].
The uncertainty induced by the source activity \(A\) normalization is quantified using the standard error propagation methodology for independent variables [12; 13]:
\[\sigma_{A}=\sigma_{A_{0}}\cdot 2^{-\Delta t/t_{1/2}}\] (S24)
with the reference activity \(A_{0}\) and associated uncertainty \(\sigma_{A_{0}}\) provided by the vendor, the source half life \(t_{1/2}\)[14] as well as the time difference \(\Delta t=t-t_{0}\) between the reference date \(t_{0}\) and the measurement date \(t\). Contributions of the uncertainties in \(t_{1/2}\) and \(\Delta t\) to \(\sigma_{A}\) are found to be less than 1% for all performed measurements and are therefore neglected. We then summarize the total experimental uncertainty as follows [12; 13]:
\[\sigma_{\text{tot,exp},k}=\sqrt{\left(\frac{\sigma_{\text{pois,exp},k}}{A} \right)^{2}+\left(\frac{c_{\text{exp},k}}{A}\cdot\sigma_{A}\right)^{2}}\] (S25)
For the simulations, we computed the statistical uncertainty of the net count rate spectrum \(c_{\text{sim},k}\) characterized by the standard deviation as follows [11]:
\[\sigma_{\text{stat,sim},k}=\sqrt{\frac{1}{N_{\text{pr}}\left(N_{\text{pr}}-1 \right)}\cdot\left[\left(N_{\text{pr}}-N_{\text{dep}}\right)\cdot c_{\text{ sim},k}^{2}+\sum_{l=1}^{N_{\text{dep}}}\left(c_{\text{sim},kl}-c_{\text{ sim},k}\right)^{2}\right]}\] (S26)
where \(c_{\text{sim},kl}\) are the individual broadened energy deposition events in the detector channel \(k\), \(N_{\text{dep}}\) the number of recorded events and \(N_{\text{pr}}\) the number of simulated primaries. It is good practice in Monte Carlo studies to report not only the estimated uncertainty in the sample mean \(c_{\text{sim},k}\) using the sample standard deviation \(\sigma_{\text{stat,sim},k}\) but also the so called variance of the sample variance \(\text{VOV}_{k}\) for the detector channel \(k\) to quantify the statistical uncertainty in \(\sigma_{\text{stat,sim},k}^{2}\) itself [15]:
\[\text{VOV}_{k}=\frac{\text{Var}\left(\sigma_{\text{stat,sim},k}^{2}\right)}{ \sigma_{\text{stat,sim},k}^{4}}=\frac{\left(N_{\text{pr}}-N_{\text{dep}} \right)\cdot c_{\text{sim},k}^{4}+\sum_{l=1}^{N_{\text{dep}}}\left(c_{\text{ sim},kl}-c_{\text{sim},k}\right)^{4}}{\left[\left(N_{\text{pr}}-N_{\text{dep}} \right)\cdot c_{\text{sim},k}^{2}+\sum_{l=1}^{N_{\text{dep}}}\left(c_{\text{ sim},kl}-c_{\text{sim},k}\right)^{2}\right]^{2}}-\frac{1}{N_{\text{pr}}}\] (S27)
The propagation of the systematic uncertainties for the simulated detector response was performed by the Monte Carlo sampling technique. We considered the same model parameters for the uncertainty propagation as in [10]. These parameters are the energy calibration factor \(D_{1}\left[\text{keV}^{-1}\right]\) as well as the empirical resolution parameters \(B_{1}\left[-\right]\) and \(B_{2}\left[-\right]\). However, we adapted the marginal distributions by introducing truncated normal distributions as summarized in Table S3. In addition, we accounted for the statistical dependence of the model parameters \(B_{1}\) and \(B_{2}\) by correlated sampling using the Gaussian copula \(\mathcal{C}_{\mathcal{N}}\)[16]:
\[\{B_{1}^{\star},B_{2}\} \sim\mathcal{C}_{\mathcal{N}}\left(F_{B_{1}^{\star}}\left(b_{1}^{ \star}\right),F_{B_{2}}\left(b_{2}\right);\;\mathbf{R}\right)\] (S28a) \[\sim\Phi_{2}\left(\Phi^{-1}\left(F_{B_{1}^{\star}}\left(b_{1}^{ \star}\right)\right),\Phi^{-1}\left(F_{B_{2}}\left(b_{2}\right)\right);\; \mathbf{R}\right)\] (S28b)
with the log-transformed variable \(B_{1}^{\star}\coloneqq\log\left(B_{1}\right)\), the linear correlation matrix \(\mathbf{R}\) obtained by the regression analysis, the marginal distribution functions \(F\) provided in Table S3, the bivariate Gaussian distribution function \(\Phi_{2}\) associated with the Gaussian copula \(\mathcal{C}_{\mathcal{N}}\) and the inverse cumulative distribution function of the standard normal distribution \(\Phi^{-1}\), respectively. The energy calibration factor \(D_{1}\) is sampled independently according to the corresponding marginal as in [10]. For more details and relevant literature on the copula theory, the reader is referred to [17; 18].
The \(N_{\text{MC}}\in\mathbb{N}_{>1}\) independently drawn input samples \(\boldsymbol{\mathcal{X}}_{\text{MC}}=\left(\boldsymbol{x}^{(1)},...,\boldsymbol {x}^{(m)},...\boldsymbol{x}^{(N_{\text{MC}})}\right)^{\intercal}\) from the probabilistic input model with \(\boldsymbol{X}\coloneqq\left(D_{1},B_{1},B_{2}\right)^{\intercal}\) are then propagated through the postprocessing pipeline described in [10] to obtain the corresponding spectral count rate samples \(\boldsymbol{\mathcal{Y}}_{\text{MC}}=\boldsymbol{\mathcal{Y}}_{\text{MC}}\)
\((c^{(1)}_{\text{sim},k},...,c^{(m)}_{\text{sim},k},...,c^{(\text{M}_{\text{MC}})} _{\text{sim},k})^{\intercal}\) with \(k\in\{1,...,1024\}.\) These samples can then be used to compute the sample standard deviation \(\sigma_{\text{sys},\text{sim},k}\) similar to Eq S9b and thereby quantify the systematic uncertainty with respect to the empirical model parameters \(D_{1},\)\(B_{1}\) and \(B_{2}.\) The total uncertainty characterized by the sample standard deviation can be summarized in the same way as for the experimental uncertainty [12, 13]:
\[\sigma_{\text{tot},\text{sim},k}=\sqrt{\sigma_{\text{stat},\text{sim},k}^{2} +\sigma_{\text{sys},\text{sim},k}^{2}}\] (S29)
**Supplementary Figures**
**Fig. S2**: **Posterior point estimator convergence.** These graphs show the convergence of the posterior point estimators, i.e. the maximum a posteriori probability estimate \(\mathbf{x}_{\rm MAP}\), the posterior mean \(\mathbf{x}_{\rm Mean}\) and the posterior median \(\mathbf{x}_{\rm Median}\), as a function of the Markov Chain Monte Carlo steps and each individual model parameter: **a** The Birks related stopping power parameter \(dE/ds\mid_{\rm Birks}\). **b** The free carrier fraction \(\eta_{e/h}\). **c** The trapping related stopping power parameter \(dE/ds\mid_{\rm Trap}\). **d** The discrepancy model variance \(\sigma_{\varepsilon}^{2}\). In addition, the burn-in threshold is highlighted as a dashed-dotted black line in each graph.
**Fig. S3**: **Simulated spectral detector response using a Bayesian calibrated non-proportional model.** The measured and simulated spectral detector responses are shown for four different calibrated radionuclide sources: **a**\({}^{57}\)Co (\(A=1.113(18)\times 10^{5}\) Bq). **b**\({}^{109}\)Cd (\(A=7.38(15)\times 10^{4}\) Bq). **c**\({}^{133}\)Ba (\(A=2.152(32)\times 10^{5}\) Bq). **d**\({}^{152}\)Eu ( \(A=1.973(30)\times 10^{4}\) Bq). The measured net count rate \(c_{\rm exp}\) as well as the simulated net count rate adopting a proportional scintillation model \(c_{\rm sim}\) were presented already elsewhere [10]. We obtained the simulated net count rate \(c_{\rm sim}^{\rm corr}\) the same way as \(c_{\rm sim}\) but accounted for the non-proportional scintillation effects by the Bayesian calibrated model presented in this study. For the calibration, we used the \({}^{60}\)Co dataset [10]. For all graphs presented in this figure, uncertainties are provided as 1 standard deviation (SD) shaded areas (coverage factor \(k=1\)). These uncertainties are only visible for \(c_{\rm exp}\).
**Fig. S4**: **Uncertainty quantification for the \({}^{60}\)Co spectral detector response.** The measured and simulated mean net count rates \(c_{\rm exp}\) and \(c_{\rm sim}\) are shown for a \({}^{60}\)Co calibrated radionuclide source (\(A=3.08(5)\times 10^{5}\) Bq) together with the corresponding uncertainty estimates, i.e. the combined statistical and systematic measured uncertainty \(\sigma_{\rm tot,exp}\), the simulated statistical uncertainty \(\sigma_{\rm stat,sim}\) as well as the simulated systematic uncertainty \(\sigma_{\rm sys,sim}\), using 1 standard deviation values. The measurement results were presented already elsewhere [10]. Two different scintillation models have been used for the simulations: **a** Proportional scintillation model published in [10]. **b** Bayesian calibrated non-proportional scintillation model presented in this study. Distinct spectral regions, i.e. the backscatter peak (BSP), the Compton edge (CE) as well as the full energy peaks (FEP) are highlighted for both graphs. The normalized residual level \(|\)\(c_{\rm exp}-c_{\rm sim}\)\(|\)\(/\sigma_{\rm tot}\) with \(\sigma_{\rm tot}:=\sqrt{\sigma_{\rm tot,exp}^{2}+\sigma_{\rm tot,sim}^{2}}\) for a coverage factor of 2 is marked with the horizontal dash-dotted black line in the lower subfigures. More information on the numerical computation of the uncertainty estimates can be found in [10] and in the Materials and Methods section of this document.
**Fig. S5**: **Uncertainty quantification for the \({}^{88}\)Y spectral detector response.** The measured and simulated mean net count rates \(c_{\rm exp}\) and \(c_{\rm sim}\) are shown for a \({}^{88}\)Y calibrated radionuclide source (\(A=6.83(14)\times 10^{5}\) Bq) together with the corresponding uncertainty estimates, i.e. the combined statistical and systematic measured uncertainty \(\sigma_{\rm tot,exp}\), the simulated statistical uncertainty \(\sigma_{\rm stat,sim}\) as well as the simulated systematic uncertainty \(\sigma_{\rm sys,sim}\), using 1 standard deviation values. The measurement results were presented already elsewhere [10]. Two different scintillation models have been used for the simulations: **a** Proportional scintillation model published in [10]. **b** Bayesian calibrated non-proportional scintillation model presented in this study. Distinct spectral regions, i.e. the backscatter peak (BSP), the Compton edges (CE) as well as the full energy peaks (FEP) are highlighted for both graphs. The normalized residual level \(|\)\(c_{\rm exp}-c_{\rm sim}\)\(|\)\(/\sigma_{\rm tot}\) with \(\sigma_{\rm tot}:=\sqrt{\sigma_{\rm tot,exp}^{2}+\sigma_{\rm tot,sim}^{2}}\) for a coverage factor of 2 is marked with the horizontal dash-dotted black line in the lower subfigures. More information on the numerical computation of the uncertainty estimates can be found in [10] and in the Materials and Methods section of this document.
**Fig. S6**: **Uncertainty quantification for the \({}^{137}\)Cs spectral detector response.** The measured and simulated mean net count rates \(c_{\rm exp}\) and \(c_{\rm sim}\) are shown for a \({}^{137}\)Cs calibrated radionuclide source (\(A=2.266(34)\times 10^{5}\) Bq) together with the corresponding uncertainty estimates, i.e. the combined statistical and systematic measured uncertainty \(\sigma_{\rm tot,exp}\), the simulated statistical uncertainty \(\sigma_{\rm stat,sim}\) as well as the simulated systematic uncertainty \(\sigma_{\rm sys,sim}\), using 1 standard deviation values. The measurement results were presented already elsewhere [10]. Two different scintillation models have been used for the simulations: **a** Proportional scintillation model published in [10]. **b** Bayesian calibrated non-proportional scintillation model presented in this study. Distinct spectral regions, i.e. the backscatter peak (BSP), the Compton edge (CE) as well as the full energy peak (FEP) are highlighted for both graphs. The normalized residual level \(\mid c_{\rm exp}-c_{\rm sim}\mid/\sigma_{\rm tot}\) with \(\sigma_{\rm tot}\coloneqq\sqrt{\sigma_{\rm tot,exp}^{2}+\sigma_{\rm tot,sim}^ {2}}\) for a coverage factor of 2 is marked with the horizontal dash-dotted black line in the lower subfigures. More information on the numerical computation of the uncertainty estimates can be found in [10] and in the Materials and Methods section of this document.
**Fig. S7**: **Uncertainty quantification for the \({}^{57}\)Co spectral detector response.** The measured and simulated mean net count rates \(c_{\rm exp}\) and \(c_{\rm sim}\) are shown for a \({}^{57}\)Co calibrated radionuclide source (\(A=1.113(18)\times 10^{5}\) Bq) together with the corresponding uncertainty estimates, i.e. the combined statistical and systematic measured uncertainty \(\sigma_{\rm tot,exp}\), the simulated statistical uncertainty \(\sigma_{\rm stat,sim}\) as well as the simulated systematic uncertainty \(\sigma_{\rm sys,sim}\), using 1 standard deviation values. The measurement results were presented already elsewhere [10]. Two different scintillation models have been used for the simulations: **a** Proportional scintillation model published in [10]. **b** Bayesian calibrated non-proportional scintillation model presented in this study. The normalized residual level \(\mid c_{\rm exp}-c_{\rm sim}\mid/\sigma_{\rm tot}\) with \(\sigma_{\rm tot}\coloneqq\sqrt{\sigma_{\rm tot,exp}^{2}+\sigma_{\rm tot,sim}^ {2}}\) for a coverage factor of 2 is marked with the horizontal dash-dotted black line in the lower subfigures. More information on the numerical computation of the uncertainty estimates can be found in [10] and in the Materials and Methods section of this document.
**Fig. S8**: **Uncertainty quantification for the \({}^{109}\)Cd spectral detector response.** The measured and simulated mean net count rates \(c_{\rm exp}\) and \(c_{\rm sim}\) are shown for a \({}^{109}\)Cd calibrated radionuclide source (\(A=7.38(15)\times 10^{4}\) Bq) together with the corresponding uncertainty estimates, i.e. the combined statistical and systematic measured uncertainty \(\sigma_{\rm tot,exp}\), the simulated statistical uncertainty \(\sigma_{\rm stat,sim}\) as well as the simulated systematic uncertainty \(\sigma_{\rm sys,sim}\), using 1 standard deviation values. The measurement results were presented already elsewhere [10]. Two different scintillation models have been used for the simulations: **a** Proportional scintillation model published in [10]. **b** Bayesian calibrated non-proportional scintillation model presented in this study. The normalized residual level \(\mid c_{\rm exp}-c_{\rm sim}\mid/\sigma_{\rm tot}\) with \(\sigma_{\rm tot}\coloneqq\sqrt{\sigma_{\rm tot,exp}^{2}+\sigma_{\rm tot,sim}^ {2}}\) for a coverage factor of 2 is marked with the horizontal dash-dotted black line in the lower subfigures. More information on the numerical computation of the uncertainty estimates can be found in [10] and in the Materials and Methods section of this document.
**Fig. S9: Uncertainty quantification for the \({}^{133}\)Ba spectral detector response.** The measured and simulated mean net count rates \(c_{\rm exp}\) and \(c_{\rm sim}\) are shown for a \({}^{133}\)Ba calibrated radionuclide source (\(A=2.152(32)\times 10^{5}\) Bq) together with the corresponding uncertainty estimates, i.e. the combined statistical and systematic measured uncertainty \(\sigma_{\rm tot,exp}\), the simulated statistical uncertainty \(\sigma_{\rm stat,sim}\) as well as the simulated systematic uncertainty \(\sigma_{\rm sys,sim}\), using 1 standard deviation values. The measurement results were presented already elsewhere [10]. Two different scintillation models have been used for the simulations: **a** Proportional scintillation model published in [10]. **b** Bayesian calibrated non-proportional scintillation model presented in this study. The normalized residual level \(\mid c_{\rm exp}-c_{\rm sim}\mid/\sigma_{\rm tot}\) with \(\sigma_{\rm tot}\coloneqq\sqrt{\sigma_{\rm tot,exp}^{2}+\sigma_{\rm tot,sim}^ {2}}\) for a coverage factor of 2 is marked with the horizontal dash-dotted black line in the lower subfigures. More information on the numerical computation of the uncertainty estimates can be found in [10] and in the Materials and Methods section of this document.
**Fig. S10**: **Uncertainty quantification for the \({}^{152}\)Eu spectral detector response.** The measured and simulated mean net count rates \(c_{\rm exp}\) and \(c_{\rm sim}\) are shown for a \({}^{152}\)Eu calibrated radionuclide source (\(A=1.973(30)\times 10^{4}\) Bq) together with the corresponding uncertainty estimates, i.e. the combined statistical and systematic measured uncertainty \(\sigma_{\rm tot,exp}\), the simulated statistical uncertainty \(\sigma_{\rm stat,sim}\) as well as the simulated systematic uncertainty \(\sigma_{\rm sys,sim}\), using 1 standard deviation values. The measurement results were presented already elsewhere [10]. Two different scintillation models have been used for the simulations: **a** Proportional scintillation model published in [10]. **b** Bayesian calibrated non-proportional scintillation model presented in this study. The normalized residual level \(\mid c_{\rm exp}-c_{\rm sim}\mid/\sigma_{\rm tot}\) with \(\sigma_{\rm tot}\coloneqq\sqrt{\sigma_{\rm tot,exp}^{2}+\sigma_{\rm tot,sim}^ {2}}\) for a coverage factor of 2 is marked with the horizontal dash-dotted black line in the lower subfigures. More information on the numerical computation of the uncertainty estimates can be found in [10] and in the Materials and Methods section of this document.
## Supplementary Tables
**Table S2**: **Compton edge domain sensitivity.** To investigate the sensitivity of the selected Compton edge domain \(\mathcal{D}_{E}\coloneqq\{E:E_{\text{CE}}-3\cdot\sigma_{\text{tot}}\left(E_{ \text{CE}}\right)\leq E\leq E_{\text{FEP}}-2\cdot\sigma_{\text{tot}}\left(E_{ \text{FEP}}\right)\}\) (cf. methods section in the main study) on the Bayesian inversion results, we have altered the domain size by \(2.5\%\) symmetrically with respect to the domain boundaries and performed the emulator training and Bayesian inversion computation on this new domain. This alteration corresponds to \(\approx 18\%\) of the observed Compton edge shift (cf. discussion section in the main study). This table summarizes the posterior point and dispersion estimator results for these additional computations, i.e. the maximum a posteriori probability estimate \(\mathbf{x}_{\text{MAP}}\), the posterior mean \(\mathbf{x}_{\text{Mean}}\) and the posterior median \(\mathbf{x}_{\text{Median}}\) together with the \(95\%\) credible interval and the posterior standard deviation \(\mathbf{\sigma}_{\mathbf{x}}\) for the parameters \(\mathbf{x}\coloneqq\left(dE/ds\mid_{\text{Birks}},\ dE/ds\mid_{\text{Trap}},\ \eta_{e/h},\ \sigma_{\varepsilon}^{2}\right)^{\intercal}\). These parameters are the Birks related stopping power parameter \(dE/ds\mid_{\text{Birks}}\), the trapping related stopping power parameter \(dE/ds\mid_{\text{Trap}}\), the free carrier fraction \(\eta_{e/h}\) as well as the discrepancy model variance \(\sigma_{\varepsilon}^{2}\).
\({}^{1}\)Central credible interval with a probability mass of \(95\%\).
**Table S3**: **Summary of the marginal distribution.** This table summarizes the adopted marginal distributions of the empirical models used to quantify the systematic uncertainties.
\begin{tabular}{c c c} \hline \hline Parameter & Marginal distribution1 & Unit \\ \hline \(D_{1}\) & \(\mathcal{N}\left(d_{1};\;3.33\times 10^{-1},8\times 10^{-8},-\infty,\infty\right)\) & keV\({}^{-1}\) \\ \(B_{1}^{*}\) & \(\mathcal{N}\left(b_{1}^{*};\;-5.62\times 10^{-1},6\times 10^{-2},-\infty,\infty\right)\) & – \\ \(B_{2}\) & \(\mathcal{N}\left(b_{2};\;6.33\times 10^{-1},1.1\times 10^{-2},0,\infty\right)\) & – \\ \hline \hline \end{tabular}
**Table S4**: **Summary of the marginal distribution.** This table summarizes the adopted marginal distributions of the empirical models used to quantify the systematic uncertainties. |
2303.12014 | Authority without Care: Moral Values behind the Mask Mandate Response | Face masks are one of the cheapest and most effective non-pharmaceutical
interventions available against airborne diseases such as COVID-19.
Unfortunately, they have been met with resistance by a substantial fraction of
the populace, especially in the U.S. In this study, we uncover the latent moral
values that underpin the response to the mask mandate, and paint them against
the country's political backdrop. We monitor the discussion about masks on
Twitter, which involves almost 600k users in a time span of 7 months. By using
a combination of graph mining, natural language processing, topic modeling,
content analysis, and time series analysis, we characterize the responses to
the mask mandate of both those in favor and against them. We base our analysis
on the theoretical frameworks of Moral Foundation Theory and Hofstede's
cultural dimensions. Our results show that, while the anti-mask stance is
associated with a conservative political leaning, the moral values expressed by
its adherents diverge from the ones typically used by conservatives. In
particular, the expected emphasis on the values of authority and purity is
accompanied by an atypical dearth of in-group loyalty. We find that after the
mandate, both pro- and anti-mask sides decrease their emphasis on care about
others, and increase their attention on authority and fairness, further
politicizing the issue. In addition, the mask mandate reverses the expression
of Individualism-Collectivism between the two sides, with an increase of
individualism in the anti-mask narrative, and a decrease in the pro-mask one.
We argue that monitoring the dynamics of moral positioning is crucial for
designing effective public health campaigns that are sensitive to the
underlying values of the target audience. | Yelena Mejova, Kyrieki Kalimeri, Gianmarco De Francisci Morales | 2023-03-16T17:08:16Z | http://arxiv.org/abs/2303.12014v2 | # Authority without Care:
###### Abstract
Face masks are one of the cheapest and most effective non-pharmaceutical interventions available against airborne diseases such as COVID-19. Unfortunately, they have been met with resistance by a substantial fraction of the populace, especially in the U.S. In this study, we uncover the latent moral values that underpin the response to the mask mandate, and paint them against the country's political backdrop. We monitor the discussion about masks on Twitter, which involves almost 600k users in a time span of 7 months. By using a combination of graph mining, natural language processing, topic modeling, content analysis, and time series analysis, we characterize the responses to the mask mandate of both those in favor and against them. We base our analysis on the theoretical frameworks of Moral Foundation Theory and Hofstede's cultural dimensions.
Our results show that, while the anti-mask stance is associated with a conservative political leaning, the moral values expressed by its adherens diverge from the ones typically used by conservatives. In particular, the expected emphasis on the values of authority and purity is accompanied by an atypical dearth of in-group loyalty. We find that after the mandate, both pro- and anti-mask sides decrease their emphasis on care about others, and increase their attention on authority and fairness, further politicizing the issue. In addition, the mask mandate reverse the expression of Individualism-Collectivism between the two sides, with an increase of individualism in the anti-mask narrative, and a decrease in the pro-mask one. We argue that monitoring the dynamics of moral positioning is crucial for designing effective public health campaigns that are sensitive to the underlying values of the target audience.
1ISI Foundation, V. Chisola 5, Turin, Italy, 2CENTAI, C. Inghilterra 3, Turin, Italy
[email protected], [email protected], [email protected]
## Introduction
The COVID-19 pandemic has been an unprecedented event, which has also brought about an infoculate that makes public health response difficult. COVID denial, anti-vaccine sentiment, and other flavors of theories (from doubts to full-blown conspiracies) have been documented in social media [14]. Among the many controversies, surprisingly, wearing a mask has become extremely politicitized and contentious. Inconsistent messaging by public health organizations has seriously undermined public compliance to this simple measure. Initially, the WHO recommended that masks should be worn only by professionals or those taking care of a sick person [2], and in the U.S., the Center for Disease Control and Prevention (CDC) recommended the use of masks only for healthcare workers and people who were sick. Only on April 3rd, 2020, the CDC officially recommended wearing non-medical cloth face coverings when in public places [1]. Soon after, speculations that masks do more harm than good [18] and that they foster a false sense of security [10] started to proliferate. Public health experts have acknowledged that it has been a challenge to communicate with groups holding strong values of freedom [14]. Indeed, globally, research has found that the value of individualism is associated with higher COVID-19 mortality rates [1] and less adherence to preventive measures [11]. However, how personal moral values evolve around governmental messaging concerning these preventive measures is not yet understood.
Mask-wearing behavior and attitudes can be influenced by multiple factors. There is extensive evidence that political ideology and identity influences attitudes, judgments, and behaviors [15]. At the same time, political identity is deeply intertwined with moral values [1, 12]. Finally, and perhaps obviously, moral values are directly linked to moral decision making [1], which completes the triad. In particular, there is an altruistic component in mask-wearing, as their main purpose is to protect others, which is related to solidarity towards the in-group in the face of an out-group threat [13]. For this reason, among moral values we pay particular attention to the individualist-collectivist angle. Finally, there is ample literature on the division and polarization of news and information sources in general.1 As users increasingly rely on social media to satisfy their information needs,2 the lines between professional reporting and personal opinions begin to blur. For this reason, we contextualize the information environment in which the mask debate takes place in terms of information sources from peers--within the Twitter
community--and from external sources.
This work provides a fine-grained analysis of the moral values of those expressing opinions around masking by applying the Moral Foundations Theory to a dataset of Twitter posts spanning the beginning of the pandemic, from January to July 2020. In particular, we ask: _What is the anatomy of the collective discussion on mask wearing around the mask mandate on Twitter?_ In particular, we analyze different facets of this discussion in the U.S.:
1. How does the users' stance relate to their political leaning?
2. What moral values do adherents to pro- or anti-masking stances hold?
3. What is the information environment around their arguments?
Our findings confirm the known political divisions, with liberals supporting the pro-mask and conservatives the anti-mask stance. The moral narrative of the two sides also differs: those on the anti-mask side invoke the values of _authority_ and _purity_, while those promoting mask-wearing emphasize _care_ and _loyalty_. The introduction of the mask mandate shifts these values from an emphasis on _care_ to an increased attention on _authority_ and _fairness_, thus escalating the politicization of the issue. Content analysis shows that the increasingly individualistic views espoused by the anti-mask side are accompanied by the allegations of the dangers and ineffectiveness of mask-wearing, supported by resources from social media and sometimes even governmental and scientific sources. We argue that the mask-wearing debate has a dynamic moral landscape that should be carefully monitored to design effective public messaging campaigns that reflect the shifting values of their target audience. Such studies are especially important, given the rising prominence of social media as a platform for public policy discussion and influence (Shapiro and Hemphill, 2017).
## Related Work
Masking Behavior.Mask mandates have been shown to be effective in decreasing COVID infection rates in the U.S., both in urbanized and rural areas (Krishnamachari et al., 2021). Early studies have tested the effect of mask adoption, with simulations showing that even relatively ineffective face coverings could meaningfully reduce community transmission (Eikenberry et al., 2020). Unfortunately, the public response to the masking mandates has been partial, with only a 12% increase in mask-wearing immediately after the CDC masking guideline on April 3, 2020 (Goldberg et al., 2020). A survey of residents of 10 states in the U.S. relates mask-wearing to COVID cases in the state, political party in power, and individual measures of "social capital", as well as some demographics (Hao, Shao, and Huang, 2021). They find that respondents are more likely to wear a mask if there are more COVID-related deaths in the state (Odds Ratio of \(1.26\)), the Democratic party is in power (\(\text{OR}=2.0\)), and if the respondents more often speak to their friends and family (\(\text{OR}=1.16\)). Although misinformation has been blamed for the resistance to mask-wearing and social distancing, a nationally representative survey has found that the beliefs about the consequences of these behaviors are more predictive of people's compliance (Hornik et al., 2021). Especially in the U.S., beliefs and trust in authority are often strongly related to the people's political affiliations. For instance, Republicans, conservatives, and nationalists are less likely to believe that the World Health Organization (WHO) can effectively manage the pandemic (Bayram and Shields, 2021).
Social Media & Masking.A study of tweets during the early days of the COVID pandemic (Feb-March, 2020) identified methods for decreasing the spread of COVID as one of the main themes and the wearing of masks as one of those most associated with positive sentiment (Abd-Alrazaq et al., 2020). However, authors of a later study spanning January to October 2020 showed that the output of automated sentiment analysis tools corresponds poorly with the mask-related sentiment expressed in the tweets due to the richness of the language used (He et al., 2021). Instead, they perform manual coding of a sample of the tweets to identify several major categories of concerns around masking, including physical discomfort, effectiveness, appropriateness, and political beliefs. A network analysis of the mask-related tweets has shown the pro-mask activists exist in a kind of "echo chamber" (Cinelli et al., 2021; Garimella et al., 2018), and that they tend to ignore the subversive rhetor of the anti-mask fringe (Lang, Erickson, and Jing-Schmidt, 2021). Instead, a recent Twitter study found a focus on ongoing news (Coffas et al., 2021). This fringe is more likely to use toxic language, including insults and profanity, than the pro-mask ones (Pascual-Ferra et al., 2021). The authors link this behavior to either the vociferous protestations of a minority group (Miller and Morrison, 2009) or potential signaling as an in-group behavior and a marker of personal identification. Alongside this toxicity, other studies find widespread misinformation and misunderstandings in the social media discussions around mask use. These include the beliefs that COVID19 is over-hyped by the media, that masks are ineffective, and that they do more harm than good (Keller, Honea, and Ollivant, 2021). These beliefs were then shown to impact the mask-wearing (and social distancing) behaviors (Hornik et al., 2021). In this study, we introduce a dimension of _moral values_, which we argue underlies some of the disagreements on the appropriate use of masks during the epidemic, and which may shed some light on the moral reasoning behind the rhetoric.
Theoretical Framework.Our study of the rhetoric around masks during COVID-19 is grounded in the Moral Foundations Theory (MFT), its manifestation in interpersonal and inter-group communication, and its reflection at the societal level as an individualist or collectivist cultural dimension. According to social identity theory, members of an in-group will look for negative aspects of an out-group, thus enhancing their self-image (Tajfel et al., 1979). Strong in-group and out-group reasoning at a societal level determines whether a society is individualistic or collectivist (IC), as defined by Hofstede (2001). The IC dimension considers the degree to which societies are integrated into groups and their perceived obligations and dependence on groups. Individualism indicates there is a greater importance placed on attaining
personal goals. Collectivism indicates there is a greater importance placed on the goals and well-being of the group. People in collectivist societies generally distinguish sharply between in- and out-groups, while people in individualistic societies treat everyone as a potential in-group member and thus apply universal values to everyone. Cross-cultural research has demonstrated that the United States is the prototypical individualist culture based on the IC dimension [12, 13]. At the same time, the U.S. show a measurable variation on this dimension [21] at state level.
The societal dimensions of collectivism and individualism can be related to the individuals' adherence to specific moral dimensions, as postulated by the Moral Foundations Theory [1, 1], which include: _care/harm_, fundamental concerns for the suffering of others, including virtues of caring and compassion; _fairness/cheating_, concerns about unfair treatment, inequality, and more abstract notions of justice; _loyalty/betrayal_, concerns related to obligations of group membership; _authority/subversion_, concerns related to social order and the obligations of hierarchical relationships such as obedience, respect, and proper role fulfillment; and _purity/degradation_, with concerns about physical and spiritual contagion, including virtues of chasticity, wholesomeness, and control of desires. These foundations are shown to underlie human judgements and decision-making [14] on societal topics ranging from vaccine hesitancy [1, 15], to politics [16, 17], religion, and social cooperation [1, 18]. Here, we place the focal point on the linguistic analysis of values expressed via Twitter, and aim to clarify peoples' dispositions and attitudes towards interpersonal and inter-group processes related to persuasion and communication narratives.
The moral dimensions of MFT have also been linked to political ideologies [10, 13], with conservatives emphasizing the in-group relationships and tradition, and liberals endorsing fairness and equal opportunity. In the United States, the mask-wearing measure has also been strongly associated with the partisan divide. Social identity is a primary reason behind people's decision whether to wear a mask during the pandemic [22], and surveys show that faith in President Trump is a strong predictor of refusal to social distance, and its effect is largest among individuals high in binding foundations [1]. Indeed, the U.S. counties that showed strong support for Trump in 2016 practiced significantly lower mask-wearing in 2020 [1]. This work illustrates the strong connection between the attitudes expressed towards mask-wearing on Twitter and the the political leaning of those expressing them, and shows the shifts in moral emphasis after the mask mandates are introduced.
## Data
We begin by collecting tweets mentioning the keywords "mask", "facemask", "ffp3", and "n95" (the latter two refer to popular kinds of masks), spanning the dates of January 1st to July 30th, 2020, using the GOT3 library [12]. These keywords were chosen by considering the special Twitter Covid-19, stream3 and picking the most common English keywords related to masks. This collection results in \(18\,245\,298\) tweets from \(5\,935\,103\) users. Following recommendations from existing literature, we then perform several filtering steps in order to ensure that the tweets can be used to assess the stance of the user on masking, and that the account is likely to belong to a human living in U.S.:
Footnote 3: [https://developer.twitter.com/en/docs/twitter-api/tweets/covid-19-stream/filtering-rules](https://developer.twitter.com/en/docs/twitter-api/tweets/covid-19-stream/filtering-rules)
Footnote 4: We use lists compiled by finding top bigrams (considering the 300 most frequent) in the dataset and then manually labeling them for non-relevance. The irrelevant list is {‘hair mask’,’gas mask’,’ sleeping mask’, ‘major’s mask’, ‘ski mask’,’eye mask’, ‘clay mask’, ‘tear gas’} and the relevant list is {‘covid’, ‘comovairus’,’sars-cov-2’, ‘sars-cov2’, ‘social distance’, ‘social distance’}.
Footnote 5: Code available at [https://sites.google.com/site/yelenamejova/resources](https://sites.google.com/site/yelenamejova/resources)
Footnote 6: [https://pypi.org/project/tuszipcode/](https://pypi.org/project/tuszipcode/)
* Relevance classifier (details next).
* Exclude users whose location cannot be mapped to one of U.S. states or Washington DC.
* Exclude those not having at least 1 English tweet [1].
* Exclude users with only 1 tweet [1].
* Exclude top 0.1% of users by the number of tweets (having higher posting rate) [13].
* Exclude users whose friends to followers ratio is \(>\)10 [1].
We proceed by making sure the tweets we collect are indeed about mask-wearing due to the COVID-19 epidemic. Upon manual examination, we find several other topics captured, such as advice on masking during protests, sports-related wear, and beauty products. To remove such content, we use a set of _distant_ labels to identify the non-relevant content.4 We manually annotate a balanced random selection of 300 tweets associated with a distant label and find 96% accuracy in distinguishing between relevant and non-relevant tweets (expected accuracy of a random baseline is 50%). We use \(430\,568\) tweets with the respective distant labels to train the relevance classifier. The model is a logistic regression, trained on tf-idf-weighted unigram counts extracted from the tweets and with inverse-proportional class weighting to mitigate class imbalance. As text pre-processing, we remove URLs, numbers, handles, hashtag (the # symbols), retweet indicators, and stopwords, keep whole words, and perform snowball stemming. The accuracy in 5-fold cross-validation is 99.7%. We draw another balanced sample of 300 tweets thus classified and find the accuracy of the classifier to be 82.3% (inter-annotator agreement overlap of 90%, Cohen's kappa of 0.76 between 3 annotators).
Footnote 4: We use lists compiled by finding top bigrams (considering the 300 most frequent) in the dataset and then manually labeling them for non-relevance. The irrelevant list is {‘hair mask’,’gas mask’,’sleeping mask’, ‘major’s mask’, ‘ski mask’,’eye mask’, ‘clay mask’, ‘tear gas’} and the relevant list is {‘covid’, ‘comovairus’,’sars-cov-2’, ‘scast’, ‘social distance’}.
Next, we geolocate the tweets identified as relevant by mapping the user location strings to the Geonames ID by using custom string matching,5 and to U.S. zip codes by using the 'uszipcode' library.6 We apply basic pre-processing (i.e., stopword and non-ASCII character removal). First, we filter locations assigned to the U.S. territory by 'uszipcode' and
then the remaining ones by the Geonames library, aiming to recover the GPS coordinates of the smallest possible administrative area. We are able to locate \(1\,383\,729\) of the users within a U.S. state or Washington DC. Our sample is representative of the U.S. population distribution (2019 Census estimates) with a Pearson correlation of \(0.96\) (\(n\)=51), which suggests that there is little bias in the sampling of the state in terms of number of users.
The latter four filtering steps aim at excluding users with either too little engagement on the topic or those who post so much that they are likely to be business or automated accounts. We do not use the popular tool Bootometer, as a recent study on mask-related tweets shows it to mostly find active human Twitter users [11]. At the end of this process, we are left with \(647\,730\) users. Finally, we use the Twitter API Friends call to collect the information about whom these users follow ("followees" or "friends"), thus resulting in the coverage of \(598\,792\) users.
vaccine hesitancy case (Cossard et al., 2020).
Figure 2 shows the daily volume of tweets by users classified as either pro- or anti-mask. The two time series are highly correlated, with a Pearson correlation of \(r=0.938\). The engagement begins roughly at the time of the first CDC recommendation to wear masks on 2020-04-03. The increase in volume confirms a previous study which finds increased "appetite" to share opinions after major mask-related news (Cofras et al., 2021). The subsequent peaks often revolve around major news stories involving masks, such as one when the U.S. Vice-President Pence visited a hospital without wearing a mask towards the end of April,9 continuous comparisons of the masking behavior of the two contenders for the U.S. Presidency in late May,10 and subsequent adjustments to the guidelines by the public health officials who were trying to "correct" their previous messaging in mid-July.11 Given the high correlation of the two time series, we ask whether this effect is endogenous, i.e., if there is a causal feedback loop whereby one of the two stances answers the other. A Granger causality test on these two time series for time lags from 1 to 14 days finds no strong relationship in either direction. The likely implication of this negative result is that the volume of both stances has a common cause that is exogenous: external events that get discussed on Twitter. Although we cannot exclude an effect with lag shorter than one day, the fact that the networks of two stances are well separated makes this hypothesis less likely.
Footnote 9: [https://www.washingtonpost.com/politics/pence-meets-with-mayo-clinic-patients-staff-while-not-wearing-face-mask/2020/04/28/57c4200c-897e-11ea-9dfd-990f9dcc71fc.story.html](https://www.washingtonpost.com/politics/pence-meets-with-mayo-clinic-patients-staff-while-not-wearing-face-mask/2020/04/28/57c4200c-897e-11ea-9dfd-990f9dcc71fc.story.html)
Footnote 10: [https://edition.cnn.com/2020/05/26/opinions/biden-mask-trump-ghiitis/index.html](https://edition.cnn.com/2020/05/26/opinions/biden-mask-trump-ghiitis/index.html)
Figure 3 shows the daily volume of tweets by users classified as either pro- or anti-mask. The two time series are highly correlated, with a Pearson correlation of \(r=0.938\). The engagement begins roughly at the time of the first CDC recommendation to wear masks on 2020-04-03. The increase in volume confirms a previous study which finds increased "appetite" to share opinions after major mask-related news (Cofras et al., 2021). The subsequent peaks often revolve around major news stories involving masks, such as one when the U.S. Vice-President Pence visited a hospital without wearing a mask towards the end of April,9 continuous comparisons of the masking behavior of the two contenders for the U.S. Presidency in late May,10 and subsequent adjustments to the guidelines by the public health officials who were trying to "correct" their previous messaging in mid-July.11 Given the high correlation of the two time series, we ask whether this effect is endogenous, i.e., if there is a causal feedback loop whereby one of the two stances answers the other. A Granger causality test on these two time series for time lags from 1 to 14 days finds no strong relationship in either direction. The likely implication of this negative result is that the volume of both stances has a common cause that is exogenous: external events that get discussed on Twitter. Although we cannot exclude an effect with lag shorter than one day, the fact that the networks of two stances are well separated makes this hypothesis less likely.
Footnote 11: [https://edition.cnn.com/2020/07/12/politics/jerome-adams-surgeon-general-mask-mandate/index.html](https://edition.cnn.com/2020/07/12/politics/jerome-adams-surgeon-general-mask-mandate/index.html)
Footnote 12: [https://www.pewresearch.org/fact-tank/2020/10/29/both-replaces-and-democrats-cite-masks-as-a-negative-effect-of-covid-19-but-for-very-different-reasons/](https://www.pewresearch.org/fact-tank/2020/10/29/both-replaces-and-democrats-cite-masks-as-a-negative-effect-of-covid-19-but-for-very-different-reasons/)
Figure 4: Distributions of (a) polarity scores of users (computed via METIS), and (b) political leaning of users (inferred via follower relationships on Twitter) grouped by inferred stance on mask-wearing.
Figure 3: Fraction of anti-mask users over the total geolocated users in the 48 U.S. states.
Figure 2: Daily time series of number of tweets by users classified by their mask stance. Vertical line on mandate date.
score as \(S_{PL}=(N_{R}-N_{L})/(N_{R}+N_{L})\), which results in \(S_{PL}\in[-1,1]\) with \(1\) the most right-leaning score. Thus, we are able to identify the political leaning of \(18\,422\) users.
Figure 3(b) presents the distribution of political leaning scores for the two categories of users based on their stances on masks. Pro-mask users are more likely to be following left-leaning accounts, and anti-mask ones the right-leaning ones, with almost no users existing in the middle political ground. In fact, we find a strong polarization at the extremes of the political spectrum, especially for anti-mask users.
Moral Values.Moral values are directly linked to decision making [15]. Since there is an altruistic component to mask wearing, we ask what are the moral values expressed by the holders of the two stances. We assess the moral narratives by employing the MoralStrength lexicon [14], which holds the state-of-the-art performance in moral text prediction. MoralStrength lexicon provides, along with each lemma, the _Moral Valence score_, a numeric assessment that indicates both the polarity and the intensity of the lemma in each moral foundation. According to this lexicon, the Moral Valence is expressed in a Likert-scale from 1 to 9, with 5 to be considered as neutral. Scores lower than 5 reflect notions closer to Harm, Cheating, Betrayal, Subversion, and Degradation, while values higher than 5 indicate Care, Fairness, Loyalty, Authority, and Purity. For each lemma in a tweet and for each foundation, we obtain a moral valence score which is then averaged for each tweet. Negation correction was not applied, as foundation polarities do not directly translate as opposites (e.g., "not care" is not the same as "harm"). The MoralStrength lexicon has a limited linguistic coverage; as a result only the 41.5% of the tweets were found to express a moral foundation. For all the rest, we assigned the value 5, neutral point of the Likert scale. This approach pushes the observed mean towards the center of the scale, but captures the variability of the value across all documents (we discuss the implications of this methodological step in Discussion). To assess statistical significance, we use a Student's t-test (with the Benjamini-Hochberg correction for multiple hypothesis testing) on the scores obtained from tweets written before and after the mandate date (2020-04-03) for each moral dimension.
Figure 5 shows the mean moral value scores of each side in the periods before and after the mandate. Before, the two sides display comparably similar values, except for _care_, which is by far higher for the pro-mask side (significant at \(p<0.001\)). However, there is a clear shift in the moral narratives expressed after the mandate by both sides of the debate. To understand the context of these morally-charged expressions, we examine a sample of tweets for each side and value.
First, we find an increase in the valence of _authority_ for the anti-mask side (\(p<0.001\)), which is mostly accompanied by criticism and mistrust of the decisions made by the authorities. For instance, the anti-mask side associates wearing a mask with weakness and lack of leadership (_"@JoeBiden Real leadership? With that thing on you look feeble"_). Conversely, the pro-mask side sees a lack of leadership in former President Trump's refusal to wear a mask (_"Real leaders won't mind when the mask smears your absurd orange makeup"_). Most of the examples we find in this moral category are indeed criticism of the authorities, which nevertheless signifies that authority is held in high importance, especially for the anti-mask side (it is the value with the highest valence). The fact that post-mandate the authority-related keywords have higher valence on the anti-mask side suggests stronger criticism of the authorities than the pro-mask side (for whom the increase is significant only at \(p=0.004\) before the correction).
In terms of _care_, both sides have a downwards shift after the mandate. For the pro-mask side, this shift is accompanied by an increase in _fairness_ and _loyalty_, which can be interpreted as a shift in focus from personal choice based on caring for others to complying with the mandate. Also, in this case, the spottight is often on the opposite side (_"Masks protect others, which Trump doesn't care about. He cares only about himself."_). Conversely, anti-mask supporters express themselves by prioritizing much less the notion of _care_, explicitly showing disregard for the protection of others, or simply stating that they do not care about being criticized for not wearing a mask (_"I'm tired of being called a murderer because I don't wear a mask. Cannot understand how people are such thoughtless idiots"_).
In addition, pro-mask supporters express significantly more _loyalty_ in their messaging after the mandate (\(p<0.0001\)). Upon examining a sample of posts, we find the increase is primarily related to criticism of those not wearing masks as loyalists to a political affiliation or to Trump personally (_"People don't wear masks because of Trump. Dear leader doesn't wear one, loyal followers go along"_).
After the mandate, the _fairness_ value increases for both sides (both at \(p<0.0001\)). Narratives about the fair treatment of individuals appear to be present equally on both sides, focusing on negotiation when wearing masks is reasonable (pro-mask: _"What if a person not wearing it understands the risks?"_), comparison with other rights violations (pro-mask: _"The only "rights" that are violated is just not wearing a mask? They don't appreciate the nice life they have."_), or whether the criticism is fair (anti-mask: _"Colorado Governor Says People Who Refuse to Wear Mask are "Selfish Bastands" OK, and anybody who wants others to harm their immune system
Figure 5: Moral valence in narratives expressed by pro-mask and anti-mask users in the periods before (lighter points) and after (darker points) the mandate. Dot represents the median value while the whiskers represent 5-95% quantiles.
is a POS Marxist propagandist"_). Instead, the valence of _purity_ does not change substantially. We find that the notion of purity is not tied to religious views, but to what people consider 'natural' and 'healthy' (e.g. _"My 8 y.o. niece will be heading back to school but will have to wear a mask! How is it physically or emotionally healthy?"_).
We further examine the evolution of moral values expressed by the two sides in time and model it via an Interrupted Time Series (ITS) linear model. Figure 6 depicts the evolution of the average moral foundation score per side, with a vertical dashed black line indicating the date of the official mask mandate. Complementing our previous analysis, the interrupted time series model shows that for all the moral dimensions, after the mandate, there is an evident change in behavior by both sides. Perhaps the most interesting moral dimension is _loyalty_, whose signal is evidently diverging for two sides exactly after the mandate date and continues the same trend until the end of our data collection. We also observe that not only does the value of _care_ decreases, the trend is downward over time, signaling a progressive shift in the debate. Similarly, the value of _purity_ has a progressively negative trend for pro-mask side over time. Thus, we find that the temporal dimension of the data can be instructive about the evolution of the rhetoric in terms of divergence between the two sides of conversation and changes in emphasis.
Collectivism vs Individualism.One of the main purposes of mask wearing is the protection of others, an expression of solidarity within the in-group against an external threat. Thus, we turn to the Individualism-Collectivism (IC) dimension [11], which captures the standing of individuals as interdependent members of a collective. We operationalize it via the personal pronouns used in the tweets, mainly first-person singular ("I", "me","mine" etc.) and first-person plural ("we", "us", "ours" etc), following existing literature [20].
Table 2 shows the average usage of the two sets of pronouns in the tweets posted by the two sides of the debate, separately before and after the mask mandate. In the period before mask mandate, anti-mask supporters use \(I\) and other singular pronouns at 0.53, and after the prevalence increases to 0.60 (\(+0.07\), \(p<0.0001\)), which points to an increased focus on the individual. Instead, pro-mask supporters decrease their usage of singular pronouns (\(-0.07\), \(p<0.0001\)). The mention of _We_ and other plural pronouns by both pro- and anti-mask supporters remain at the same levels (\(\pm 0.01\)), thus indicating no significant shifts in the focus on the self-identified group. Thus, although having comparatively similar usages of singular pronouns before the mandate, the debate after the government's messaging becomes more individualistic for anti-mask side and less so for advocates of masking.
Information Environment.Finally, we turn to the information context where the debate takes place, in form of links to external sources and information from peers. We begin by performing basic normalization steps on the original tweet text, including removing punctuation, accents, contractions, and stopwords, substituting emojis with a text description,16 and finally by performing lemmatization. We then identify the most distinguishing words used by each side by calculating the odds ratio of using particular words (lemmas) by one side compared to the other before and after the mask mandate. We filter these words by their total frequency since otherwise rare terms would emerge as most distinguishing. Because the periods before and after the mandate have significant differences in volumes, we set the word frequency threshold to \(100\) and \(800\) for the two periods, respectively. Politician names appeared in the top terms and were grouped together, hence resulting in two-word terms. The resulting top terms are:
Footnote 16: Using emoji library [https://pypi.org/project/emoji/](https://pypi.org/project/emoji/).
* Pro-mask: _trumpvirus, Louie Gohmert, Putin, Herman Cain, wearadammmask, penny, Mayo, GOP, DeSantis, co-vidiots_
* Anti-mask: _riot, micron, virtue, leftish, loot, sheeple, unhealthy, bacterium, MSM, antifa_
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Pronoun & \multicolumn{2}{c}{Singular} & \multicolumn{2}{c}{Plural} \\ \cline{2-5} Mandate & before & after & before & after \\ \hline Pro-mask & 0.55 & 0.48 & 0.11 & 0.10 \\ Anti-mask & 0.53 & 0.60 & 0.09 & 0.10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Use of singular and plural personal pronouns in a tweet by side, before and after the mask mandate.
Figure 6: Time series of moral value scores of pro-mask and anti-mask users, along with an interrupted time series analysis model.
The most distinctive words by each side are highly politicized: aside from references to politiciticians (similar to other recent findings around masking Sanders et al. (2021)), we find several words used to attack the other side. For instance, pro-maskers often use the term 'trumpvirus' to refer to COVID-19, as a political response against the term 'chinavirus' used by President Trump: _"\(\langle\)realDonaldTrump #HermanCainRIP one more death due to #TrumpVirus. Just think. If only he wore a mask and NOT attended the Tulsa #Coronavirus-Rally"_. The focus on individuals is worth of notice, as in the case of Luie Gohmert, who strongly supported the use of hydroxychloroquine and attended a House Judiciary Committee hearing without wearing a mask, or in the case of Herman Cain, a Republican politician who opposed the mask mandate and later died of COVID-19.17 Similarly, the derogative term 'covidiots' is used to describe anti-mask supporters.
Footnote 17: [https://en.wikipedia.org/wiki/Herman_Cain#Health_and_death](https://en.wikipedia.org/wiki/Herman_Cain#Health_and_death)
The opposite side instead focuses on the 'riots' that would happen if lockdown and mask mandates are enforced, and on the claim that masks can stop a 'bacterium' but not a'micron'-size virus. There are also claims that masks are 'unhealthy' as they impede breathing, and are instead just 'virtue signaling' devices: _"Virtue signaling: mask and gloves. People lived with coronaviruses for 100+ years. Turn off the TV."_ A right-wing, anti-establishment sentiment can be inferred from the reference to'shepele': _"And the CDC telling everyone to mask up is just another test to see how long the sheep will obey. #Globalists"_, and from references to supposedly derogative terms such as 'leftish' and 'antifa'. Nonetheless, what is labeled as the anti-mask side is also more varied in its opinions, ranging from just hesitant to fully conspiratorial, somewhat similar to the spread of opinions around vaccines Cossard et al. (2020).
Moving to higher-level constructs, we aim to uncover common patterns in the argumentation proposed by both sides by applying a topic modeling approach base on latent Dirichlet allocation (LDA) Blei et al. (2003). Limitations of LDA clustering of short text are known, still, it offers a good compromise between clustering performance and computational cost Qiang et al. (2020).18 To derive the optimum number of topics \(k\), we optimize the topic coherency (\(C_{v}\) metric Roder et al. (2015)) models with \(k\in[2,10]\). For both the pro-mask and anti-mask sides, the model with \(k=3\) is the best fit. Table 3 presents the salient keywords that form the corresponding topics. The topics are ranked according to their prevalence, with T1 the most prevalent one, and similarly, the terms are ranked by descending importance for the specific topic.
Footnote 18: Using the Gensim library [https://radimrehurek.com/gensim](https://radimrehurek.com/gensim)
From the emergent topics, the most prominent one on the pro-mask side concerns the various interventions, including _social distancing_ and _wearing_ a mask. (_"COVID is not a flu! Everyone needs to wear a mask to protect others from these germs."_), echoing our earlier finding of higher care value of this side. The second topic includes references to _medicine_ and _science_, while the third centers around (Republican) political figures. On the anti-mask side, the most prominent topic also concerns the interventions, but instead focuses on whether interventions _work_ against the _spread_. The second one puts the mandate in the context of the _businesses_ and _stores_, in contrast with T1 from the pro-mask side which speaks about other interventions such as social _distancing_ and _staying_ at home. The third topic is about political _rallies_, and argues that masks are not useful (since the _eyes_ are exposed to the virus: _"Still, even with a mask, you are uncovering the mucous membranes in your eyes"_).
Finally, the two sides of the debate have about the same proportion of tweets with URLs (around 23%). Earlier works have shown a polarization in terms of news sources accompanying political polarization Garimella et al. (2021), however here we find differences in behavior beyond sharing news media. Table 4 shows the top domains of the URLs posted by pro- and anti-mask users, along with the counts. Pro-mask users overwhelmingly post URLs pointing to news websites or aggregators. YouTube and Instagram feature prominently in both lists, though anti-mask users favor YouTube more than twice the second most popular domain. Anti-mask tweets also link to a variety of business platforms, including Etsy and Ebay, and lesser-known ones such as Zazzle, a platform for custom-designed products. In addition, anti-mask users link to the governmental agency National Center for Biotechnology Information (NCBI), the New England Journal of Medicine (NEJM), and the Association of American Physicians and Surgeons (AAPS). These findings stand in contrast to a smaller recent study of geolocated-only tweets He et al. (2021) that finds that anti-mask tweets were less likely to share external information from public health authorities. An explicit comparison of the captured tweet sets would be necessary to resolve these findings.
## Discussion & Conclusions
Our analysis reveals that the government messaging about mask wearing provoked--instead of the intended focus on the benefits to the communities and society at large--a marked shift in the moral values towards higher politicization of the issue, with an increased focus on authority and fairness. We argue that interventions targeted to those resistant to
\begin{table}
\begin{tabular}{l l l} \hline \hline & Topic & Words \\ \hline \multirow{5}{*}{T1} & T1 & distance, wear, social, day, mandate, today, friend, \\ & (.35) & hope, state, stay \\ & T2 & wear, people, virus, covid, bad, protect, medical, \\ & (.33) & infect, spread, science \\ & T3 & wear, Trump, Cain, rally, covid, Herman, die, Tulsa, \\ & (.31) & death, kill \\ \hline \multirow{5}{*}{T1} & T1 & wear, people, covid, virus, work, distance, die, \\ & (.46) & thing, protect, spread \\ & T2 & wear, mandate, people, state, store, vote, business, \\ & (.31) & leave, today, order \\ \cline{1-1} & T3 & man, Trump, rally, school, eye, fuck, kid, hope, \\ \cline{1-1} & (.22) & stupid, big \\ \hline \hline \end{tabular}
\end{table}
Table 3: LDA topics with the most representative words, extracted separately from tweets for each mask debate side. The topic prevalence is reported in parenthesis.
mask-wearing should center around these values, instead of appealing to those valued less as the debate goes on.
It is no surprise that we find the mask debate captured on Twitter to be highly politically polarized. It is well-known that polarization in the U.S. political scene has been growing in the last decades,19 and this growing polarization has important effects on several areas, including public health. As polarization and health behaviors intersect, it is crucial to understand their interaction to design effective health policies. For instance, the ideological and moral facets of health intervention perceptions are closely related to the compliance in the population, and the resulting effectiveness, as shown for instance in recent statistics relating vaccination uptake to political leaning.20 However, shifting established associations between health-related beliefs and political leaning may be challenging. For instance, the attendees of Donald Trump's early in August 2021 booed him after he voiced support for the COVID-19 vaccination drive,21 thus illustrating that even the forerunners of the Republican party may encounter challenges in connecting with their constituents. As the pandemic develops, and more mandates are issued by the governments, constant polling and monitoring is essential in establishing the public response to these measures (as of mid-2021, the attitudes toward masking are still highly polarized).22
Footnote 21: [https://www.theguardian.com/us-news/2021/aug/22/donald-trump-rally-alabama-covid-vaccine](https://www.theguardian.com/us-news/2021/aug/22/donald-trump-rally-alabama-covid-vaccine)
Footnote 22: [https://www.politicico.com/news/2021/08/02/poll-americans-back-return-of-masking-502144](https://www.politicico.com/news/2021/08/02/poll-americans-back-return-of-masking-502144)
We find that different moral values underpin the reasoning emphasized by the two camps. Pro-mask arguments highlight loyalty and fairness while criticizing the opposing leadership. The anti-mask ones, on the other hand, focus on the authoritarian and oppressive aspects of the mandate and show a lack of concern for the effects of their actions on others. In accordance with these results, when examining this phenomenon through the lenses of the Collectivism-Individualism theory, we notice a decisive shift of the anti-mask community towards individualism, with more intense use of first-person personal pronouns. We note the lack of loyalty among the values emphasized by the anti-mask side, which tends to hold a conservative political view, and differs from the commonly observed ones associated with conservatism: authority, loyalty, and purity (Kivikangas et al., 2021; Fulgoni et al., 2016). This may be a response to the pro-social framing of the mask intervention, thus leading its opponents to de-emphasize the in-group narrative usually common to their side. This interpretation may point to motivated reasoning, wherein the desired conclusion modifies the worldview usually taken. Interestingly, the ITS analysis shows divergence over time between the two groups on the value of loyalty, which is alarming since such disagreement, if not adequately addressed, can lead to severe societal polarization.
This polarization seems to have been accompanied by a lively commercial activity. When examining the links posted by the two sides of the issue, we find a prominent existence of commerce-related platforms including _etsy.com_, _ebay.us_, as well as platforms for custom creation of merchandise such as _zazle.com_. Indeed, a brief search on these websites reveals an assortment of t-shirts, coffee mugs, face masks, and baby bibs with political messages from both sides of the debate. Sometimes historical symbols were used to make a stance, such as the use of the yellow star--like those forced on Jews by Nazi Germany--which was sold by a Nashville store protesting against the vaccination campaign. After community criticism, the item was removed23. Our findings suggest that there is an active development of symbolism and aesthetics of the resistance movement, and it would be a fascinating subject of research to uncover the non-verbal representations of the self and the group, expressions of values, and calls to action (McGarry et al., 2019; Awad and Wagoner, 2020). Awareness of such symbolism and self-conceptualization is vital for crafting appropriate messages and fostering communication between the two sides.
Footnote 23: [https://www.bbc.com/news/world-us-canada-57297902](https://www.bbc.com/news/world-us-canada-57297902)
Despite the volume and span of the data analyzed here,
\begin{table}
\begin{tabular}{r r r r} Pro-mask & \multicolumn{3}{c}{Anti-mask} \\ rawstory.com & 2317 & youtube.com & 3485 \\ cnn.com & 2000 & thegatewaypundit.com & 1341 \\ youtube.com & 1751 & ets.yme & 1210 \\ washingtonpost.com & 1393 & instagram.com & 912 \\ a.msm.com & 935 & foxnews.com & 903 \\ apple.news & 872 & zazzle.com & 796 \\ huffpost.com & 758 & breitbar.com & 781 \\ news.yahoo.com & 686 & nypost.com & 472 \\ flip.it & 630 & finetamerica.com & 453 \\ nytimes.com & 587 & fxn.ws & 393 \\ nbcnews.com & 573 & westernjournal.com & 362 \\ thehill.com & 527 & dlvr.it & 344 \\ dailykos.com & 486 & pixels.com & 317 \\ instagram.com & 458 & buff.ly & 288 \\ businessinsider.com & 449 & theblaze.com & 282 \\ thedailybeat.com & 402 & bizpcareview.com & 268 \\ newsweek.com & 371 & infowars.com & 250 \\ theguardian.com & 343 & ets.com & 246 \\ usatoday.com & 337 & ift.tt & 236 \\ yahoo.com & 333 & cnn.com & 231 \\ cnbc.com & 307 & twitchy.com & 217 \\ politicito.com & 301 & ebay.us & 202 \\ newsbreakapp.com & 295 & ncbi.nlm.nih.gov & 196 \\ buff.ly & 260 & facebook.com & 192 \\ npr.org & 252 & nejm.org & 191 \\ politictususa.com & 236 & newsbreakapp.com & 175 \\ apnews.com & 231 & a.msn.com & 173 \\ nypost.com & 223 & google.com & 162 \\ latimes.com & 209 & dennismichaellynch.com & 152 \\ mol.im & 208 & aapsonline.org & 142 \\ \end{tabular}
\end{table}
Table 4: Counts of the top 30 URL domains posted by pro- and anti-mask users. Domains colored by class: news and news aggregators (black), social media and social media automator/aggregators (red), business platforms (blue), medical organization (green).
this study has some limitations as most social media studies. Especially when concerning politically charged topics, often a minority of vocal users dominates the conversation (Mustafaraj et al., 2011), thus making the opinions of the "silent majority" challenging to discern. The observed relationships between the stances on masking, moral values, and political stance are limited to those who choose to express themselves vocally. In addition, we use a high-precision vocabulary to measure moral values, which has the side effect of pushing the average valence in our results close to the neutral point, due to a large number of tweets for which we cannot extract moral valence. Traditional surveys are necessary to reach those not as comfortable expressing their opinions online; however, even those have selection biases. The automated tools utilized in this study are not perfect: after manual examination, we find that the network-based classifier achieved an accuracy of 72.4% for the anti-mask class, thus introducing noise in the subsequent analysis. However, from the experience of manually labeling the users, we postulate that the complexities of human expression, including humor and sarcasm, may limit the best possible performance of such a classifier. Similarly, the inference of moral values from the text may struggle with vocabulary mismatch, sarcasm, and self-censorship. Finally, the generalizability of this study is limited by the unique circumstances of an unprecedented global pandemic happening in a hyper-connected world during a politically polarized social environment. These considerations must be taken into account when comparing our findings to new scenarios.
### Ethical Considerations
The dataset presented here contains only tweets which were publicly available at the time of the collection. We make the dataset publicly available to the research community in compliance with the Twitter Terms of Service,24 that is, sharing only the tweet IDs of the collected posts, which will have to be re-collected. In this paper, we have rephrased all quoted tweets to prevent re-identification of their authors. This practice ensures that the tweets which have been removed (either by the user or the platform) will not be available. Although large, this dataset does not include users with particular disabilities which may disallow them to interact with the platform, as well as minors and those blocked by Twitter. On the other hand, the content collected here affects not only those who have posted it, but also those who viewed or interacted with it, which may be orders of magnitude more users, since most users are "lurkers" who consume social media content without posting (Van Mierlo, 2014). Ultimately, the masking decisions made by people engaging in this deliberation may directly affect the health and life of vulnerable people, such as those with autoimmune disorders or other conditions making them especially vulnerable to COVID-19. Additionally, the sometimes aggressive rhetoric in this material may not be suitable for young Twitter users, or those dealing with mental health issues. Also, although the focus of this study is the moral dimension of the debate, we caution public health communicators not to overemphasize moral or emotional dimensions of their message (or attempt to emotionally manipulate their audience), but rather provide the clearest and most informative messaging possible. Further, we would like to discourage the tools used in this study to be used for targeting individuals espousing particular opinions for harassment or undue surveillance, and to follow the AAAI ethical guidelines25 in the application of these findings.
Footnote 24: [https://developer.twitter.com/en/developer-terms/](https://developer.twitter.com/en/developer-terms/)
Footnote 25: [https://www.aaai.org/Conferences/code-of-ethics-and-conduct.php](https://www.aaai.org/Conferences/code-of-ethics-and-conduct.php)
|
2301.03760 | Over-The-Air Adversarial Attacks on Deep Learning Wi-Fi Fingerprinting | Empowered by deep neural networks (DNNs), Wi-Fi fingerprinting has recently
achieved astonishing localization performance to facilitate many
security-critical applications in wireless networks, but it is inevitably
exposed to adversarial attacks, where subtle perturbations can mislead DNNs to
wrong predictions. Such vulnerability provides new security breaches to
malicious devices for hampering wireless network security, such as
malfunctioning geofencing or asset management. The prior adversarial attack on
localization DNNs uses additive perturbations on channel state information
(CSI) measurements, which is impractical in Wi-Fi transmissions. To transcend
this limitation, this paper presents FooLoc, which fools Wi-Fi CSI
fingerprinting DNNs over the realistic wireless channel between the attacker
and the victim access point (AP). We observe that though uplink CSIs are
unknown to the attacker, the accessible downlink CSIs could be their reasonable
substitutes at the same spot. We thoroughly investigate the multiplicative and
repetitive properties of over-the-air perturbations and devise an efficient
optimization problem to generate imperceptible yet robust adversarial
perturbations. We implement FooLoc using commercial Wi-Fi APs and Wireless
Open-Access Research Platform (WARP) v3 boards in offline and online
experiments, respectively. The experimental results show that FooLoc achieves
overall attack success rates of about 70% in targeted attacks and of above 90%
in untargeted attacks with small perturbation-to-signal ratios of about -18dB. | Fei Xiao, Yong Huang, Yingying Zuo, Wei Kuang, Wei Wang | 2023-01-10T02:37:23Z | http://arxiv.org/abs/2301.03760v1 | # Over-The-Air Adversarial Attacks on Deep Learning Wi-Fi Fingerprinting
###### Abstract
Empowered by deep neural networks (DNNs), Wi-Fi fingerprinting has recently achieved astonishing localization performance to facilitate many security-critical applications in wireless networks, but it is inevitably exposed to adversarial attacks, where subtle perturbations can mislead DNNs to wrong predictions. Such vulnerability provides new security breaches to malicious devices for hampering wireless network security, such as malfunctioning geofencing or asset management. The prior adversarial attack on localization DNNs uses additive perturbations on channel state information (CSI) measurements, which is impractical in Wi-Fi transmissions. To trans transced this limitation, this paper presents FooLoc, which fools Wi-Fi CSI fingerprinting DNNs over the realistic wireless channel between the attacker and the victim access point (AP). We observe that though uplink CSIs are unknown to the attacker, the accessible downlink CSIs could be their reasonable substitutes at the same spot. We thoroughly investigate the multiplicative and repetitive properties of over-the-air perturbations and devise an efficient optimization problem to generate imperceptible yet robust adversarial perturbations. We implement FooLoc using commercial Wi-Fi APs and Wireless Open-Access Research Platform (WARP) v3 boards in offline and online experiments, respectively. The experimental results show that FooLoc achieves overall attack success rates of about 70% in targeted attacks and of above 90% in untargeted attacks with small perturbation-to-signal ratios of about -18 dB.
Adversarial attack, indoor localization, deep learning
## I Introduction
In wireless networks, accurate device location information is increasingly desired to support many security-critical applications, such as device authentication and access control [1, 2]. To achieve this, Wi-Fi fingerprint based indoor localization recently has gained astonishing performance via benefiting from the advances in deep neural networks (DNNs) [3, 4, 5, 6], which, however, are shown to be susceptible to adversarial attacks [7, 8, 9]. In such attacks, minimal perturbations on genuine input samples can steer DNNs catastrophically away from true predictions. By exploiting these vulnerabilities, malicious devices have the potential to manipulate their localization results and cause the breakdown of wireless geofencing [10, 11], asset management, and so on. Thus, it is of great importance to investigate the extent to which DNN powered indoor localization is vulnerable to adversarial attacks in the real world.
Despite the great importance, no existing study explores over-the-air adversarial attacks on indoor localization DNNs in the physical world. The prior work [12] investigates adversarial attacks on indoor localization DNNs and simply adds perturbation signals to original signals likewise generating adversarial images in the computer vision domain. However, additive perturbations can not characterize the impact of Wi-Fi training signals on CSI measurements, thus rendering them infeasible in over-the-air attacks. Moreover, these approaches [13, 14] trigger attacks by directly converting genuine CSI fingerprints into targeted ones, which are suitable for attacking single-antenna APs. Yet, they are physically unrealizable in widely-used multi-antenna Wi-Fi systems due to the one-to-many relationship between transmitting and receiving signals. In addition, this study [15] proposes a CSI randomization approach to distort device location information. Though this approach can trigger untargeted adversarial attacks, it lacks the capability of misleading location predictions close to chosen spots, i.e., targeted attacks. In addition, the random perturbations are not smooth and will cause significant disturbance in the original signals, rendering them easy to be detected. Thus, no existing work is suitable for launching adversarial attacks on Wi-Fi fingerprinting DNNs in the real world.
In this paper, we investigate a new type of adversarial attack that deceives indoor localization DNNs over realistic wireless channels. In particular, our attack model includes a Wi-Fi AP and an attacker. The AP holds a well-trained DNN for indoor localization using uplink CSI signatures as inputs. The attacker, i.e., a malicious client device, manipulates its Wi-Fi training signals and transmits them to the AP over the air, with the purpose of fooling the localization DNN. In this way, the AP receives the falsified signals from the attacker, generates perturbed uplink CSI signatures, and feeds them into the DNN for device localization. As demonstrated in Fig. 1, over-the-air attacks can rise severe security issues in wireless networks. An outside attacker can be empowered to break the geofencing of a Wi-Fi AP by camouflaging itself within authorized areas to gain wireless connectivity. Moreover, an attacker can bypass Sybil attack detection to deplete valuable bandwidth by pretending multiple fake clients at the same location [16, 17].
We argue that the major obstacle to realizing such over-the-air adversarial attacks is that the uplink CSI estimated at the victim AP is unknown to the attacker and thus effective channel perturbations cannot be generated before each attack. To tackle this problem, we observe that the similarity between uplink and downlink CSIs can be exploited for launching adversarial |
2310.04996 | Experiences with CAMRE: Single-Device Collaborative Adaptive Mixed
Reality Environment | During collaboration in XR (eXtended Reality), users typically share and
interact with virtual objects in a common, shared virtual environment.
Specifically, collaboration among users in Mixed Reality (MR) requires knowing
their position, movement, and understanding of the visual scene surrounding
their physical environments. Otherwise, one user could move an important
virtual object to a position blocked by the physical environment for others.
However, even for a single physical environment, 3D reconstruction takes a long
time and the produced 3D data is typically very large in size. Also, these
large amounts of 3D data take a long time to be streamed to receivers making
real-time updates on the rendered scene challenging. Furthermore, many
collaboration systems in MR require multiple devices, which take up space and
make setup difficult. To address these challenges, in this paper, we describe a
single-device system called Collaborative Adaptive Mixed Reality Environment
(CAMRE). We build CAMRE using the scene understanding capabilities of HoloLens
2 devices to create shared MR virtual environments for each connected user and
demonstrate using a Leader-Follower(s) paradigm: faster reconstruction and
scene update times due to smaller data. Consequently, multiple users can
receive shared, synchronized, and close-to-real-time latency virtual scenes
from a chosen Leader, based on their physical position and movement. We also
illustrate other expanded features of CAMRE MR virtual environment such as
navigation using a real-time virtual mini-map and X-ray vision for handling
adaptive wall opacity. We share several experimental results that evaluate the
performance of CAMRE in terms of the network latency in sharing virtual objects
and other capabilities. | Hung-Jui Guo, Omeed Eshaghi Ashtiani, Balakrishnan Prabhakaran | 2023-10-08T03:48:04Z | http://arxiv.org/abs/2310.04996v1 | # Experiences with CAMRE: Single-Device Collaborative Adaptive Mixed Reality Environment
###### Abstract
During collaboration in XR (eXtended Reality), users typically share and interact with virtual objects in a common, shared virtual environment. Specifically, collaboration among users in Mixed Reality (MR) requires knowing their position, movement, and understanding of the visual scene surrounding their physical environments. Otherwise, one user could move an important virtual object to a position blocked by the physical environment for others. However, even for a single physical environment, 3D reconstruction takes a long time and the produced 3D data is typically very large in size. Also, these large amounts of 3D data take a long time to be streamed to receivers making real-time updates on the rendered scene challenging. Furthermore, many collaboration systems in MR require multiple devices, which take up space and make setup difficult. To address these challenges, in this paper, we describe a single-device system called Collaborative Adaptive Mixed Reality Environment (CAMRE). We build CAMRE using the scene understanding capabilities of HoloLens 2 devices to create shared MR virtual environments for each connected user and demonstrate using a Leader-Follower(s) paradigm: faster reconstruction and scene update times due to smaller data. Consequently, multiple users can receive shared, synchronized, and close-to-real-time latency virtual scenes from a chosen Leader, based on their physical position and movement. We also illustrate other expanded features of CAMRE MR virtual environment such as navigation using a real-time virtual mini-map and X-ray vision for handling adaptive wall opacity. We share several experimental results that evaluate the performance of CAMRE in terms of the network latency in sharing virtual objects and other capabilities.
**Index Terms:** Human-centered computing--Mixed / augmented reality; Human-centered computing--Collaborative interaction
## 1 Introduction
Multi-user collaboration in Virtual Reality (VR) and Mixed Reality (MR) has potential applications in a large variety of different fields, such as education [13] and industrial settings [24]. For physically distributed users, collaboration systems in Virtual Reality (VR) include networked, persistent, immersive, and virtual environments [26]. Although this system is primarily for VR, the concept has been extended to MR. For example, in [25], the authors built a collaborative system combining Augmented Reality (AR) and VR devices to enable collaboration among users accessing different devices. By utilizing a Kinect camera, [7] captured the user's motion and projected it onto a humanoid robot located in the collaborator's physical space to create an MR collaboration system. In the context of MR, users typically obtain views of their own surrounding physical environments while interacting with virtual objects. Therefore, unlike collaborative VR environments, MR collaboration systems will face several further issues.
Only a limited number of existing MR collaboration systems handle cases where some of the users' physical environments differ from the collaborators' current physical environments [38]. Under such conditions, during the collaboration process, one of the users may move or rotate the virtual object to a place or position where the collaborator cannot see or operate it, which might have a negative effect on the collaboration process; an example is shown in Figure 2.
### _Challenges for Creating Physical Environment-based MR Collaboration System_
Creating a collaborative mixed reality system that employs a shared virtual environment based on the user's physical environment could lead to further challenges:
Figure 1: High-level architecture overview of the single-device CAMRE framework with external unify networking framework server.
* Computationally expensive to construct a 3-dimensional virtual environment based on the physical environment due to large data size; for instance, it would take about 100 Megabytes for a complete 3D indoor environment.
* Constructing a shared virtual environment to build a collaboration system often requires setting up multiple devices, which could be difficult for users unfamiliar with MR to get started and use the collaboration system.
* Transferring large-scale 3D environments can result in large streaming and update latencies over the Internet.
* Users tend to only use the virtual contents within their line of sight for collaboration and may have a limited understanding of the entire environment in MR virtual environments. This limited understanding of the overall environment might restrict physical movements and constrain the usage of the collaboration system.
### Collaborative Adaptive Mixed Reality Environment (CAMRE)
To address the above challenges, we established a Collaborative Adaptive Mixed Reality Environment (CAMRE) system with one single MR device - Microsoft Hololens 2 [16] to not only share virtual objects but also share the virtual environment established based on one user's physical environment and share with other users that are connected to the same server, see Figure 1 for an overview. To address the first challenge of using large 3D environments, instead of constructing the whole physical environment into mesh data, we use the scene understanding feature in the Microsoft Mixed Reality Toolkit (MRTK) [17] to build virtual objects and a virtual environment based on the objects and geometry of the physical environment, which reduces data size from around 100 Megabytes (a living room size 3D mesh data) to approximately 0.3 Megabytes (a living room size scene understanding-based virtual environment). By utilizing the scene understanding feature to create a virtual environment, users are only required to deploy the CAMRE system on their HoloLens 2 device instead of setting up multiple sensors in the environment where they are located, which tackles the second challenge in terms of ease of use.
Based on the created small-sized virtual environment, when users physically move in their physical environment, CAMRE can update the virtual environment accordingly by transmitting a small amount of data. On the basis of this system structure, we incorporated a **Leader-Follower paradigm**; the Leader is responsible for observing and creating an MR virtual environment and sharing with multiple Followers through a networking framework. CAMRE helps Leader and Followers share the same knowledge of the virtual environment, thereby preventing users from moving virtual objects to places where other collaborators cannot see. With CAMRE, the Leader can stream virtual information to Followers with minimal data usage, addressing the third challenge listed above.
### CAMRE's Expanded Navigation Features
CAMRE with its Leader-Follower paradigm provides a method for one Leader and multiple Followers to collaborate in a shared virtual environment. However, since the occlusion of the wall objects generated from the Leader's side, users might stay in the initial room due to a limited understanding of other rooms, which will lead to the fourth challenge. Therefore, to tackle this challenge, we incorporated three expanded navigation features into the CAMRE system to provide an overview of the created virtual environment before physically moving to the destination to assist navigation and increase usability.
* **Dynamic X-ray vision**: Allows users to see through the surrounding obstacles to gain additional information about another room. (This feature was published separately in our demo paper; to ensure anonymity in reviewing process, we marked the authors' names as A. Anonymous in [2].)
* **Complete see-through virtual environment**: Virtual walls will become partially transparent whenever the user approaches within 3 meters to provide information about all other rooms within range.
* **Real-time mini-map**: Leader can observe and build the entire CAMRE MR virtual environment to share with Followers. Followers can explore either with the Leader or _independently_ without following the Leader. This is facilitated by the real-time mini-map that shows the bird's eye view of the whole virtual environment and provides a complete view separately for each user. This mini-map feature makes an explicit assumption that such a complete view is available before starting the collaboration process and is given to the Followers.
When users are immersed in a virtual environment, their movements and interactions are significantly influenced by human depth perception [8, 14]. Unlike real objects with fixed images such as size and color in the human brain, users in virtual environments frequently lack adequate references to make accurate judgments regarding the depth perception of virtual objects due to the Vergence-accommodation conflict. Therefore, providing users with additional depth information in the virtual environment could help users have a better understanding of the surroundings. For example, [9] presented a series of virtual environment underestimation experiments to suggest that visual information is an important source of information for the calibration of movement. In CAMRE, besides providing an overview of the virtual environment, the three expanded features could also provide additional depth information to assist users with navigation further. Dynamic X-ray vision can provide motion parallax since the X-ray vision window is moving with the user's eye gaze direction. A complete see-through virtual environment can provide distance perception due to the 3-meter setting that makes virtual walls partially transparent when users move within 3 meters of distance. Real-time mini-map provides camera position and field-of-view (FOV) options for users to adjust to provide relative distance and scale of the virtual objects. Here, we make an explicit assumption that the needed information such as the scene behind the obstacles is available (perhaps, through a pre-captured database) to the user. Detailed information will be provided in section 3.2.
### Contributions
We designed an exhaustive set of experiments with a primary objective of measuring latencies incurred during a Leader-multiple Followers collaboration over the Internet involving different distances among the collaborators. These experiments were conducted with varying factors such as room sizes, networking frameworks for sharing virtual environments, different distances between the Leader and Followers, and the number of simultaneous network connections. We make the following contributions through the created CAMRE framework:
Figure 2: (a) Example of one user may move or rotate the virtual object to a place or position where (b) the collaborator cannot see or operate it.
1. Implemented a user-friendly, single-device setup system for users who are unfamiliar with the MR system.
2. Dynamically update virtual environment based on the corresponding physical environment with low data scale and low building time.
3. Low streaming latency sharing virtual environment from Leader to multiple Followers to achieve Leader-Follower paradigm.
4. Provide expanded navigation features to help users gain an overview of the created virtual environment to provide additional scene information and depth information.
5. The extensive performance evaluation carried out on CAMRE involves two commonly used Unity networking frameworks and the performance results can serve as a benchmark network for other similar, future systems.
Although some previous works created collaborative systems across multiple AR/VR/MR systems, to the best of our knowledge, the CAMRE system may be one of the earliest MR collaborative systems that include dynamic environmental updates and real-time ability.
### _Using CAMRE_
We will make the CAMRE system software to be available as open source (after the paper is published). The CAMRE system along with the planned future work described in Section 6 can be very useful for the research community as well as application developers dealing with collaborative use cases such as training and tele-mentoring using MR. We will also make the experimental data reported in Section 5 publicly available. The research community can use this data as a benchmark for comparing similar approaches. The data pertaining to network latencies in Section 5 can also be used for trace-driven simulation for human subject studies in Internet-based collaborative MR applications.
### _Limitations of Our Work_
We also acknowledge some important limitations:
1. Some previous MR collaboration systems (reported above) handle cases where some of the users' physical environments differ from the collaborators' current physical environments. However, CAMRE specifically employs a single Leader-multiple Followers paradigm, resulting in a common, shared virtual environment for all the users. While this could be a limitation for some use cases, the shared common virtual environment could be advantageous for training or telementoring types of applications.
2. The performance studies reported in Sections 4 and 5 have been focused on network latencies in collaboration over the Internet. We have not carried out human subject studies to understand their perception of the effect of _degraded_ (or small-data sized) virtual environments, nor on the effect of varying Internet latencies.
3. In a similar manner, our work has not evaluated the human perception of the effect of such a _synchronized_ virtual environment as that of CAMRE. For instance, when the Leader moves to a different environment, the Followers' view/understanding of the virtual environment would also change accordingly even though they (the Followers) never move. This unexpected change in the environment might affect user experience and/or cause VR sickness.
The above aspects of human perceptions need to be evaluated thoroughly. Considering the need for detailed and exhaustive user perception studies, we plan to do this as a future, separate research work. As mentioned earlier, will use the network latencies reported in Section 5 to emulate Internet-level collaboration for these human perception studies.
## 2 Related Work
Many studies have been conducted to develop multi-user collaboration systems in AR/VR/MR that enable effective remote collaboration, particularly during the COVID-19 pandemic. One of the most common types of collaboration systems involves creating a virtual environment where users can immerse themselves and interact with other users' avatars to achieve collaborative outcomes. The concept of this type of system was proposed and discussed in 1998 [3] as "collaborative virtual environments," which used networked virtual reality systems to support group work. More recently, various techniques have been used to achieve collaborative virtual environments; for example, [32, 33] presented a 360Drops system to provide 360 video sharing and 3D reconstructed scenes with photo-bubble to provide environment details. Additionally, researchers have been working on developing cross-reality systems to enable multiplayer collaboration across various AR and VR devices [25, 30]. [40] developed a VRGit system to facilitate multi-user collaboration in VR, which helps users manage the different versions of 3D content in virtual environments, making it easier for them to collaborate effectively. With this system, users can easily keep track of the modifications made to the virtual environment and manage the different versions of the content. To conclude the development of collaborative virtual environments, a comprehensive survey on collaboration and communication systems was conducted by [5] and [28] to provide insights into the different functionalities, advantages, and disadvantages of each collaboration system.
However, existing works rarely focused on sharing the whole surrounding physical environment, which could lead to the occlusion issue addressed in Figure 2 and reduce collaboration efficiency and freedom of moving in the virtual environment. Still, some previous works have tried to share the surrounding physical environment to achieve collaboration; for instance, [6] proposed a model to include users' surrounding physical environment into the virtual environment to build a collaborative environment in the VR world by taking into account the physical features and embedding them in the virtual environment. The authors of [21] introduced a system called PLANWELL that utilized handheld AR devices for scanning outdoor geographical data by an exploretor to create a 3D model, which could be shared with an overseer for remote collaboration. Although this system is similar to our Leader-Follower paradigm, it has a higher data transfer time (2.4 seconds), which might not be suitable for real-time collaboration. [39] presented a DistanciAR system that captured and created a remote environment with a LiDAR camera for viewing from a different location and improved the interface by adding Dollhouse (bird's-eye view) and Peek modes. However, the time spent using the complete system took around 13 minutes, which may pose a challenge when used for collaboration purposes. Most recently, [34] presented a 3D MR remote collaboration prototype system by scanning the surrounding physical environment. However, to achieve real-time collaboration, this system utilized AR and VR head-mounted devices with three depth cameras to build 3D models by utilizing pre-scanned reconstructed 3D mesh models of a room-scale workspace, which could be challenging for regular users to set up.
In addition, other research works have tried to use humanoid robots to accomplish multi-player collaboration to perform actions captured by users that could potentially solve the occlusion issue. For example, [20] used humanoid robots to imitate users' activities as surrogates to achieve cross-country collaboration, and [22] proposed a system integrating humanoid robots and video streams to build an MR-like collaborative environment for remote collaboration. To achieve better human-robot collaboration, [7] suggested that robots should be capable of perceiving and parsing a scene's information in real-time. The authors claimed that such environmental parsing is typically divided into three categories: Scene graph, 2D map representation, and 3D map representation, which
echoes back to our scene understanding-based CAMRE system that performs scene understanding to build scene graphs with 3D map representation by reconstructing a corresponding 3D model from the understood information, and real-time mini-map to achieve 2D map representation.
The above summary of past and recent multi-user virtual environment works demonstrates a focus on sharing the same virtual environment in AR/VR. We believe that utilizing MR to immerse individuals in a common virtual environment based on their specific physical surroundings can provide additional information to facilitate collaboration. Although similar collaborative systems and expanded features have been presented in other works built in AR/VR, our CAMRE system is one of the few collaborative systems built in MR, enabling users to interact with both the virtual environment and their physical environment. Furthermore, CAMRE utilizes a low data scale virtual environment to achieve low data transfer latency while still providing real-time collaboration with expanded features to enhance user experience and provide user-friendliness by providing accessible setup attributes with a single device. In the following sections, we will focus on the detailed settings and evaluations of the CAMRE MR virtual environment with expanded low-latency sharing and expanded navigation features based on the surrounding physical environment.
## 2 CAMRE System Design
As mentioned earlier, in CAMRE, we employ a Leader-Follower paradigm through which MR environments are adaptively generated with low latency based on the physical environment of the Leader and shared using the open-source network frameworks in Unity (Unity Netcode for Gameboject [35] with Unity Relay [36] and Photon Unity Networking [23]) for collaboration among multiple users. Instead of making every user hold the same level of authority, the Leader-Follower paradigm is employed to avoid multiple users building and sharing their virtual environment to cause virtual environment overlays, and only the Leader is authorized to observe and create the virtual environment. Next, we add three expanded features (dynamic X-ray vision window, complete see-through virtual environment, and a real-time mini-map; see below) to help users navigate and gain depth cues of the adaptive MR environments. This system is designed and built on the Microsoft HoloLens 2 and HoloLens Unity emulator.
### _MR Virtual Environment Adapting to Physical Environment_
Scene understanding is a pre-built feature in the MRTK (Mixed Reality Tool-Kit from Microsoft) [17] that is often used to observe and understand the target physical environment to obtain information and analyze it. This feature utilizes the spatial mapping feature of HoloLens 2, which uses a long-throw depth camera to capture the structure of the physical environment when users walk around and scan the surroundings to create multiple virtual flat planes (the virtual flat planes will be referred to as "scene objects" in the following articles) that align with the corresponding physical flat planes to create a complete MR virtual environment, as shown in Figure 3 (a). In the CAMRE framework, we integrate this scene understanding feature to generate virtual environments that dynamically adapt to the changes in the Leader's physical environment in close-to-real-time. Users can operate the virtual environment update settings on the control panel to manually update or auto-update with specific time intervals (5 seconds in default, can be changed). The scene understanding feature in MRTK generates simple virtual planes to construct the virtual environment while preserving proper scene information. As a result, the data size of the created virtual environment is relatively small (about 0.3 Megabytes) compared to standard virtual room mesh data (around 100 Megabytes). For instance, in a recent study [10], an MR collaboration system was developed where user avatars were built and shared as mesh data, with the smallest data taking up 0.4 Megabytes, which is higher than the size of our scene understanding-based virtual environment. Therefore, system load and environment creation time can decrease when a user moves and updates the virtual environment with low latency. In addition, these dynamically generated MR environments are shared with a set of Followers to facilitate collaboration. Whenever the Leader moves to or faces a new and unobserved physical environment, our system will update the virtual environment dynamically. Each scene object created or updated in the virtual environment on the Leader's HoloLens 2 will immediately update to the server and forward to all Followers when Leader creates it. Due to the small data size, the sharing process from Leader to Followers will have close-to-real-time latency. Followers can see the exact same MR environments in their devices to gain the same environmental information as the Leader; scene objects received from the Leader's side are shown as gray color only, which indicates that the current user is not the creator of these objects to avoid confusion, as shown in Figure 3 (b).
#### 2.1.1 Network Framework
In CAMRE, we used two state-of-the-art Unity networking frameworks, including Unity Netcode for Gameboject with Unity Relay and Photon Unity Networking, to transfer observed scene objects from Leader to an external server and then to Followers to accomplish remote collaboration. We have built the CAMRE system on two different frameworks to compare the network latency and ease of use, which will be evaluated in a later section.
1. **Netcode for Gameboject** is the latest (first released in June 2022) and highly recommended package built by Unity for multiplayer networking that enables the system to synchronize virtual objects' position, rotation, and scale. Since Netcode for GameObject only supports local network connections without requiring modification of the user's router, to avoid complicated operations to ensure user-friendliness, we use the Unity Relay, a Unity-registered third-party package, for external network connections. The combined system allows up to 50 concurrent users for free ($0.16 per additional user) but requires Leader to send access codes to Followers externally.
2. Another networking framework we used is **Photon Unity Networking**, a primarily recommended multiplayer network framework for HoloLens 2 to perform multi-user collaboration. This system offers similar functionality as it allows virtual objects' position, rotation, and scale sharing through Photon server, which allows users to join a preset room without exchanging external messages. However, the free version of the system supports only 20 concurrent users. (up to 2000 concurrent users for $370).
### _CAMRE's Expanded Navigation Features_
In this section, we describe three expanded features of CAMRE to provide an overview and depth perception of the virtual environment that can assist in user navigation and understanding of the entire environment. Here, we make the following two assumptions for users before starting to use the expanded navigation features:
1. CAMRE MR virtual environment is observed and built completely by the HoloLens 2.
2. Information behind the obstacles is available to the user.
#### 2.2.1 Dynamic X-ray Vision Window
In order to provide additional information and depth perception (such as motion parallax) of the surrounding scene to the users, we built a dynamic X-ray vision window [2]. This feature allows users to directly see through the obstacles in front of them in the CAMRE MR virtual environment while still retaining a complete view of the surrounding environment to obtain information behind
obstacles by utilizing the clipping primitive feature in MRTK. By attaching the clipping primitive onto selected virtual objects to mimic a physical window and make the contact area partially transparent, users have the ability to gain information behind obstacles before physically moving to other rooms. To provide customization and avoid potential motion sickness, users can dynamically change the X-ray vision window's size with a slider to best fit their current viewing needs. Furthermore, we used the eye-tracking function in HoloLens 2 to allow the X-ray vision window to follow the eye-gaze direction, dynamically updating the window and making it move smoothly and quickly. Having the X-ray vision window updates based on the user's position, movement, and eye gaze can provide motion parallax to more closely resemble the real world. To prevent users from experiencing virtual motion sickness while using the eye gaze X-ray vision window, we offer an alternative option, head gaze version (window following head movement), that follows head movement, which enables users to choose the version they are comfortable with. With the help of X-ray vision in the CAMRE MR virtual environment, users can locate and perceive the distance of objects in adjacent rooms without physically moving. A depiction of this feature is shown in Figure 4 (a).
#### 3.2.2 Complete See-through Virtual Environment
We also provided an option for the users to have a complete direct view of the created MR virtual environment when they navigate the surroundings. As users approach the virtual wall objects created by CAMRE's scene understanding feature within a 3-meter radius (euclidean distance between the user's current location and wall object's location), the objects become 30% transparent, which allows users to view information about adjacent rooms before leaving the current one and also helps them perceive the distance between themselves and the edges of the virtual environment. Conversely, when the user moves away from the virtual wall objects beyond three meters, the objects will return to opaque. We ensure that users are aware of significant changes in the virtual environment by alerting them using spatial sound cues from the direction of wall objects they approach and becoming transparent. For instance, if a user moves towards a wall object on the right side, an alert sound will be heard on the right side to indicate the approaching movement. By generating sound cues from changing objects using the HoloLens 2's spatial sound capabilities, users can quickly notice where wall objects are within three meters and have changed. This can help the users acquire spatial information for additional depth cues while obtaining the information behind the obstacles. The wall object's transparent effect is shown in Figure 4 (b).
#### 3.2.3 Real-time Mini-map
We also include a mini-map capability to provide a bird's eye view of the entire CAMRE MR virtual environment to gain an overall understanding. Our assumption here is that the Leader's CAMRE MR virtual environment is observed and built completely beforehand and shared with the Followers. This allows Followers to explore the entire virtual environment independently with the real-time mini-map without following the Leader in real-time. This type of mini-map is a common feature in first-person shooter video games to help players navigate their surroundings. Similarly, including a mini-map feature in the CAMRE framework can allow users to maintain an awareness and understanding of their surrounding environment. Therefore, we create a track-up (this setting is a configurable option, where a north-up mini-map can also be chosen) mini-map that updates in real-time with the user's physical movement (position and rotation) and will follow and display at the bottom right corner of the user's FOV, as shown in Figure 5 (a). To identify users on the mini-map, we create a self-avatar following the user's position in real-time, shown on the mini-map to indicate the current position. Avatar on the Leader side will spawn at the origin point where the Leader starts the application, and avatars on Followers side will also spawn at the Leader's origin point whenever connected to the server. All users can locate the current location of other users to confirm whether the virtual object being shared is within the other user's FOV. Examples of the mini-map avatar and virtual objects displayed on the mini-map are shown in Figure 5 (b).
Our system offers users the ability to control the position of the camera and the field of view of the mini-map in real-time. Through two sliders, users can choose to view a close-up of a specific area to display detailed information or a full view of the entire CAMRE MR virtual environment to gather complete information about other rooms before physically moving to them. Furthermore, by combining different settings of the two sliders, the mini-map can display varying levels of detail to provide further information. Suppose the user chooses a high camera position value and a low FOV value. In this case, the mini-map will display a flatter and complete floor plan
Figure 3: Scene understanding generated virtual environment with network framework sharing between Leader and Followers with avatar indicating user’s location. Different colors are used to indicate different categories of scene objects: Yellow indicates walls, red indicates medium-sized platforms, navy blue indicates ceilings, bright blue indicates floors, grass green indicates large-sized horizontal platforms, and blue-green indicates unclassified scene objects. (a) Bird’s eye view and indoor view of virtual environment adapting to the physical environment created in Leader’s device. (b) Follower’s views of the MR environment with gray color scene objects indicate that the current user is not the creator.
to help users see the top view of the virtual objects located in the surrounding virtual environment and avoid scene objects (such as wall objects) affecting the judgment of the virtual object's position, as shown in Figure 6 (a). Conversely, if the user chooses a low camera position value and a high FOV value, the mini-map will present a higher perspective of the scene objects, allowing the user to see the three-dimensional view of the scene objects more clearly to help users locate and calculate the size of the scene objects, as shown in Figure 6 (b).
By combining the low-latency environmental update attribute of our CAMRE system, the above three expanded features can provide additional information for the users (including adjacent room settings and depth cues) when they physically move in the created virtual environment based on the surrounding physical environment. Typically, users have a compressed depth perception when immersed in a virtual environment; this may be partially due to the lack of certain types of contextual information typical of the physical environment (such as shadows of physical objects). By including the above three expanded features, CAMRE can help users to move smoothly to the desired location apart from providing depth perception to enhance immersion.
## 4 CAMRE System Evaluation Experiment Design
In this section, we designed multiple experiments to evaluate the latency of each feature to provide the capability details of the CAMRE system, including time taken to construct a virtual environment, scene object transfer latency and throughput (average bytes transferred per second), transfer packet loss, latency for X-ray vision, and latency for mini-map. The major factors we used for experiment design are:
* Room size difference.
* Testing different networking frameworks.
* Number of contemporary connections.
* Distance between Leader and Followers.
### CAMRE MR Virtual Environment Evaluation
#### 4.1.1 Virtual Environment Data Scale and Constructing Time
To comprehend the magnitude of data and time required for users to create and explore the virtual environment, we assessed the data size and construction time of Leader's CAMRE MR virtual environment. We conducted this evaluation by incorporating three rooms of varying sizes: a personal room (3.81m X 3.02m X 2.40m, containing approximately 30 scene objects), a living room (7m X 3.92m X 2.97m, containing approximately 90 scene objects), and a large classroom (13m x 9.2m x 3m, containing approximately 130 scene objects). This evaluation aimed to determine if the size of the room has an impact on the data size and construction time. During this experiment, we record the time span from pressing the "update scene" button to spawning the last scene object by directly recording the system timer, and we do the experiment 20 times for each room to account for any variation that might occur.
#### 4.1.2 Virtual Environment Transfer Latency
In CAMRE, we used Unity Netcode for Gameobject with Unity Relay or Photon Unity Networking to transfer the observed virtual environment between multiple devices. Therefore, to compare the pros and cons of the two selected networking frameworks, we measure them by using the following metrics:
* **Leader to Follower Data transfer latency/Standard Deviation**: average time difference between the Leader side creating each scene object and the Follower side receiving the scene object with the standard deviation across all observed time differences.
* **Room size scene (50 scene objects) transfer time**: During the experiment, we capture the average transfer time as the time difference between receiving the first and last scene objects with average bytes transferred from Leader to Follower and the total number of transferred scene objects as a benchmark. To ensure a fair comparison of transfer times among different Leaders while accounting for varying room sizes, we use the following equation to normalize the transfer time to a standard virtual room with 50-scene objects (a standard indoor room size according to other Leaders' created virtual environment in our medium distance scenario): \[TotalTransferTime/TotalSceneObject\_s*50\]
* **Average throughput(bytes per second)**: average bytes received per second on the Follower side during the process of transferring the entire virtual environment.
Figure 4: (a) Example dynamic X-ray vision window view on virtual environment (b) Complete see-through virtual environment to make virtual wall objects transparent to help users see objects inside the room when users move approach within 3 meters.
Figure 5: (a) Mini-map position in user’s FOV and (b) larger display to show user’s location and scene objects captured by CAMRE on mini-map.
Figure 6: (a) high camera position and low FOV result in a flatter floor plan, (b) low camera position and high FOV result in higher perspective
* **Packet loss**: packet loss percentage over the whole virtual environment transfer.
In this experiment, we investigated whether the number of concurrently connected users on the same server and the distance between the Leader and Followers affect transfer efficiency. Therefore, we divide the experiment into three different scenarios (each experiment is conducted 5 times to account for any variation); detailed settings are listed in Table 1:
To capture packet data transferring from Leader to Follower, we set up one Follower using a Unity emulator as the main evaluation target and used Wireshark [31] to capture the data. We do each experiment combination 5 times to share scene objects to account for any variation that might occur.
### CAMRE Expanded Features Evaluation
In addition to evaluating the base CAMRE MR virtual environment, we also assess the expanded features for real-time expression. The complete see-through virtual environment function is primarily designed to adjust the transparency of virtual wall objects when the user approaches the wall within 3 meters. However, due to the short latency of the transparency adjustment and the natural slight movement of the user's head while walking and wearing the HoloLens 2, it becomes challenging to precisely measure the distance accuracy in millimeter-scale between the user and the wall object from an external perspective. Therefore, in this subsection, we will only evaluate dynamic X-ray vision and mini-map features.
#### 4.2.1 Dynamic X-ray Vision Window Display Latency
We conducted an evaluation to determine if the dynamic X-ray vision window has low latency, providing users with a real-time experience. To calculate the display latency, we recorded the timestamp of when the X-ray vision enabling switch was pressed and when the X-ray vision window was displayed, measuring the time gap between them. This experiment is repeated 100 times to account for any variation that might occur.
#### 4.2.2 Dynamic X-ray Vision Window Moving Latency
To evaluate whether the dynamic X-ray vision window consistently follows the user's eye-gaze direction when physically moving in the CAMRE MR virtual environment, we evaluate the moving latency of the dynamic X-ray vision window by calculating the time difference between the user's eye-gaze movement captured by HoloLens 2 (direction can be digitized using eye-gaze ray intersections with wall objects) and the time X-ray vision window position updated to the same position by recording the timestamp. This experiment is also repeated 100 times to account for any variation that might occur.
#### 4.2.3 Mini-map Moving Latency
Consistently following the user's movement is essential for the Mini-map to help determine their current location in the virtual environment and related locations from other collaborators since Followers can move independently without following the Leader in real-time. Therefore, we conducted an evaluation to determine if the mini-map accurately tracks the user's physical movements. Specifically, we measured the mini-map moving latency by recording the timestamp to calculate the time difference between when the user rotated the display by 180 degrees (ensuring that there is a noticeable angle difference between the direction displayed by the mini-map and the user's current facing direction) and when the mini-map rotation caught up to the same rotation. This experiment is also repeated 100 times to account for any variation that might occur.
## 5 CAMRE Evaluation Results and Discussion
With the evaluation experiments proposed in the previous section, we collected various experimental results to further analyze and discuss in detail to the performance of CAMRE.
### CAMRE MR Virtual Environment Data Size and Construction Time
In this experiment, we built three different-sized rooms into 30, 90, and 130 scene objects to explore virtual environment construction time. By saving three different scene objects into a bytes file, their respective data sizes are in order: 0.18, 0.33, 0.75 megabytes. According to the experiment results in Table 2, the personal room took an average of 0.96 seconds with a standard deviation of 0.13 to fully build the virtual environment, the living room took an average of 2.53 seconds with a standard deviation of 0.28, and large classroom took average 3.69 seconds with a standard deviation of 0.10, which shows a low construction time attribute to build a complete 3D indoor environment comparing to [11] with 12 seconds and [41] with a 60 seconds indoor scene to 3D mesh reconstruction computation time. Low data size and construction time can further benefit CAMRE updating and streaming virtual information between Leader and Followers when Leader physically moves in the surrounding environment.
### CAMRE MR Virtual Environment Transfer Latency
To evaluate and prove the low transfer latency sharing virtual environments with two different networking frameworks, we conducted three experiment scenarios with different distances and different numbers of concurrent users between Leaders and Followers. We also consider internet bandwidth as a potential factor affecting transfer latency; therefore, we asked all Leaders and Followers to provide their internet bandwidth showing in [https://fast.com/](https://fast.com/). Before starting the experiments, we calibrated all connected machines with Network Time Protocol (NTP) to confirm the accuracy of the latency with milliseconds preciseness. Before the primary observing Follower connects to the server, we launched Wireshark to capture internet packets and stop capturing after receiving all the scene objects transferred from the Leader. Since HoloLens 2 does not
\begin{table}
\begin{tabular}{|l|l|} \hline
**Short Distance (SD)** & (Distances below are straight-line distances) \\ \hline
5D1: & 1 leader (HoloLens 2) \\ & 1 Follower (Unity Emulator), same location \\ & 1 Follower (HoloLens 2) \\ & 2 Followers (Unity Emulator), same location \\ \hline
**Long Distance (LD)** & \\ LD1: & 1 Leader (HoloLens 2) \\
1(12400 km = 7705 miles) & 1 Follower (Unity Emulator), 12400 km distance \\ \hline LD2: & 1 Leader (HoloLens 2) \\ & 1 Follower (HoloLens 2) \\ & 1 Follower (Unity Emulator), same location \\
1(12400 km = 7705 miles) & 1 Follower (Unity Emulator), 12400 km distance \\ \hline
**Medium Distance (MD)** & \\ MD1: & 1 Leader (HoloLens 2) \\
85 km = 53 miles & 1 Follower (Unity Emulator), 85 km distance \\ \hline MD2: & 1 Leader (HoloLens 2) \\
1(1230 km = 696 miles) & 1 Follower (Unity Emulator), 1120 km distance \\ \hline MD3: & 1 Leader (HoloLens 2) \\
1(1780 km = 1106 miles) & 1 Follower (Unity Emulator), 1780 km distance \\ \hline MD4: & 1 Leader (HoloLens 2) \\
1(1780 km = 1106 miles) & 1 Follower (HoloLens 2), 1780 km distance \\
1(1150 km = 715 miles) & 1 Follower (HoloLens 2), 1150 km distance \\
1(85 km = 53 miles) & 1 Follower (Unity Emulator), 85 km distance \\ \hline \end{tabular}
\end{table}
Table 1: Experiment Scenarios
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline & Personal & Living & Large & [11] & [41] \\ & Room & Room & Classroom & \\ \hline Time (s) & 0.96s & 2.53s & 3.69s & 12s & 60s \\ \hline \end{tabular}
\end{table}
Table 2: CAMRE MR Virtual Environment Construction Time
provide software for users to capture internet packet data, we can only capture and analyze packet data transferred from networking servers in the Unity emulator Follower side. According to the packet data, both Photon networking and Netcode for Gameobject use User Datagram Protocol (UDP) to transfer data, and there is no packet loss in every scenario discussed in the following.
#### 5.2.1 Scenario 1: Short Distance (SD)
The Leader and Followers in this scenario all physically stay in the same location with an internet speed of 240 Mb per second. The data shown in Table III indicates that there is no significant difference in the three evaluation metrics when connecting with one Leader and one Follower (SD1) and one Leader and three Followers (SD2) under the two networking frameworks, which reflects the low-latency stability of the system even when connecting with multiple users. During our testing, we discovered that when transferring a room-sized scene, Netcode for Gameobject took longer than Photon networking, but had a higher throughput. Our analysis of the captured packet data revealed that Netcode encrypts the transferred data, which could lead to larger data size, while Photon transfers data directly.
#### 5.2.2 Scenario 2: Long Distance (LD)
In this scenario, only the primary observing Follower is located at another place with about 12,400 km distance with 530 Mbps internet speed, Leader and the other two Followers are located at the same location with 240 Mbps internet speed. Based on Table III, connecting a Leader with one Follower (LD1) and three Followers (LD2) shows no significant difference in the data transfer latency category, still exhibiting a stable attribute for each scene object. However, transferring a room-sized scene with the LD2 scenario takes a little longer to complete the process, which means that if there is only one user located farther away from the location where other users are located, the scene transfer time will be affected. Furthermore, the average throughput in LD1 and LD2 scenarios while using Photon networking displays no significant difference when compared to the short-distance scenario. However, Netcode for Gameobject exhibits a higher throughput, suggesting that internet speed might affect the throughput of Netcode, but does not provide a significant difference for the Photon networking framework.
#### 5.2.3 Scenario 3: Medium Distance (MD)
In this scenario, Leader and all other Followers are located at different locations. MD1, MD2, and MD3 have different Leaders with internet speeds in order: 90, 250, and 90 Mbps, and the Follower is the same with 240 Mbps internet speed. MD4, MD5, and MD6 have the Leader with 90 Mbps internet speed and other Followers with 240, 250, and 90 Mbps in order. The experiment results shown in Table IV indicate that data transfer latency is slightly higher in MD4, MD5, and MD6 scenarios compared to MD1, MD2, and MD3 scenarios respectively. Similarly, we observe a similar pattern when transferring a room-sized scene over the Netcode server, suggesting that connecting from multiple locations may impact the transfer performance. Due to the use of HoloLens 2 devices by the other two Followers to connect with Leader, we were unable to capture internet packets for MD5 and MD6 scenarios; therefore, we marked N/A to MD5 and MD6 in the average throughput section of Table IV. After analyzing the average throughput results, we observed that MD4 has a higher throughput compared to MD1, MD2, and MD3 scenarios, which could indicate that Photon and Netcode servers require higher throughput to efficiently transfer data with multiple users across different locations.
Based on the above experiment results, all of the Leader to Follower data transfer latencies are sub-0.15 seconds, and room size scene transfer time is lower than 1.6 seconds in both networking frameworks, which indicates that the CAMRE system can transfer 3D virtual environments with low latency by utilizing small data size to increase collaboration consistency between Leader and Followers.
#### 5.2.4 Comparison with Existing Systems
We also provide comparisons with other existing collaboration systems; however, only one AR collaboration system, PLANWELL [21], measures data transfer latency. Therefore, we found multiple state-of-the-art VR multiuser platforms, including VRChat [37], AltspaceVR [18], Rec Room [27], Mozilla Hubs [19], and Horizon Worlds [15], that transfer data to multiple users, which are measured in [4] as client-to-server and back-to-client round-trip time that is similar to a Leader transferring packets to a server and then to a Follower. The comparison table (Table V) shows that PLANWELL has a data transfer latency of 2.4 seconds, which is higher than the latency of CAMRE in any scenario. All other VR systems have better data transfer latency than the CAMRE system. However, it is important to note that CAMRE is an MR collaboration system that requires capturing information from the physical environment, which could increase system load. The fact that CAMRE has similar data transfer latency as two other VR systems indicates that its low data
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **MD1** & **MD2** & **MD3** & **MD4** & **MD5** & **MD6** \\ \hline
**Photon** & 0.05/0.04 & 0.09/0.03 & 0.07/0.04 & 0.07/0.02 & 0.09/0.02 & 0.12/0.09 \\ \hline
**Netcode** & 0.04/0.02 & 0.03/0.01 & 0.06/0.02 & 0.09/0.01 & 0.12/0.01 & 0.12/0.02 \\ \hline \end{tabular}
\end{table}
Table IV: Medium Distance (MD) Scenario
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **SD1** & **SD2** & **LD1** & **LD2** \\ \hline
**Photon** & 0.12/0.02 & 0.12/0.04 & 0.12/0.02 & 0.05/0.02 \\ \hline
**Netcode** & 0.14/0.01 & 0.14/0.01 & 0.07/0.01 & 0.06/0.01 \\ \hline \multicolumn{5}{|c|}{**Room size scene [50 scene objects] transfer time [s]**} \\ \hline & **SD1** & **SD2** & **LD1** & **LD2** \\ \hline
**Photon** & 1.33 & 1.21 & 1.02 & 1.30 \\ \hline
**Netcode** & 1.42 & 1.50 & 1.31 & 1.56 \\ \hline \multicolumn{5}{|c|}{**Average throughput (bytes per second)**} \\ \hline
**Photon** & 4.8k & 4.3k & 3.6k & 4.2k \\ \hline
**Netcode** & 9.1k & 7.9k & 13.4k & 14.4k \\ \hline \end{tabular}
\end{table}
Table III: Short Distance (SD) and Long Distance (LD) Scenario
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & **SD1** & **SD2** & **LD1** & **LD2** \\ \hline
**Photon** & 0.12/0.02 & 0.12/0.04 & 0.12/0.02 & 0.05/0.02 \\ \hline
**Netcode** & 0.14/0.01 & 0.14/0.01 & 0.07/0.01 & 0.06/0.01 \\ \hline \multicolumn{5}{|c|}{**Room size scene [50 scene objects] transfer time [s]**} \\ \hline & **SD1** & **SD2** & **LD1** & **LD2** \\ \hline
**Photon** & 1.33 & 1.21 & 1.02 & 1.30 \\ \hline
**Netcode** & 1.42 & 1.50 & 1.31 & 1.56 \\ \hline \multicolumn{5}{|c|}{**Average throughput (bytes per second)**} \\ \hline & **SD1** & **SD2** & **LD1** & **LD2** \\ \hline
**Photon** & 4.8k & 4.3k & 3.6k & 4.2k \\ \hline
**Netcode** & 9.1k & 7.9k & 13.4k & 14.4k \\ \hline \end{tabular}
\end{table}
Table III: Short Distance (SD) and Long Distance (LD) Scenario
size design has a significant impact. In addition, [4] also measured the average throughput of the five state-of-the-art VR collaboration systems; therefore, we have compared the average throughput of our CAMRE system with other VR collaboration systems. Even though the two network frameworks we have employed have lesser throughput than the state-of-the-art VR systems, the CAMRE system archives low data transfer latency, as the server resources we possess are not as high as those used by large companies.
### Dynamic X-ray Vision Window Display Latency
During the experiment process, we conducted the switch-enabling procedure 100 times. The results are displayed in Figure 7. The average display time was 6.81 milliseconds, with a standard deviation of 2.63 milliseconds. Our findings indicate that the latency period for the X-ray vision window to appear on the display after the switch is pressed is consistently small enough to be considered a real-time feature, which improves usability and reduces the likelihood of virtual motion sickness [29].
### Dynamic X-ray Vision Window Moving Latency
Similarly, we conducted the eye-gaze movement (move 45 degrees from left to right) process 100 times, and the results are shown in Figure 8. The average displaying time was 6.57 milliseconds, with a standard deviation of 2.92 milliseconds. Results show a consistently low latency that can be considered a real-time feature that instantly provides information inside adjacent rooms when users move their eye-gaze direction. By utilizing the physical window-like X-ray vision that follows the user's eye-gaze direction in real-time, the X-ray vision window is capable of providing motion parallax, a feature that is normally available in a real environment but is unavailable in the virtual environment.
### Mini-map Moving Latency
We evaluate this feature by performing the rotation task 100 times, and the results are shown in Figure 9. The average displaying latency was 5.99 milliseconds, with a standard deviation of 2.42 milliseconds. Results indicate that this feature has a consistent latency of under 10 milliseconds, which could show the real-time attribute of this feature. Having a mini-map displays the user's surrounding virtual environment in real-time, and with the camera options (Figure 6) that allow the user to gain a broader view of the environment, users can quickly locate the location of the target virtual objects.
## 6 Conclusion and Future Work
In the CAMRE framework, we demonstrate dynamically generated MR virtual environments with low latency and small data sizes in the HoloLens 2 device by utilizing the scene understanding feature. With the small data sizes of the virtual environment, we employ a Leader-Follower paradigm, representing the Leader's surrounding physical environment and transferring these environments through networking frameworks to the Followers in close-to-real-time. This permits remote connections and collaboration, including real-time expanded features to assist users with navigation. As part of our research, we evaluated the performance of the CAMRE framework, which showed that users can construct virtual environments in a short amount of time. Our tests revealed that it takes around 2.5 seconds to build a living room using the framework. We also evaluated two networking frameworks for sharing a typical room size scene and found that their latencies were below 1.6 seconds in all evaluated scenarios. We then assessed the X-ray vision and mini-map display and found that their update latencies were all below 12 milliseconds, which suggests that these features can be used in real-time to help users navigate through the virtual environment.
In the future, to address the limitations described in Section 1.6, we plan to design and conduct an exhaustive set of behavioral studies to understand how users perceive CAMRE as a means of collaboration as well as the efficacy of the MR navigation features. Furthermore, we would also like to investigate the networking performance of CAMRE with more than four concurrent users. Currently, we do not allow the roles of Leader and Followers to be swapped in real-time, but we plan to investigate such a role swapping in the future. Lastly, we aim to enhance the collaborative capabilities of the CAMRE system by implementing real-time frame analysis for the creation of virtual object mesh and dynamic color adaptation of virtual objects to their surrounding environment. At present, the system generates a virtual environment with basic geometric scene objects. However, by fitting primitive geometry [12, 1] to these objects, we could potentially create more detailed virtual objects without overburdening computational resources. These new features may further increase the effectiveness of the CAMRE framework.
## Acknowledgments
This research was sponsored by the DEVCOM U.S. Army Research Laboratory under Cooperative Agreement Number W911NF-21-2-0145 to B.P.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the DEVCOM Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation.
Figure 8: Dynamic X-ray vision window moving latency in milliseconds that is repeated 100 times to account for any variation that might occur. (Average: 6.57 milliseconds with standard deviation: 2.92)
Figure 7: Dynamic X-ray vision window display latency in milliseconds that is repeated 100 times to account for any variation that might occur. (Average: 6.81 milliseconds with standard deviation: 2.63)
Figure 9: Mini-map moving latency in milliseconds that is repeated 100 times to account for any variation that might occur. (Average: 5.99 milliseconds with standard deviation: 2.42) |
2306.08692 | The generalized hyperbolic family and automatic model selection through
the multiple-choice LASSO | We revisit the generalized hyperbolic (GH) distribution and its nested
models. These include widely used parametric choices like the multivariate
normal, skew-t, Laplace, and several others. We also introduce the
multiple-choice LASSO, a novel penalized method for choosing among alternative
constraints on the same parameter. A hierarchical multiple-choice LASSO
penalized likelihood is optimized to perform simultaneous model selection and
inference within the GH family. We illustrate our approach through a simulation
study. The methodology proposed in this paper has been implemented in R
functions which are available as supplementary material. | Luca Bagnato, Alessio Farcomeni, Antonio Punzo | 2023-06-14T18:27:20Z | http://arxiv.org/abs/2306.08692v2 | # The generalized hyperbolic family and automatic model selection through the multiple-choice LASSO
###### Abstract
We revisit the generalized hyperbolic (GH) distribution and its nested models. These include widely used parametric choices like the multivariate normal, skew-\(t\), Laplace, and several others. We also introduce the multiple-choice LASSO, a novel penalised method for choosing among alternative constraints on the same parameter. A hierarchical multiple-choice LASSO penalised likelihood is optimised to perform simultaneous model selection and inference within the GH family. We illustrate our approach through a simulation study. The methodology proposed in this paper has been implemented in R functions which are available as supplementary material.
_Keywords: Hyperbolic family, kurtosis, penalised likelihood, skewness._
Introduction
As stated by Cox (1990), "choice of an appropriate family of distributions may be the most challenging phase of analysis". Researchers always face a trade-off between goodness of fit and simplicity of the distributional assumptions. A particularly convenient family is provided by the generalized hyperbolic (GH) distribution (e.g., McNeil et al., 2005). It has flexible tails, spanning from Gaussian to exponential tails. Applications of the GH family are widespread (e.g., Eberlein and Keller, 1995; McNeil et al., 2005), and more importantly, the family contains as special cases several widely used parametric distributions. A contribution of this work indeed is that we outline a precise taxonomy of the GH family and its many nested models. The main novelty with respect to previous works is that we do not compare the GH and alternatives by separately fitting each model, but we specify a unified penalised likelihood framework that successfully performs simultaneous parameter estimation and model choice.
To proceed in this direction, we introduce the multiple-choice LASSO, a new type of LASSO penalty. Indeed, LASSO-type penalties (Tibshirani, 1996) are commonly used to shrink parameters to a single specific value (typically, zero). Nested models within the GH family are selected by fixing certain shape parameters at one of the different alternative values. The multiple-choice LASSO is devised precisely for this purpose: to allow shrinkage of the same parameter towards one of several alternative values. To restrict the possible choices, we will also build on the hierarchical LASSO (as introduced by Bien et al., 2013, see also Lim and Hastie, 2015) so that certain constraints can be activated only conditionally.
The rest of the paper is as follows: in the next section, we review the GH distribution and provide a map of its nested models. After reviewing LASSO and hierarchical LASSO we then introduce the multiple-choice LASSO. In Section 3 we use the hierarchical and multiple-choice LASSO to define penalised objective functions that can yield any model within the GH family, and describe how to optimise those in Section 4. In Section 5 we illustrate through a brief simulation study. Some concluding remarks are given in Section 6.
The methodology proposed in this paper has been implemented in R (R Core Team, 2020) functions which are available as supplementary material.
## 2 Setup
### The generalised hyperbolic distribution and its special cases
The joint probability density function of a \(d\)-variate random variable \(\mathbf{X}\) following the generalised hyperbolic (GH) distribution can be written as
\[f\left(\mathbf{x};\mathbf{\theta}\right)= \frac{\exp\left[(\mathbf{x}-\mathbf{\mu})^{\prime}\mathbf{\Sigma}^{-1}\mathbf{ \gamma}\right]}{(2\pi)^{\frac{d}{2}}|\mathbf{\Sigma}|^{\frac{1}{2}}K_{\lambda} \left(\sqrt{\chi\psi}\right)}\left[\frac{\chi+\delta(\mathbf{x};\mathbf{\mu},\mathbf{ \Sigma})}{\psi+\rho(\mathbf{\gamma},\mathbf{\Sigma})}\right]^{\frac{\lambda-\frac{d}{2} }{2}}K_{\lambda-\frac{d}{2}}\left(\sqrt{\left[\chi+\delta(\mathbf{x};\mathbf{\mu},\mathbf{ \Sigma})\right]\left[\psi+\rho(\mathbf{\gamma},\mathbf{\Sigma})\right]}\right), \tag{1}\]
where \(\mathbf{\mu}\in I\!\!R^{d}\) is the location parameter, \(\mathbf{\Sigma}\) is a \(d\times d\) scale matrix, such that \(|\mathbf{\Sigma}|=1\) for identifiability purposes (see McNeil et al., 2005, for details), \(\mathbf{\gamma}\in I\!\!R^{d}\) is the skewness parameter, \(\lambda\in I\!\!R\) is the index parameter, and \(\chi,\psi>0\) are concentration parameters; compactly, we adopt the notation \(\mathbf{X}\sim\mathcal{GH}_{d}\left(\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma},\lambda, \chi,\psi\right)\). In (1), \(\mathbf{\theta}=\left\{\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma},\lambda,\chi,\psi\right\}\) contains all the parameters of the model, \(\delta(\mathbf{x};\mathbf{\mu},\mathbf{\Sigma})=(\mathbf{x}-\mathbf{\mu})^{\prime}\mathbf{\Sigma}^{-1} \left(\mathbf{x}-\mathbf{\mu}\right)\) is the squared Mahalanobis distance between \(\mathbf{x}\) and \(\mathbf{\mu}\) (with covariance matrix \(\mathbf{\Sigma}\)), \(\rho(\mathbf{\gamma},\mathbf{\Sigma})=\mathbf{\gamma}^{\prime}\mathbf{\Sigma}^{-1}\mathbf{\gamma}\), and \(K_{\lambda}\) is the modified Bessel function of the third kind with index \(\lambda\).
It is of practical importance to note that \(\mathbf{X}\sim\mathcal{GH}_{d}\left(\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma},\lambda, \chi,\psi\right)\) has the normal mean-variance mixture (NMVM) representation
\[\mathbf{X}=\mathbf{\mu}+W\mathbf{\gamma}+\sqrt{W}\mathbf{U}, \tag{2}\]
where \(W\) has a generalised inverse Gaussian (GIG) distribution, in symbols \(W\sim\mathcal{GIG}\left(\lambda,\chi,\psi\right)\) (see Appendix A), and \(\mathbf{U}\sim\mathcal{N}_{d}\left(\mathbf{0},\mathbf{\Sigma}\right)\), where \(\mathcal{N}_{d}\left(\mathbf{\mu},\mathbf{\Sigma}\right)\) denotes a \(d\)-variate normal distribution with mean \(\mathbf{\mu}\) and covariance matrix \(\mathbf{\Sigma}\). As a related alternative, we can refer to the following hierarchical representation of \(\mathbf{X}\sim\mathcal{GH}_{d}\left(\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma},\lambda, \chi,\psi\right)\) as
\[W \sim\mathcal{GIG}\left(\lambda,\chi,\psi\right)\] \[\mathbf{X}|W=w \sim\mathcal{N}_{d}\left(\mathbf{\mu}+w\mathbf{\gamma},w\mathbf{\Sigma} \right), \tag{3}\]
where \(w\) is a realization of \(W\). The hierarchical representation in (3) is useful for random data generation and for the implementation of the ECME algorithm discussed in Section 4.
Figure 1 gives a hierarchical representation of all the existing models the GH distribution nests as special or limiting cases by varying the values/ranges of \(\boldsymbol{\gamma}\), \(\lambda\), \(\chi\), and \(\psi\). Such a hierarchy is easily derived by using the representation of the GH distribution given in (2). Appendix B illustrates how to obtain some of these special and limiting cases, those we believe are more difficult to be derived and about which there is more confusion in the literature due to the use of different identifiability constraints. On the left/right of Figure 1 we have the models related to negative/positive values of \(\lambda\). Instead, on the bottom (below the dashed line) we have the symmetric models (those with \(\gamma=0\)); as we can see, the symmetric counterpart of each model on the top is available. The diagram in Figure 1 can be considered as a contribution of this paper. It provides, for the first time to our knowledge, a complete and organised taxonomy of all the models nested within the GH family.
Summarising we have: 2 possibilities for \(\boldsymbol{\gamma}\) (\(\boldsymbol{\gamma}\) free or \(\boldsymbol{\gamma}=\mathbf{0}\)), 6 possibilities for \(\lambda\) (\(\lambda\rightarrow-\infty\), \(\lambda<0\), \(\lambda=-1/2\), \(\lambda=\left(d+1\right)/2\), \(\lambda=1\) or \(\lambda>0\)), 3 possibilities for \(\chi\) (\(\chi\) free, \(\chi\to 0\) or \(\chi\rightarrow\infty\)), and 2 possibilities for \(\psi\) (\(\psi\) free and \(\psi\to 0\)). Combining all these possibilities would generate \(2\cdot 6\cdot 3\cdot 2=72\) models. However, many of them are not of practical interest. Just as two examples, the combination \(\left\{\boldsymbol{\gamma}=\mathbf{0},\lambda<0,\chi\to 0,\psi\to 0\right\}\) would generate a degenerate \(t\) distribution on \(\boldsymbol{\mu}\), while the combination \(\left\{\boldsymbol{\gamma}=\mathbf{0},\lambda=1,\chi\to 0,\psi\to 0\right\}\) would generate a degenerate Laplace distribution on \(\boldsymbol{\mu}\).
### Preliminaries about LASSO and hierarchical LASSO
Suppose to be interested to a particular configuration/value of \(\boldsymbol{\theta}\), say \(\boldsymbol{\theta}_{0}\). The LASSO (Least Absolute Shrinkage and Selection Operator) involves specification of an \(L_{1}\) penalty for (possibly, a subset of) the parameter vector \(\boldsymbol{\theta}\), so that the estimate \(\hat{\boldsymbol{\theta}}\) is exactly equal to \(\boldsymbol{\theta}_{0}\) if the likelihood at \(\boldsymbol{\theta}_{0}\) is not too far from the maximum. More formally, given a random sample \(S_{n}=\left\{\boldsymbol{x}_{i};i=1,\ldots,n\right\}\) (observed data) from \(\boldsymbol{X}\sim\mathcal{GH}_{d}\left(\boldsymbol{\mu},\boldsymbol{\Sigma},\boldsymbol{\gamma},\lambda,\chi,\psi\right)\), estimation proceeds through optimisation of the penalised log-likelihood
\[\sum_{i=1}^{n}\log\left[f\left(\boldsymbol{x}_{i};\boldsymbol{\theta}\right) \right]-P_{h}\left(\boldsymbol{\theta}\right) \tag{4}\]
for an appropriate penalty function \(P_{h}\left(\mathbf{\theta}\right)\), with \(f\left(\cdot;\mathbf{\theta}\right)\) being defined in (1). In classical LASSO, \(P_{h}\left(\mathbf{\theta}\right)=h||\mathbf{\theta}-\mathbf{\theta}_{0}||_{L_{1}}\), where \(||\cdot||_{L_{1}}\) indicates the \(L_{1}\)-norm (the sum of absolute values) and \(h>0\) is a fixed penalty parameter. In linear models, often times \(\mathbf{\theta}_{0}=\mathbf{0}\).
The resulting estimator is less efficient than the MLE, but superefficient at \(\mathbf{\theta}_{0}\) (see, e.g., Wu and Zhou, 2019 and references therein). It is well known that any superefficient estimator may improve efficient estimators at most on a subset of the parameter space of zero Lebesgue measure.
In our work we will also make use of the hierarchical LASSO (Bien et al., 2013), which is devised for structured sparsity: some constraints can be activated only if others are simultaneously
active. Without loss of generality assume we allow \(\theta_{c}=0\) only if \(\theta_{d}=0\), with \(\theta_{c}\) and \(\theta_{d}\) being two elements of \(\mathbf{\theta}\). This can be obtained expressing
\[P_{h}\left(\mathbf{\theta}\right)=h\left[|\theta_{d}|+\frac{\max(|\theta_{c}|,| \theta_{d}|)}{2}\right].\]
In words, some shrinkage for \(\theta_{c}\) is allowed if \(|\theta_{c}|>|\theta_{d}|\), but the constraint on \(|\theta_{c}|\) can be exactly activated only as soon as \(\theta_{d}=0\); see Bien et al. (2013) on this point.
### The multiple-choice LASSO
We introduce in this section the multiple-choice LASSO, which can be used to enforce one of several constraints on the same parameter. For simplicity assume we have a one-dimensional parameter \(\theta\) and several possible constraints on it, i.e., we require superefficiency not only at a single point \(\theta_{0}\) in the parameter space, but at a finite collection of points \(\{\theta_{1},\ldots,\theta_{C}\}\). Our proposal is to specify
\[P_{h}\left(\theta\right)=h\min\left(|\theta-\theta_{1}|,|\theta-\theta_{2}|, \ldots,|\theta-\theta_{C}|\right). \tag{5}\]
In words, only the smallest among all possible \(L_{1}\) norms contribute to the penalty. The idea is that if the MLE is close enough to \(\theta_{j}\) for some \(j=1,\ldots,C\), then \(\hat{\theta}=\theta_{j}\) as the remaining \(L_{1}\) norms are simply ignored due to the minimum operator.
For illustration, in Figure 2(a)-2(b) we show the penalty function for LASSO and multiple-choice LASSO, respectively, for a one-dimensional problem with \(h=0.5\) in both cases. For the LASSO we set \(\theta_{0}=0\), while for multiple-choice LASSO we set \(\theta_{0}\in\{-3,-2,-1,0,1,2,3\}\). The sawtooth shape of the penalty function for the multiple-choice LASSO is what allows objective functions to be optimised exactly at \(\theta_{j}\), \(j=1,\ldots,C\).
The resulting penalised objective function is clearly non-convex. While in some cases specific algorithms might be exploited to optimise it, since the parameter space is low dimensional in our context, we propose to simply use a numerical method like the Constrained Optimisation BY Linear Approximation (COBYLA) algorithm (Powell, 1994).
## 3 Shape detection through penalised likelihood maximization
As discussed at the end of Section 2.1, all possible combinations of the discussed constraints on the parameters \(\boldsymbol{\gamma}\), \(\lambda\), \(\chi\), and \(\psi\) would lead to 72 parametric distributions, nested within the GH distribution. Of these, only 16 have a clear interpretation as outlined in Section 2.1 and Figure 1.
In the following, we show how to specify a multiple-choice LASSO-type penalised likelihood function which can possibly lead to any of the 72 models nested in the GH distribution. We then specify a multiple-choice hierarchical LASSO-type penalised likelihood which restricts the possible solutions only to the sixteen models in Figure 1.
The penalised likelihood specification is as in (4). A simple way to proceed is to specify
Figure 2: The penalty function for LASSO (left panel), with \(\theta_{0}=0\); and the penalty function for multiple-choice LASSO (right panel), with \(\theta_{0}\in\{-3,-2,-1,0,1,2,3\}\).
\(P_{h}\left(\mathbf{\gamma},\lambda,\chi,\psi\right)\) as a multiple-choice LASSO penalty of the kind
\[P_{h}\left(\mathbf{\gamma},\lambda,\chi,\psi\right)=h\left\{\min\left[\left|\lambda- \frac{d+1}{2}\right|,\left|\lambda+\frac{1}{2}\right|,\left|\lambda-1\right|,I( \lambda<0)\left|\frac{1}{\lambda}\right|\right]+\min\left(\left|\chi\right|, \left|\frac{1}{\chi}\right|\right)+\left|\psi\right|+\left\|\mathbf{\gamma}\right\| _{L_{2}}\right\}. \tag{6}\]
We use here a penalty on \(\left\|\mathbf{\gamma}\right\|_{L_{2}}\) to constrain all \(d\) elements of \(\mathbf{\gamma}\) to be zero, in the spirit of group LASSO (see, e.g., Yuan and Lin, 2006 and Lim and Hastie, 2015). In case \(\lambda\rightarrow-\infty\) and \(\chi\rightarrow\infty\), define \(c=-\chi/2\lambda\) as scale parameter of the resulting Gaussian distribution. Note that the constraint \(\left|1/\lambda\right|\) is satisfied by \(\lambda\rightarrow\pm\infty\).
Penalty (6) will allow the user to select any of the 72 possible parametric distributions obtained through appropriate constraints. Many of these models might fit well, but do not have a direct interpretation. In order to restrict the list of possible models to the sixteen ones listed in Figure 1 we must exclude several possible combinations of constraints on the parameters. To this end, we combine the hierarchical LASSO and the multiple-choice LASSO frameworks and specify the penalty as
\[P_{h}\left(\mathbf{\gamma},\lambda,\chi,\psi\right)= h\left\{\frac{\left\|\mathbf{\gamma}\right\|_{L_{2}}}{\sqrt{d}}+I\left( \lambda\leq 0\right)\min\left[\left|\lambda+\frac{1}{2}\right|+\frac{1}{2} \max\left(\left|\lambda+\frac{1}{2}\right|,\left|\psi\right|\right),\right.\right. \tag{7}\] \[\left.\left.\left|\psi\right|+\frac{1}{2}\max\left(\left|\lambda+ \frac{1}{2}\right|,\left|\psi\right|\right),\frac{1}{4}\max\left(\frac{\left\| \mathbf{\gamma}\right\|_{L_{2}}}{\sqrt{d}},\left|\frac{1}{\lambda}\right|,\left| \psi\right|,\left|\frac{1}{\chi}\right|\right)\right]+\] \[\left.+I(\lambda>0)\min\left[\left|\lambda-\frac{d+1}{2}\right|, \left|\chi\right|+\frac{1}{2}\max(\left|\lambda-1\right|,\left|\chi\right|), \frac{1}{2}\max\left(\frac{\left\|\mathbf{\gamma}\right\|_{L_{2}}}{\sqrt{d}}, \left|\lambda-1\right|\right)\right]\right\},\]
where \(I\left(A\right)\) denotes the indicator function of \(A\subseteq I\!\!R\) and \(h>0\) is a penalty parameter. In the expression above we divide by \(\sqrt{d}\) to normalize the \(L_{2}\) norm with respect to the number of elements of the vector involved.
To fix the ideas we discuss how the GH and Gaussian models are obtained. If the MLE is far from any of the special cases in Figure 1 and the penalty parameter is not too large, no constraint will be activated and the resulting model will be a GH. Suppose now the MLE is close enough to the case \(\mathbf{\gamma}=\mathbf{0}\), with sufficiently small \(\lambda\), large \(\chi\), and \(\psi\) close to zero. The low \(||\mathbf{\gamma}||_{L_{2}}\) will make it advantageous to activate the constraint leading to symmetric models. The negative \(\lambda\) will remove the third addend of the penalty, which is multiplied by \(I(\lambda>0)\). For the second addend, the minimum among the three elements listed will be the third, as \(\lambda\) at the MLE will
definitely be much smaller than \(.5\). Hence the penalty will essentially reduce to
\[\frac{h}{4}\max\left(\frac{\left\|\boldsymbol{\gamma}\right\|_{L_{2}}}{\sqrt{d}}, \left|\frac{1}{\lambda}\right|,\left|\psi\right|,\left|\frac{1}{\chi}\right| \right),\]
and the max operator will lead all the constraints to activate (\(\lambda\rightarrow-\infty\), \(\psi\to 0\), \(\chi\rightarrow\infty\), \(||\boldsymbol{\gamma}||_{L_{2}}\rightarrow\boldsymbol{0}\)), leading to the Gaussian model.
## 4 Penalised maximum likelihood estimation
We consider a penalised maximum likelihood (ML) approach, with the penalty term given in (6) or (7), to estimate \(\boldsymbol{\theta}\) in model (1). Given both the random sample \(S_{n}\) and a value for \(h\), the penalised ML estimation method is based on the maximization of the penalised (observed-data) log-likelihood function
\[\ell_{\text{pen}}\left(\boldsymbol{\theta}|h\right)=\sum_{i=1}^{n}\ln f\left( \boldsymbol{x}_{i};\boldsymbol{\theta}\right)-P_{h}\left(\boldsymbol{\gamma},\lambda,\chi,\psi\right). \tag{8}\]
However, the problem of directly maximising \(\ell_{\text{pen}}\left(\boldsymbol{\theta}|h\right)\) over \(\boldsymbol{\theta}\) is not particularly easy. The penalised ML fitting is simplified considerably by the application of algorithms based on the expectation-maximization (EM) principle (Dempster et al., 1977). These algorithms are the classical way to compute ML estimates for parameters of distributions which are defined as a mixture.
Regardless of the particular variant of the EM algorithm used, it is convenient to view the observed data as incomplete. The complete-data are \(\left\{\left(\boldsymbol{x}_{i},w_{i}\right);i=1,\ldots,n\right\}\), where the missing variables \(w_{1},\ldots,w_{n}\) are defined - based on the hierarchical representation given in (3) - so that
\[\boldsymbol{X}_{i}|W_{i}=w_{i}\sim\mathcal{N}_{d}\left(\boldsymbol{\mu}+w_{i} \boldsymbol{\gamma},w_{i}\boldsymbol{\Sigma}\right),\]
independently for \(i\in\left\{1,\ldots,n\right\}\), and
\[W_{i}\sim\mathcal{GIG}\left(\lambda,\chi,\psi\right).\]
Because of this conditional structure, the penalised complete-data log-likelihood function can be written as
\[\ell_{\text{pen},c}\left(\boldsymbol{\theta}|h\right)=\ell_{1c}\left( \boldsymbol{\mu},\boldsymbol{\Sigma},\boldsymbol{\gamma}\right)+\ell_{2c} \left(\lambda,\chi,\psi\right)-P_{h}\left(\boldsymbol{\gamma},\lambda,\chi, \psi\right), \tag{9}\]
where
\[\ell_{1c}\left(\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma}\right)= \sum_{i=1}^{n}\biggl{[}-\frac{d}{2}\ln\left(2\pi\right)-\frac{d}{2} \ln\left(w_{i}\right)-\frac{1}{2}\ln|\mathbf{\Sigma}|-\frac{\delta\left(\mathbf{x}_{i}; \mathbf{\mu},\mathbf{\Sigma}\right)}{2w_{i}}+\] \[+\left(\mathbf{x}_{i}-\mathbf{\mu}\right)^{\prime}\mathbf{\Sigma}^{-1}\mathbf{ \gamma}-\frac{w_{i}}{2}\mathbf{\gamma}^{\prime}\mathbf{\Sigma}^{-1}\mathbf{\gamma}\biggr{]}, \tag{10}\]
and
\[\ell_{2c}\left(\lambda,\chi,\psi\right)=\sum_{i=1}^{n}\left\{ \left(\lambda-1\right)\ln\left(w_{i}\right)-\frac{1}{2}\frac{\chi}{w_{i}}- \frac{1}{2}\psi w_{i}-\frac{1}{2}\lambda\ln\left(\chi\right)+\frac{1}{2} \lambda\ln\left(\psi\right)-\ln\left[2K_{\lambda}\left(\sqrt{\chi\psi}\right) \right]\right\}. \tag{11}\]
Working on \(\ell_{\mathrm{pen},c}\left(\mathbf{\theta}|h\right)\), we adopt the expectation-conditional maximization either (ECME) algorithm (Liu and Rubin, 1994). The ECME algorithm is an extension of the expectation-conditional maximum (ECM) algorithm which, in turn, is an extension of the EM algorithm (McLachlan and Krishnan, 2007). The ECM algorithm replaces the M-step of the EM algorithm by a number of computationally simpler conditional maximization (CM) steps. The ECME algorithm generalizes the ECM algorithm by conditionally maximising on some or all of the CM-steps the incomplete-data (penalised) log-likelihood. In our case, the ECME algorithm iterates between three steps, one E-step and two CM-steps, until convergence. The two CM-steps arise from the partition of \(\mathbf{\theta}\) as \(\left\{\mathbf{\theta}_{1},\mathbf{\theta}_{2}\right\}\), where \(\mathbf{\theta}_{1}=\left\{\mathbf{\mu},\mathbf{\Sigma}\right\}\) and \(\mathbf{\theta}_{2}=\left\{\mathbf{\gamma},\lambda,\chi,\psi\right\}\). The partition is chosen in such a way that all the parameters in the penalization function \(P_{h}\left(\cdot\right)\) belongs to \(\mathbf{\theta}_{2}\).
Below, we outline the generic iteration of the ECME algorithm. As in Melnykov and Zhu (2018, 2019), quantities/parameters marked with one dot will correspond to the previous iteration and those marked with two dots will represent the estimates at the current iteration.
### E-Step
The E-step is only needed for the first CM-step of the algorithm - where we update \(\mathbf{\theta}_{1}\) - and requires the calculation of
\[Q\left(\mathbf{\theta}_{1},\dot{\mathbf{\theta}}_{2}|\dot{\mathbf{\theta}} \right)=Q_{1}\left(\mathbf{\mu},\mathbf{\Sigma},\dot{\mathbf{\gamma}}|\dot{\mathbf{\theta}} \right)+C, \tag{12}\]
the conditional expectation of \(\ell_{\mathrm{pen},c}\left(\mathbf{\theta}\left|h\right.\right)\) given the observed data, using the current fit \(\dot{\mathbf{\theta}}\) for \(\mathbf{\theta}\), with \(\mathbf{\theta}_{2}\) fixed at \(\dot{\mathbf{\theta}}_{2}\) and where \(C\) is a constant not involving parameters inside \(\mathbf{\theta}_{1}\). In (12),
\(Q_{1}\left(\mathbf{\mu},\mathbf{\Sigma},\dot{\mathbf{\gamma}}|\dot{\mathbf{\theta}}\right)\) is the conditional expectation of \(\ell_{1c}\left(\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma}\right)\) in (9).
To compute \(Q\left(\mathbf{\theta}_{1},\dot{\mathbf{\theta}}_{2}|\dot{\mathbf{\theta}}\right)\) we need to replace any function \(m\left(W_{i}\right)\) of the latent variable \(W_{i}\) which appears in (10), provided that it is related with either \(\mathbf{\mu}\) or \(\mathbf{\Sigma}\), by \(E_{\dot{\mathbf{\theta}}}\left[m\left(W_{i}\right)|\mathbf{X}_{i}=\mathbf{x}_{i}\right]\), where the expectation (as it can be noted by the subscript) is taken using the current fit \(\dot{\mathbf{\theta}}\) for \(\mathbf{\theta}\), \(i=1,\ldots,n\). In particular, the functions satisfying these requirements, involved in (10), are \(m_{1}(w)=w\) and \(m_{2}(w)=1/w\). To calculate the expectations of \(m_{1}\) and \(m_{2}\) we first note that
\[W_{i}|\mathbf{X}_{i}=\mathbf{x}_{i}\sim\mathcal{GIG}\left(\lambda-\frac{d}{2},\delta \left(\mathbf{x}_{i};\mathbf{\mu},\mathbf{\Sigma}\right)+\chi,\mathbf{\gamma}^{\prime}\mathbf{ \Sigma}^{-1}\mathbf{\gamma}+\psi\right).\]
Therefore, according to (20) and (21), respectively, we need to compute the following quantities
\[\dot{v}_{i} \coloneqq\mathrm{E}_{\dot{\mathbf{\theta}}}\left(W_{i}|\mathbf{X}_{i}= \mathbf{x}_{i}\right)\] \[=\sqrt{\frac{\delta\left(\mathbf{x}_{i};\dot{\mathbf{\mu}},\dot{\mathbf{ \Sigma}}\right)+\dot{\chi}}{\dot{\psi}}}\frac{K_{\dot{\lambda}-\frac{d}{2}+1} \left\{\sqrt{\dot{\psi}\left[\delta\left(\mathbf{x}_{i};\dot{\mathbf{\mu}},\dot{\mathbf{ \Sigma}}\right)+\dot{\chi}\right]}\right\}}{K_{\dot{\lambda}-\frac{d}{2}}\left\{ \sqrt{\dot{\psi}\left[\delta\left(\mathbf{x}_{i};\dot{\mathbf{\mu}},\dot{\mathbf{\Sigma} }\right)+\dot{\chi}\right]}\right\}} \tag{13}\] \[\dot{u}_{i} \coloneqq\mathrm{E}_{\dot{\mathbf{\theta}}}\left(W_{i}^{-1}|\mathbf{X}_{i }=\mathbf{x}_{i}\right)\] \[=\sqrt{\frac{\dot{\psi}}{\delta\left(\mathbf{x}_{i};\dot{\mathbf{\mu}}, \dot{\mathbf{\Sigma}}\right)+\dot{\chi}}}\frac{K_{\dot{\lambda}-\frac{d}{2}+1} \left\{\sqrt{\dot{\psi}\left[\delta\left(\mathbf{x}_{i};\dot{\mathbf{\mu}},\dot{\mathbf{ \Sigma}}\right)+\dot{\chi}\right]}\right\}}{K_{\dot{\lambda}-\frac{d}{2}}\left\{ \sqrt{\dot{\psi}\left[\delta\left(\mathbf{x}_{i};\dot{\mathbf{\mu}},\dot{\mathbf{\Sigma}} \right)+\dot{\chi}\right]}\right\}}-\frac{2\left(\dot{\lambda}-\frac{d}{2} \right)}{\delta\left(\mathbf{x}_{i};\dot{\mathbf{\mu}},\dot{\mathbf{\Sigma}}\right)+\dot {\chi}}. \tag{14}\]
Then, by substituting \(w_{i}\) with \(\dot{v}_{i}\) and \(1/w_{i}\) with \(\dot{u}_{i}\) in \(\ell_{1c}\left(\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma}\right)\), we obtain
\[Q_{1}\left(\mathbf{\mu},\mathbf{\Sigma},\dot{\mathbf{\gamma}}|\dot{\mathbf{\theta}}\right)= \sum_{i=1}^{n}\left[-\frac{1}{2}\ln|\mathbf{\Sigma}|-\frac{\dot{u}_{i}}{2}\delta \left(\mathbf{x}_{i};\mathbf{\mu},\mathbf{\Sigma}\right)+\left(\mathbf{x}_{i}-\mathbf{\mu}\right) ^{\prime}\mathbf{\Sigma}^{-1}\dot{\mathbf{\gamma}}-\frac{\dot{v}_{i}}{2}\dot{\mathbf{ \gamma}}^{\prime}\mathbf{\Sigma}^{-1}\dot{\mathbf{\gamma}}\right], \tag{15}\]
where we dropped the terms which are constant with respect to \(\mathbf{\mu}\) and \(\mathbf{\Sigma}\).
### CM-step 1
The first CM-step requires the calculation of \(\ddot{\mathbf{\theta}}_{1}\) as the value of \(\mathbf{\theta}_{1}\) that maximizes \(Q_{1}\left(\mathbf{\mu},\mathbf{\Sigma},\dot{\mathbf{\gamma}}|\dot{\mathbf{\theta}}\right)\) in (15), with \(\mathbf{\theta}_{2}\) fixed at \(\dot{\mathbf{\theta}}_{2}\). After simple algebra, we obtain the following updates
\[\ddot{\mathbf{\mu}}= \frac{1}{n\ddot{\mathbf{u}}}\left(\sum_{i=1}^{n}\dot{u}_{i}\mathbf{x}_{i }-\dot{\mathbf{\gamma}}\right),\] \[\ddot{\mathbf{\Sigma}}= \left|\ddot{\mathbf{\Sigma}}^{*}\right|^{-\frac{1}{d}}\ddot{\mathbf{ \Sigma}}^{*} \tag{16}\]
where
\[\ddot{\mathbf{\Sigma}}^{*}=\frac{1}{n}\sum_{i=1}^{n}\dot{u}_{i}\left(\mathbf{x}_{i}-\ddot{ \mathbf{\mu}}\right)\left(\mathbf{x}_{i}-\ddot{\mathbf{\mu}}\right)^{\prime}-\left(\bar{\bm {x}}-\ddot{\mathbf{\mu}}\right)\dot{\mathbf{\gamma}}^{\prime}-\dot{\mathbf{\gamma}}\left( \bar{\mathbf{x}}-\ddot{\mathbf{\mu}}\right)^{\prime}+\dot{\bar{v}}\dot{\mathbf{\gamma}}\dot {\mathbf{\gamma}}^{\prime}, \tag{17}\]
\(\dot{\bar{u}}=\sum_{i=1}^{n}\dot{u}_{i}/n\), \(\dot{\bar{v}}=\sum_{i=1}^{n}\dot{v}_{i}/n\), and \(\bar{\mathbf{x}}=\sum_{i=1}^{n}\mathbf{x}_{i}/n\). In (16), the scalar \(\left|\ddot{\mathbf{\Sigma}}^{*}\right|^{-\frac{1}{d}}\) is needed to ensure the identifiability constraint \(\left|\ddot{\mathbf{\Sigma}}\right|=1\).
### CM-step 2
In the second CM-step, given \(h\), we choose the value of \(\mathbf{\theta}_{2}\) that maximizes \(\ell_{\text{pen}}\left(\mathbf{\theta}|h\right)\) in (8), with \(\mathbf{\theta}_{1}\) fixed at \(\ddot{\mathbf{\theta}}_{1}\). As a closed-form solution for \(\ddot{\mathbf{\theta}}_{2}\) is not analytically available, numerical optimization is needed, and any general-purpose optimizer can be used with this aim. Operationally, we perform an unconstrained maximization on \(I\!\!R^{d+3}\), based on a (log/exp) transformation/back-transformation approach for \(\chi\) and \(\phi\), via the general-purpose optimizer optim() for R, included in the **stats** package. In analogy with Bagnato and Punzo (2021), we try two different commonly used algorithms for maximization: Nelder-Mead, which is derivatives-free, and BFGS which uses (numerical) second-order derivatives. They can be passed to optim() via the argument method. Once the two algorithms are run, we take the best solution in terms of \(\ell_{\text{pen}}\left(\mathbf{\theta}|h\right)\); see, e.g., Punzo and Bagnato (2021) for a comparison of the two algorithms, in terms of parameter recovery and computational time, for ML estimation. The choice to run both the algorithms is motivated by two facts: 1) sometimes the algorithms do not provide the same solution, and 2) it can happen that an algorithm does not reach convergence.
### Selecting the penalty parameter
The choice of the penalty parameter \(h\) has got direct consequences on the estimation of \(\mathbf{\theta}\) and, as a sub-product, on the selection of the best model in Figure 1. As a data-driven method to select \(h\), we consider a simple grid-search partial leave-one-out likelihood cross-validation (LCV) strategy (Stone, 1974); where the term "grid-search" refers to the fact that the LCV statistic is only evaluated on a convenient grid of values, while the term "partial" refers to the fact that we only allow to a proportion \(p\) of the sample to be left out one unit at a time. These choices
are motivated by the need to speed-up the computation that, otherwise, would be too much computationally cumbersome.
In detail, we consider the LCV statistic
\[\text{LCV}_{p}\left(h\right)=\frac{1}{\left\lfloor pn\right\rfloor}\sum_{ \boldsymbol{x}_{i}\in S_{\left\lfloor pn\right\rfloor}}\ln\left[f\left( \boldsymbol{x}_{i};\widehat{\boldsymbol{\theta}}_{h,S_{n}\setminus\{ \boldsymbol{x}_{i}\}}\right)\right], \tag{18}\]
where \(S_{\left\lfloor pn\right\rfloor}\subseteq S_{n}\) is the sub-sample, of size \(\left\lfloor pn\right\rfloor\), which is allowed to be left out, and \(\widehat{\boldsymbol{\theta}}_{h,S_{n}\setminus\{\boldsymbol{x}_{i}\}}\) is the penalised ML estimate of \(\boldsymbol{\theta}\), with penalty parameter \(h\), obtained on \(S_{n}\backslash\{\boldsymbol{x}_{i}\}\) (refer to Section 4). For each value of \(h\) in a pre-specified grid \(G\), we first compute \(\text{LCV}_{p}\left(h\right)\); then, we select the value of \(h\) in correspondence to the maximum value of this statistic.
## 5 Simulation study
In this section, we describe the results of a simulation study conducted with the aim of investigating the ability of our multiple-choice LASSO procedure in discovering the true data generating model (DGM) among those in Figure 1.
For each of the following DGMs we consider 50 randomly generated datasets, of size \(n=1000\), with \(d=2\) dimensions. The DGMs considered are: normal (N), \(t\), Cauchy (C), Laplace (L), symmetric generalised hyperbolic (SGH), skew-\(t\) (S\(t\)), variance gamma (VG), and asymmetric Laplace (AL). The DGMs share the same location parameter \(\boldsymbol{\mu}=\boldsymbol{0}\) and scale matrix \(\boldsymbol{\Sigma}=\boldsymbol{I}\), with \(\boldsymbol{I}\) denoting the identity matrix. We fix \(\boldsymbol{\gamma}=\left(-0.5,0.8\right)^{\prime}\) for the skewed DGMs (S\(t\), VG, and AL). Parameters \(\lambda\), \(\chi\), and \(\psi\) vary according to the considered DGM; Table 1 provides the precise values of these parameters for each.
We use our penalised ML procedure on each generated dataset. We select the penalty parameter \(h\) with the LCV strategy described in Section 4.4, using the grid \(G=\{0,5,10,15,20,25,30,35,40,45,50,60,70,80,100\}\) and a proportion \(p=0.1\) of observations which are allowed to be left out one at a time.
Table 2 shows the number of times our multiple-choice LASSO method selects each model in our family of models. Here, there are some models that are fitted to the data but they are not used as DGMs; these models are the normal-inverse Gaussian (NIG), hyperbolic (H), hyperbolic
univariate marginals (HUM), symmetric normal-inverse Gaussian (SNIG), symmetric variance gamma (SVG), symmetric hyperbolic (SH), skew-Cauchy (SC), and generalized hyperbolic (GH). Results are organised as a contingency table where the true DGM is given by column and the models in the GH-family by row. The shadowed cells report the true positive count (TPC), measuring the number of times over the replicates that the multiple-choice LASSO approach discovers the true DGM. We can note how, regardless of the DGM, our approach is able enough to recognize the true underlying DGM, being the counts mainly concentrated on the shadowed cells. The best results are obtained for the \(t\)-DGM, where the TCP is the maximum possible (50). On the opposite side, the worst results are obtained for the N-DGM, where TCP \(=42\); in the remaining 8 cases, the more general skew-\(t\) distribution is selected.
## 6 Concluding remarks
In this work we have put forward a taxonomy of the GH family, and showed how one can perform simultaneous estimation and selection of nested models within the family. We argue that the GH family is flexible enough to fit well a wide range of distributions in real applications, and that the model selection procedure is effective in providing a simple and interpretable model class without sacrificing goodness of fit. We also have introduced the multiple-choice LASSO. We believe adaptive choice of the shape parameters within the GH family is only one of the possible applications of the multiple-choice LASSO, and that its theoretical properties deserve further investigation. Additionally, there are other flexible and general parametric families of
\begin{table}
\begin{tabular}{l c c c c c c} \hline & \multicolumn{6}{c}{DGM} \\ \cline{2-7} Parameter & N & \(t\), St & C & L, AL & SGH & VG \\ \hline \(\lambda\) & \(-20\) & \(-1\) & \(-0.5\) & \(1\) & \(-1\) & \(1.5\) \\ \(\chi\) & \(100\) & \(2\) & \(2\) & \(0.001\) & \(2\) & \(0.001\) \\ \(\psi\) & \(0.001\) & \(0.001\) & \(0.001\) & \(0.5\) & \(3\) & \(0.5\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameters \(\lambda\), \(\chi\) and \(\psi\) of the DGMs used in the simulation study.
distributions that might benefit from an approach similar to the one proposed in this work (e.g., Geraci and Farcomeni, 2020).
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{8}{c}{DGM} \\ \cline{2-10} Fitted & N & \(t\) & C & L & SGH & S\(t\) & AL & VG \\ \hline N & **42** & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(t\) & 0 & **50** & 1 & 0 & 4 & 0 & 0 & 0 \\ C & 0 & 0 & **49** & 0 & 0 & 0 & 0 & 0 \\ L & 0 & 0 & 0 & **46** & 0 & 0 & 0 & 0 \\ SGH & 0 & 0 & 0 & 0 & **45** & 0 & 0 & 0 \\ S\(t\) & 8 & 0 & 0 & 0 & 0 & **49** & 0 & 0 \\ AL & 0 & 0 & 0 & 0 & 0 & **44** & 0 \\ VG & 0 & 0 & 0 & 0 & 0 & 0 & 3 & **48** \\ NIG & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ H & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ HUM & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ SNIG & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ SVG & 0 & 0 & 0 & 3 & 1 & 0 & 0 & 0 \\ SH & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ SC & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ GH & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of times the multiple-choice LASSO approach selects each model. The true DGM is shown by column, while the models in the GH-family are given by row.
## Appendix A Generalised inverse Gaussian distribution
The random variable \(W\) has a generalised inverse Gaussian (GIG) distribution if its pdf is
\[f_{\mbox{\tiny{GIG}}}\left(w;\lambda,\chi,\psi\right)=\left(\frac{\psi}{\chi} \right)^{\frac{\lambda}{2}}\frac{w^{\lambda-1}}{2K_{\lambda}\left(\sqrt{\psi} \chi\right)}\exp\left[-\frac{1}{2}\left(\psi w+\frac{\chi}{w}\right)\right], \qquad w>0, \tag{19}\]
where the parameters satisfy the conditions: \(\chi>0\) and \(\psi\geq 0\), if \(\lambda<0\); \(\chi>0\) and \(\psi>0\), if \(\lambda=0\); \(\chi\geq 0\) and \(\psi>0\), if \(\lambda>0\). If \(W\) has the pdf in (19), then we simply write \(W\sim\mathcal{GIG}\left(\lambda,\chi,\psi\right)\). The expectations of \(W\) and \(1/W\), used in Section 4.1, are
\[\mbox{E}\left(W\right)=\sqrt{\frac{\chi}{\psi}}\frac{K_{\lambda+1}(\sqrt{\psi \chi})}{K_{\lambda}(\sqrt{\psi\chi})} \tag{20}\]
and
\[\mbox{E}\left(\frac{1}{W}\right)=\sqrt{\frac{\psi}{\chi}}\frac{K_{\lambda+1}( \sqrt{\psi\chi})}{K_{\lambda}(\sqrt{\psi\chi})}-\frac{2\lambda}{\chi}. \tag{21}\]
## Appendix B Special and limiting cases of the GH distribution
### \(\mbox{GH}\rightarrow\mbox{Skew-}t\rightarrow\mbox{$t\rightarrow$ Gaussian}\)
If \(\lambda<0\) and \(\psi\to 0\), then \(W\sim\mathcal{GIG}\left(\lambda,\chi,\psi\right)\) tends to \(W\sim\mathcal{IG}\left(-\lambda,\frac{\chi}{2}\right)\), where \(\mathcal{IG}\left(\cdot\right)\) denotes the inverse gamma distribution. Therefore, the NMVM representation in (2) becomes
\[\mathbf{X}=\mathbf{\mu}-V\frac{\chi}{2\lambda}\mathbf{\gamma}+\sqrt{V}\mathbf{U},\]
where \(V=-\frac{2\lambda}{\chi}W\sim\mathcal{IG}\left(-\lambda,-\lambda\right)\) and \(\mathbf{\bar{U}}\sim\mathcal{N}_{d}\left(\mathbf{0},-\frac{\chi}{2\lambda}\mathbf{\Sigma}\right)\), with \(\left|\mathbf{\Sigma}\right|=1\). Note that, thanks to the multiplicative factor \(-\chi/\left(2\lambda\right)\), \(\left|\mbox{Cov}\left(\mathbf{\bar{U}}\right)\right|=\left[-\chi/\left(2\lambda \right)\right]^{d}\left|\mathbf{\Sigma}\right|=\left[-\chi/\left(2\lambda\right) \right]^{d}\) can be any positive real number. Under this setting, \(\mathbf{X}\sim\mathcal{S}t_{d}\left(\mathbf{\mu},-\frac{\chi}{2\lambda}\mathbf{\Sigma},- \frac{\chi}{2\lambda}\mathbf{\gamma},-2\lambda\right)\), which represents a skew-\(t\) distribution with location parameter \(\mathbf{\mu}\), scale matrix \(-\frac{\chi}{2\lambda}\mathbf{\Sigma}\), skewness parameter \(-\frac{\chi}{2\lambda}\mathbf{\gamma}\), and \(\nu=-2\lambda\) degrees of freedom (Hu, 2005; Murray et al., 2014). Compared to the GH-parametrization adopted by McNicholas (2016), in our case, because of the identifiability constraint \(\left|\mathbf{\Sigma}\right|=1\), there is no reason to force \(\chi\) and \(\lambda\) to be related as \(\chi=\nu=-2\lambda\). In other words,
with our parametrization, \(\chi\) is unconstrained. Indeed, if we impose the constraint \(\chi=\nu=-2\lambda\) with our parametrization, then we would get \(\left|\mathrm{Cov}\left(\bar{\mathbf{U}}\right)\right|=1\). If, in addition, \(\mathbf{\gamma}=\mathbf{0}\), then \(\mathbf{X}\sim t_{d}\left(\mathbf{\mu},-\frac{\chi}{2\lambda}\mathbf{\Sigma},-2\lambda\right)\), which represents a \(t\) distribution with location parameter \(\mathbf{\mu}\), scale matrix \(-\frac{\chi}{2\lambda}\mathbf{\Sigma}\), and \(\nu=-2\lambda\) degrees of freedom. Finally, if we further consider \(\lambda=-\chi/\left(2c\right)\), with \(c>0\), and \(\chi\rightarrow\infty\), then we obtain \(\mathbf{X}\sim\mathcal{N}_{d}\left(\mathbf{0},c\mathbf{\Sigma}\right)\) as a limiting case.
(\mathbf{GH}\rightarrow\mathbf{Variance\ Gamma}\rightarrow\mathbf{ Asymmetric\ Laplace}\rightarrow\mathbf{ Laplace}\)
If \(\lambda>0\) and \(\chi\to 0\), then \(W\sim\mathcal{GIG}\left(\lambda,\chi,\psi\right)\) tends to \(W\sim\mathcal{G}\left(\lambda,\frac{\psi}{2}\right)\), where \(\mathcal{G}\left(\cdot\right)\) denotes the gamma distribution. Then, the NMVM representation in (2) becomes
\[\mathbf{X}=\mathbf{\mu}+V\frac{\psi}{2\lambda}\mathbf{\gamma}+\sqrt{V}\bar{\mathbf{U}},\]
where \(V=\frac{2\lambda}{\psi}W\sim\mathcal{G}\left(\lambda,\lambda\right)\) and \(\bar{\mathbf{U}}\sim\mathcal{N}_{d}\left(\mathbf{0},\frac{\psi}{2\lambda}\mathbf{\Sigma}\right)\), with \(\left|\mathbf{\Sigma}\right|=1\). Note that, thanks to the multiplicative factor \(\psi/\left(2\lambda\right)\), can be any positive real number. Under this setting, \(\mathbf{X}\sim\mathcal{VG}_{d}\left(\mathbf{\mu},\frac{\psi}{2\lambda}\mathbf{\Sigma}, \frac{\psi}{2\lambda}\mathbf{\gamma},\lambda\right)\), which represents a variance gamma distribution with location parameter \(\mathbf{\mu}\), scale matrix \(\frac{\psi}{2\lambda}\mathbf{\Sigma}\), skewness parameter \(\frac{\psi}{2\lambda}\mathbf{\gamma}\), and shape parameter \(\lambda\)(Nitithumbundit and Chan, 2020). Compared to the VG-parametrization adopted by Nitithumbundit and Chan (2020) and McNicholas (2016), in our case, because of the identifiability constraint \(\left|\mathbf{\Sigma}\right|=1\), there is no reason to force \(\psi\) and \(\lambda\) to be related as \(\psi=2\lambda\). In other words, with our parametrization, \(\psi\) is unconstrained. Indeed, if we impose the constraint \(\psi=2\lambda\) with our parametrization, then we would get \(\left|\mathrm{Cov}\left(\bar{\mathbf{U}}\right)\right|=1\). If, in addition, \(\lambda=1\), then \(V\sim\mathcal{E}\left(1\right)\), which is a standard exponential distribution, and \(\mathbf{X}\sim\mathcal{AL}_{d}\left(\mathbf{\mu},\frac{\psi}{2}\mathbf{\Sigma},\frac{\psi }{2}\mathbf{\gamma}\right)\), which represents an asymmetric Laplace distribution with location parameter \(\mathbf{\mu}\), scale matrix \(\frac{\psi}{2}\mathbf{\Sigma}\), and skewness parameter \(\frac{\psi}{2}\mathbf{\gamma}\); see Kozubowski and Podgorski (2000) and Morris et al. (2019). Finally, if we further consider \(\mathbf{\gamma}=\mathbf{0}\), then \(\mathbf{X}\sim\mathcal{L}_{d}\left(\mathbf{\mu},\frac{\psi}{2}\mathbf{\Sigma}\right)\), which represents a Laplace distribution with location parameter \(\mathbf{\mu}\) and scale matrix \(\frac{\psi}{2}\mathbf{\Sigma}\); see Kozubowski and Podgorski (2000).
### GH \(\rightarrow\) Normal-Inverse Gaussian \(\rightarrow\) Skew-Cauchy \(\rightarrow\) Cauchy
If \(\lambda=-1/2\), then \(\mathbf{X}\sim\mathcal{NIG}_{d}\left(\mathbf{\mu},\mathbf{\Sigma},\mathbf{\gamma},\chi,\psi\right)\), which denotes the normal-inverse Gaussian distribution with location parameter \(\mathbf{\mu}\), scale matrix \(\mathbf{\Sigma}\), skewness parameter \(\mathbf{\gamma}\), and concentration parameters \(\chi\) and \(\psi\)(O'Hagan et al., 2016). If, in addition, \(\psi\to 0\), then \(\mathbf{X}\sim\mathcal{SC}_{d}\left(\mathbf{\mu},\chi\mathbf{\Sigma},\chi\mathbf{\gamma}\right)\), which represents the skew-Cauchy distribution with with location parameter \(\mathbf{\mu}\), scale matrix \(\chi\mathbf{\Sigma}\), and skewness parameter \(\chi\mathbf{\gamma}\)(Cabral et al., 2012). Note that, \(\mathcal{SC}_{d}\left(\mathbf{\mu},\chi\mathbf{\Sigma},\chi\mathbf{\gamma}\right)\) can be also obtained as a special case of \(\mathcal{S}t_{d}\left(\mathbf{\mu},-\frac{\chi}{2\lambda}\mathbf{\Sigma},-\frac{\chi }{2\lambda}\mathbf{\gamma},-2\lambda\right)\) when \(\lambda=-1/2\); refer to Section B.1. Finally, if we further consider \(\mathbf{\gamma}=\mathbf{0}\), then \(\mathbf{X}\sim\mathcal{C}_{d}\left(\mathbf{\mu},\chi\mathbf{\Sigma}\right)\), which represents a Cauchy distribution with location parameter \(\mathbf{\mu}\) and scale matrix \(\chi\mathbf{\Sigma}\).
|
2303.02207 | Lightweight, Uncertainty-Aware Conformalized Visual Odometry | Data-driven visual odometry (VO) is a critical subroutine for autonomous edge
robotics, and recent progress in the field has produced highly accurate point
predictions in complex environments. However, emerging autonomous edge robotics
devices like insect-scale drones and surgical robots lack a computationally
efficient framework to estimate VO's predictive uncertainties. Meanwhile, as
edge robotics continue to proliferate into mission-critical application spaces,
awareness of model's the predictive uncertainties has become crucial for
risk-aware decision-making. This paper addresses this challenge by presenting a
novel, lightweight, and statistically robust framework that leverages conformal
inference (CI) to extract VO's uncertainty bands. Our approach represents the
uncertainties using flexible, adaptable, and adjustable prediction intervals
that, on average, guarantee the inclusion of the ground truth across all
degrees of freedom (DOF) of pose estimation. We discuss the architectures of
generative deep neural networks for estimating multivariate uncertainty bands
along with point (mean) prediction. We also present techniques to improve the
uncertainty estimation accuracy, such as leveraging Monte Carlo dropout
(MC-dropout) for data augmentation. Finally, we propose a novel training loss
function that combines interval scoring and calibration loss with traditional
training metrics--mean-squared error and KL-divergence--to improve
uncertainty-aware learning. Our simulation results demonstrate that the
presented framework consistently captures true uncertainty in pose estimations
across different datasets, estimation models, and applied noise types,
indicating its wide applicability. | Alex C. Stutts, Danilo Erricolo, Theja Tulabandhula, Amit Ranjan Trivedi | 2023-03-03T20:37:55Z | http://arxiv.org/abs/2303.02207v1 | # Lightweight, Uncertainty-Aware Conformalized Visual Odometry
###### Abstract
Data-driven visual odometry (VO) is a critical subroutine for autonomous edge robotics, and recent progress in the field has produced highly accurate point predictions in complex environments. However, emerging autonomous edge robotics devices like insect-scale drones and surgical robots lack a computationally efficient framework to estimate VO's predictive uncertainties. Meanwhile, as edge robotics continue to proliferate into mission-critical application spaces, awareness of model's the predictive uncertainties has become crucial for risk-aware decision-making. This paper addresses this challenge by presenting a novel, lightweight, and statistically robust framework that leverages conformal inference (CI) to extract VO's uncertainty bands. Our approach represents the uncertainties using flexible, adaptable, and adjustable prediction intervals that, on average, guarantee the inclusion of the ground truth across all degrees of freedom (DOF) of pose estimation. We discuss the architectures of generative deep neural networks for estimating multivariate uncertainty bands along with point (mean) prediction. We also present techniques to improve the uncertainty estimation accuracy, such as leveraging Monte Carlo dropout (MC-dropout) for data augmentation. Finally, we propose a novel training loss function that combines interval scoring and calibration loss with traditional training metrics-mean-squared error and KL-divergence-to improve uncertainty-aware learning. Our simulation results demonstrate that the presented framework consistently captures true uncertainty in pose estimations across different datasets, estimation models, and applied noise types, indicating its wide applicability.
## I Introduction
As machine learning (ML) becomes more prevalent in privacy, safety, and mission-critical robotics, the ability to quantitatively and visually assess predictive uncertainties of ML models is becoming essential for risk-aware control and decision-making. Two types of uncertainty can result from data-driven learning: _epistemic_ and _aleatoric_. Epistemic uncertainty arises from the variance in the training data and can often be reduced with more data. On the other hand, aleatoric uncertainty arises from random distortions in the data, such as blurriness, occlusions, overexposure, _etc._, which additional training data cannot mitigate.
Although mitigating epistemic uncertainty is challenging, detecting, explaining, and handling aleatoric uncertainty is even more difficult. Therefore, for mission-critical risk-aware robotics, it is desired that predictive models under aleatoric uncertainty must provide confidence measures and express input/model-dependent predictive uncertainties, along with the point (mean) prediction. Additionally, the uncertainty estimates must be extracted with minimal additional computing cost since many edge robotic systems have limited computing and storage capacity due to cost, footprint, legacy design, battery power, and other factors.
Meanwhile, given their computational intensity, traditional methods for uncertainty-aware predictions, such as Bayesian ML models, are unsuited for edge robotics. For example, in Bayesian ML models, a posterior distribution of weights is learned from the training data using Bayesian principles. Weights are then sampled from the posterior, and the network output is statistically computed and weighed against each sample's probability. Generating uncertainty-aware predictions, thus, involves sampling many model parameters from the posterior and evaluating predictions at each sample, which can be impractical under typical time and resource constraints for various edge robotics applications.
A more computationally-efficient alternative to Bayesian inference is Variational inference (VI) [1]. VI considers a family of approximate densities \(F\), from which a member \(q^{(w)}\) is learned by minimizing the Kullback-Leibler (KL) divergence to the posterior. However, selecting a flexible family that closely matches the true posterior in practice can be challenging. Additionally, VI struggles to approximate complex posteriors with multiple modes or sharp peaks.
Recently, conformal inference (CI) of ML models, also known as conformal prediction, was developed to overcome the above limitations of traditional uncertainty-aware prediction frameworks [2, 3, 4, 5, 6, 7]. Unlike classical statistical inference, which relies on data distribution to capture pre
Fig. 1: **Lightweight uncertainty-aware visual odometry (VO):** In this paper, we present a lightweight framework for uncertainty-aware visual odometry for edge robotics where computing resources are limited while the predictions need to be made in real time. Our framework exploits _conformal inference_ and presents four novel methodologies to extract point prediction and upper/lower bounds under the designated uncertainty coverage.
diction uncertainty and can be sensitive to model misspecification, CI is a distribution-free approach that guarantees the statistical validity of uncertainty intervals given a finite training sample. CI can also assess the degree of conformity of each new observation to the available data and uses this information to construct an uncertainty interval calibrated to a user-specified coverage rate. Moreover, CI can be combined with any underlying model with an inherent notion of uncertainty to provide uncertainty quantification that is both statistically valid and model-agnostic. One of the challenges in making CI practical is that the uncertainty estimates are often conservative, and our work overcomes this and other limitations, providing a compelling case for their use in robotics applications with stringent resource usage requirements.
In particular, leveraging these advantages of CI for edge robotics, in this work, we investigate the conformalization of visual odometry (VO), an essential task for autonomous navigation that estimates the position and orientation (i.e., pose) of a camera mounted on a moving vehicle relative to the environment as the camera moves. The resulting motion estimates from VO are used for typical autonomy objectives such as three-dimensional (3D) reconstruction, mapping, localization, _etc._ Especially since VO relies solely on cameras, which are passive, low power, and have a small footprint, VO-based motion estimates are suitable for ego-motion tracking under stringent footprint and battery power constraints. Fig. 1 overview the proposed techniques.
Exploring CI for lightweight uncertainty estimates in VO, our work makes the following key contributions:
* We present _four frameworks_ for extracting VO-uncertainty bands by univariate and multivariate conformalization. The presented frameworks vary in computing workload, uncertainty accuracy, interval adaptiveness, data dependency, _etc._, aiming to provide an entire spectrum of solutions for extracting VO's uncertainty estimates under varying computing resources, training data, and timing constraints for edge robotics.
* We present a novel loss function for joint training of point (mean) prediction and heteroskedastic uncertainty bounds. The loss function combines an interval score function and combined calibration loss function with mean squared error (MSE) loss and Kullback-Leibler (KL) divergence. Our novel loss function improves the accuracy of uncertainty coverage and enables computationally efficient predictive architecture for the simultaneous extraction of point predictions and uncertainties.
* We also discuss data augmentation for uncertainty-aware VO by providing additional training set by Monte Carlo Dropout (MC-Dropout), thus enabling the distillation of uncertainty estimates from sophisticated, computationally expensive frameworks to conformal quantile bounds.
## II Related Works
Under VO, the camera's ego motion is estimated based on the changes in visual information captured by the camera. The traditional methods utilized techniques that involved identifying and tracking distinctive visual features, such as corners or edges, for VO [8]. Direct methods [9] estimated the camera's motion by analyzing the changes in the intensity values of the pixels in the images captured by the camera. Hybrid methods for VO combined feature-based and direct methods; for example, feature-based techniques were used for initial camera motion and then refined using direct methods [10]. Structure from motion (SfM) is popularly used for simultaneously estimating the 3D structure of the environment and camera's motion [11].
A recent trend for VO is to use deep neural networks (DNN) to directly learn the relation between the camera's visual field and its ego-motion from data. PoseNet [12] demonstrated the feasibility of training convolutional neural networks for end-to-end tracking of the ego-motion of monocular cameras. PoseLSTM [13] utilized long-short-term memories (LSTM) to improve ego-motion accuracy compared to frame-based methods such as PoseNet. DeepVO [14] demonstrated the combined strength of recurrent convolutional neural networks (RCNNs) in learning features and modeling sequential relationships between consecutive camera frames. UnDeepVO [15] proposed an unsupervised deep learning approach capable of absolute scale recovery.
While many prior studies have focused on improving the accuracy of VO, only a few have addressed the challenge of extracting its predictive uncertainties, especially under time and computing resource constraints; for instance, integrated deep learning-based depth predictions with particle filtering to achieve uncertainty-aware visual localization [16]. However, particle filtering-based uncertainty quantification can be computationally expensive for large or high-dimensional state spaces. Kalman filters are a more computationally efficient alternative to particle filters, but their usage is limited to linear systems with Gaussian noise assumptions. By integrating Monte-Carlo Dropout (MC-dropout) [17] with deep learning-based predictors, such as PoseNet, [18] showed uncertainty-aware VO estimations. However, the method requires performing a sufficient number of dropout iterations and evaluating predictions for each, making it challenging to implement under time and computing resource constraints. Besides, MC-dropout, as a basis for uncertainty, lacks statistical rigor and ultimately leads to conservative prediction intervals that are not particularly adaptable to the data. D3VO [19] also utilized Bayesian techniques for uncertainty quantification, demonstrating state-of-the-art accuracy but with a large computational expense. With similar computational overhead, MDN-VO [20] combined an RCNN and mixture density network for uncertainty estimations based on maximizing the likelihood of pose estimations across a sequence of images. In contrast, the authors in [21] presented a lightweight deterministic uncertainty estimator as a small neural network that could be applied to VO or any data-driven deep neural network model. The method uncovers spatial and semantic model uncertainty with significantly less computation but lacks statistical guarantees.
Unlike the above, our methods build on conformal inference, which can provide statistically valid uncertainty
intervals while not requiring heavy computational budgets. Combining CI with an underlying model with an inherent notion of uncertainty (e.g., conditional quantile regression) guarantees true value coverage within the uncertainty intervals even when the model's predictions are poor. However, CI alone is not highly adaptable to heteroskedastic data, as prediction sets can appear fixed and weakly dependent on the model's predictors. For example, conformal classification can produce rather conservative prediction sets given their sole reliance on softmax scores, as mentioned in [5]. For regression problems, conformalized quantile regression (CQR) was introduced in [22]. CQR produces adaptive and distribution-free prediction intervals with a guaranteed marginal coverage rate, such as 90%. While typical full conformal prediction assumes that samples are drawn exchangeably (i.e., conditionally, i.i.d, on the joint probability distribution function between input and labels), CQR departs from this assumption to achieve a finite sample coverage guarantee through a technique called split conformal prediction. Various forms of CI, including conformal classification and CQR, were initially developed for one-dimensional (1D) data. They have since been extended to multivariate cases, making them particularly valuable for our application to VO.
## III Extracting Univariate Bands of Uncertainties for Visual Odometry (VO)
This section introduces two methods for extracting predictive uncertainty bands in VO: univariate conformalized quantile regression and conformalized set prediction. The methods presented here are lightweight but produce rectangular uncertainty bands, which can be too conservative. The first method uses quantile regression on each coordinate of the position and orientation output vectors of VO, which are combined to produce an overall quantile region. The second method maps the regression problem in VO to a classification problem and performs a set prediction, which can produce disjoint uncertainty bands. The presented techniques are compared in Table I and discussed below:
### _VO-Uncertainty by Univariate CQR_
We employ conformalized quantile regression (CQR) [22] to extract univariate uncertainty bands for each output coordinate in VO. To test our approach, we use PoseNet [12] with a ResNet34-based feature extractor for conformalization. However, the presented techniques are generalizable to other feature-based and direct predictors for VO. The PoseNet model outputs translational elements \(x\), \(y\), and \(z\), as well as rotational elements \(p\), \(q\), and \(r\) of angle \(w\) in radians. To reduce the number of response variables from seven to six, we convert the orientation quaternion (\(w,p,q,r\)) to Euler angles (roll \(\phi\), pitch \(\theta\), yaw \(\psi\)). After pre-training, we apply CQR to obtain prediction intervals along each one-dimensional variable, which reveals the predictive uncertainty. Fig. 2 depicts the process for this method.
The main goal of CQR is to produce prediction intervals where the probability of true realizations falling within these intervals is guaranteed to be equivalent to or greater than some chosen coverage rate \(p\), i.e., \(\mathbb{P}\{Y\in C(X)\mid X=x\}\geq p\). This property is known as _marginal coverage_, and is calculated over all training and test set samples. To compute \(C(X)\), we use Random Forest-based quantile regression, which is less prone to over-fitting, requires fewer data and computing, and results in lower and upper bound percentiles that capture heteroskedasticity by expanding and contracting based on the underlying uncertainty in the prediction. The quantiles are then conformalized (corrected) by a calibration set that is split a priori from the training set. The following equations represent the three steps of the procedure:
\[\{Q_{l},Q_{h}\}\gets QR(\{(X_{i},Y_{i})\}),\ i\in I_{train} \tag{1}\]
\[E_{i}\leftarrow\max\{Q_{l}(X_{i})-Y_{i},Y_{i}-Q_{h}(X_{i})\},\ i\in I_{cal} \tag{2}\]
\[C_{cal}(X)=[Q_{l}(X)-Q_{cal}(E,I_{cal}), \tag{3a}\] \[Q_{h}(X)+Q_{cal}(E,I_{cal})],\ \text{where}\]
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Method** & **CQR** (Sec. III-A) & **CSP** (Sec. III-B) &
\begin{tabular}{c} **MCQR w/ MC-dropout** \\ (Sec. IV-A) \\ \end{tabular} & **CJP** (Sec. IV-B) \\ \hline Uncertainty Bands & Univariate & Univariate & Multivariate & Multivariate \\ \hline Disjoint Bands & No & Yes & No & No \\ \hline Computational Complexity & Low & Low & High & Medium \\ \hline Interval Calibration & Fixed & Fixed & Fixed & Tunable \\ \hline Interval Adaptiveness & Conservative & Semi-tunable & Flexible & Tunable \\ \hline Uncertainty Accuracy & Low & Medium & High & High \\ \hline Training Time & Short & Medium & Long & Medium \\ \hline Data Dependency & Low & High & High & Low \\ \hline \hline \end{tabular}
\end{table}
Table I: Qualitative comparison of proposed techniques on VO-uncertainty band extraction
Figure 2: **VO uncertainty estimation using univariate CQR:** Features of the input image and extracted and regressed to six-dimensional pose. Subsequently, each pose dimension’s predictions are conformalized using the held-out calibration data. Finally, multivariate uncertainty regions are predicted by multiplying 1D uncertainty bands.
\[Q_{cal}(E,I_{cal}):=(1-\alpha)(1+1/|I_{cal}|)-\text{th} \tag{3b}\] \[\text{quantile of }\{E_{i}:i\in I_{cal}\}\]
Here, given a certain miscoverage rate \(\alpha\), the quantile regression algorithm (\(QR\)) fits two conditional quantile functions \(Q_{l}\) and \(Q_{h}\) on the training set \(I_{train}\), which contains samples \(X\) and labels \(Y\). The effectiveness of the initial prediction interval \([Q_{l}(X),Q_{h}(X)]\) in covering \(Y_{i}\) is then measured using conformal scores \(E_{i}\), which are evaluated on the held out calibration set \(I_{cal}\). If \(Y_{i}\) is outside the boundaries, \(E_{i}\) measures the distance from the nearest boundary. If \(Y_{i}\) falls within the desired boundaries, \(E_{i}\) measures the larger of the two distances, accounting for both undercoverage and overcoverage. Finally, the calibrated prediction interval \(C_{cal}\) is formed using \(Q_{cal}(E,I_{cal})\), which is the empirical \((1-\alpha)(1+1/|I_{cal}|)\)-th quantile of \(E_{i}\), \(i\in I_{cal}\). Using a calibration set ensures that the resulting intervals have the desired miscoverage rate.
### _VO-Uncertainty by Conformalized Set Prediction (CSP)_
This section describes our method, conformalized set prediction (CSP), for converting each of the multiple univariate pose regression problems in VO to pose classification using one-hot encoding and generating conformalized prediction sets by discretizing the flying space. We use the same feature extractor as discussed in the previous section. However, instead of relying on quantile regression to determine the model's uncertainties, we leverage the model's normalized exponential scores (i.e., softmax scores) for conformal set prediction. A notable advantage of this approach is the ability to predict disjoint regions of uncertainty, unlike the previous one that only predicts contiguous regions of uncertainty.
The approach begins by discretizing the drone's flying space into \(K\) sets along each dimension \(x\), \(y\), and \(z\). To achieve this, a non-uniform space discretization is followed, wherein the histograms of the training set trajectories along each dimension are divided into \(K\) quantiles to determine class boundaries. The result is a one-hot encoding matrix based on \(K\), and a lightweight neural network classifier is built with a programmable output layer of size \(K\). For each dimension of the pose, a network is trained to output reliable softmax scores that can be conformalized. Fig. 3 illustrates the softmax classifiers for each dimension sharing their feature extraction head for parameter sharing and computational efficiency. The product of the predicted set of classes along each dimension produces the net uncertainty regions. Notably, this method can represent frequently visited locations with higher precision, represented by a class with a narrower gap between upper and lower boundaries. Conversely, less frequently visited locations are represented with lower precision.
Under the above classification treatment of VO, to perform conformal set prediction, a calibration set is first separated from the training set to compute conformal scores. It is essential to ensure the statistical guarantee of conformal prediction by ensuring that the average probability of correct classifications within the prediction sets \(C(X)\subset 1,...,K\) is almost exactly \(1-\alpha\), where \(\alpha\) denotes an arbitrary miscoverage rate (e.g., 10%). This marginal coverage property can be expressed mathematically as \(1-\alpha\leq\mathbb{P}Y\in C(X)\mid X=x\leq 1-\alpha+\frac{1}{n+1}\), where \(n\) is the size of the calibration set. Conformal scores are then obtained by subtracting the softmax of the correct class for each input from one. Finally, \(\hat{q}\) is computed as the \(ceil(\frac{(n+1)(1-\alpha)}{n})\) empirical quantile of these conformal scores, and the conformalized prediction set \(C(X)\mid X=x\) is formed by incorporating classes with softmax scores greater than \((1-\hat{q})\).
_Importantly_, since the predicted set along each dimension may include classes that are not proximal, the above method has the ability to produce disjoint uncertainty regions. In contrast, the previous method in Sec. III-A (and multivariate methods presented in the later discussion) can only output contiguous uncertainty regions. The class labels in the above conformal set prediction can also be determined by discretizing the entire 3D space; however, the necessary classes for matching precision to dimension-wise discretization would grow exponentially. We also found that the above conformalization procedure may sometimes be too strict, especially when the conditional softmax outputs are not representative. However, it can be improved by using the softmax outputs of all classes in gathering the conformal scores as in [5].
## IV Extracting Multivariate Bands of Predictive Uncertainties in Visual Odometry (VO)
This section presents two more approaches for extracting predictive uncertainty bands in VO: multivariate conformal
Fig. 3: **Uncertainty-aware VO by conformal set prediction (CSP):****(a)** The architecture of CSP for VO. The output of the feature extraction unit is fed to parallel 1D softmax classifiers. Subsequently, classifier outputs are conformalized for the predictive set generation along each dimension. Next, the sets are multiplied and projected to 3D space to generate uncertainty-aware rectangular predictive regions. **(b)** Our approach also leverages density-aware classification boundaries along each dimension. Based on the training set, frequently visited locations are packed into classes with a narrow gap between upper and lower boundaries for high precision determination when projecting to 3D space.
ized quantile regression with MC-dropout, and conformalized joint prediction. The first approach demonstrates the novel usage of MC-dropout as a data augmentation technique in improving the performance of a conditional variational autoencoder (CVAE). The second approach jointly trains predictive uncertainty bands and pose estimation using a new loss function, thus reducing computational resources for applications where uncertainty estimates are always required.
### _VO-Uncertainty by Multivariate CQR (MCQR)_
To generate more informative VO-uncertainty bands that consider the correlation among pose dimensions, we adopt the multivariate conformalized quantile regression algorithm in [23] and enhance it with MC-dropout (MCQR w/ MC-dropout). This method entails utilizing a CVAE to acquire a proper latent representation of the sample distribution \(Y\mid X\) and applying an extension of directional quantile regression (DQR) to create quantile regions. The architecture of the CVAE, shown in Fig. 4(a), is a variant of a variational autoencoder (VAE) that conditions the generative model on labels during both the encoding and decoding steps. This additional information improves the model's ability to produce targeted outputs. Instead of directly fitting quantile functions to the data with simple regression, a separate neural network is developed to learn the best conditions for sampling the latent representation \(Z\mid X\) of the complete multivariate response variable based on \(Y\mid X\). At the same time, a quantile region is generated in this dimension-reduced latent space, which has a more informative, flexible, and arbitrary shape when propagated through the decoder.
Compared to the procedure for extracting univariate bands in Sec. III-A, extracting multivariate bands of uncertainty here using a VAE predictor requires more training data due to the model's complexity and higher number of parameters. Therefore, to improve the accuracy of VAE-based uncertainty bands, we utilize MC-dropout to generate additional training data. Training VAEs with the additional data enables uncertainty distillation from the MC-dropout procedure into lightweight conformalized quantile bands. However, the underlying pose estimation model's performance heavily influences this approach's effectiveness. Poor predictions of mean estimates can hinder learning with additional data.
### _VO-Uncertainty by Conformalized Joint Prediction (CJP)_
This section discusses the joint training of multivariate VO-uncertainty bands and the mean pose, i.e., conformalized joint prediction (CJP). The approach reduces the computational costs of uncertainty-aware VO by sharing parameters and network layers for mean prediction and uncertainty bands, which is particularly useful for platforms where uncertainty estimation must always be ON.
Fig. 4(b) shows the proposed VAE architecture for simultaneously extracting pose predictions and upper/lower uncertainty bounds based on true conditional quantiles. In the network, we adopt a parametric rectified linear unit (PReLU) as an activation function to introduce additional learnable parameters, better handle negative values and avoid the dying ReLU problem (values become zero for any input). In Fig. 4, the network first extracts features from the input image using a pre-trained convolutional neural network (e.g., MobileNet [24]), followed by encoding to a 3D latent space with a spherical Gaussian probability distribution. Next, a sample is taken from the latent space and propagated through the decoder to generate the multivariate pose and uncertainty-bound estimations at the desired coverage rate.
The pinball loss function, a.k.a. quantile loss, used for conformalized quantile regression in [22, 23] is inadequate for such integrated training of the mean prediction and uncertainty bands. Thus, we propose a new training loss function combining the traditional mean-squared error and Kullback-Leibler (KL) divergence losses of a VAE, MSE\({}_{loss}\) and KL\({}_{loss}\), with two additional loss components INTSCORE\({}_{loss}\) and COMCAL\({}_{loss}\). This total loss function, \(\mathcal{L}_{Total}\), capitalizes on two unique findings reported in [25].
The first finding in the proposed loss function is the unconventional use of the interval score function (i.e., Winkler score [26, 27]) to train an uncertainty quantification model to consistently output centered prediction intervals within a specified quantile range. Notably, the interval score function is typically used to evaluate prediction intervals, not optimize them. This approach is advantageous for generating informative prediction intervals over pose trajectories. We choose two percentiles (e.g., \(\alpha_{l}\) = 5% and \(\alpha_{h}\) = 95%) as uncertainty bounds, compute the interval score loss shown below for each percentile, and then take their expected value:
\[\begin{split}\text{INTSCORE}_{loss}&=(Q_{h}-Q_{l})+ \frac{2}{\alpha}(Q_{l}-y)\mathbb{I}\{y<Q_{l}\}\\ &+\frac{2}{\alpha}(y-Q_{h})\mathbb{I}\{y>Q_{h}\}\end{split} \tag{4}\]
Here, \(y\) represents the pose label, \(Q_{l}\) and \(Q_{h}\) are each dimension's lower and upper quantile estimates, and \(\mathbb{I}\) is the indicator function. Including the interval score in \(\mathcal{L}_{Total}\) is instrumental in linking pose reconstruction optimization via
Fig. 4: **VAE architecture for joint training of point (mean) prediction and uncertainty bounds:****(a)** Predictions from a pose estimation network (such as PoseNet) are fed to a conditional VAE for pose reconstruction and uncertainty estimation by first encoding to the latent space and then generatively passing through a decoder. **(b)** Comparatively, the network here jointly learns point prediction and uncertainty estimates directly from the features extracted from camera images. The novel loss function in Eq. (6) is used for the integrated training. The simultaneous extraction of mean and uncertainty bounds provides better computational efficiency for applications where uncertainty awareness must always be ON.
\(\text{MSE}_{loss}\) to quantile optimization over each dimension while honoring the dimensions' covariance and correlation.
The second finding is combined calibration loss, denoted as \(\text{COMCAL}_{loss}\). \(\text{COMCAL}_{loss}\) establishes an explicit and controllable balance between prediction interval calibration and sharpness rather than relying on the loosely implicit balance in basic pinball loss. \(\text{COMCAL}_{loss}\) comprises two objectives: one that minimizes the difference between \(p_{avg}^{cov}\) and the chosen marginal coverage rate \(p\) (\(\text{CAL}_{obj}\)), and another that minimizes the distance between \(Q_{l}\) and \(Q_{h}\) (\(\text{SHARP}_{obj}\)). \(p_{avg}^{cov}\) is the estimated average probability that the pose label values lie within \([Q_{l}\), \(Q_{h}]\). The two objectives may conflict with each other since one aims to minimize over-coverage and under-coverage, while the other aims to minimize the length of the prediction interval. This necessitates including a hyper-parameter \(\lambda\) to strike the appropriate balance. Thus, \(\text{COMCAL}_{loss}\) is defined as follows:
\[\text{CAL}_{obj}=\] \[\mathbb{I}\{p_{avg}^{cov}<p\}\times\frac{1}{N}\sum_{i=1}^{N}[(y_{ i}-Q_{l,h}(x_{i}))\mathbb{I}\{y_{i}>Q_{l,h}\}]+\] \[\mathbb{I}\{p_{avg}^{cov}>p\}\times\frac{1}{N}\sum_{i=1}^{N}[(Q_{ l,h}(x_{i})-y_{i})\mathbb{I}\{y_{i}<Q_{l,h}\}] \tag{5a}\] \[\text{SHARP}_{obj}=\frac{1}{N}\sum_{i=1}^{N}\begin{cases}Q_{l}(x_{ i})-Q_{h}(x_{i}),&p\leq 0.5\\ Q_{h}(x_{i})-Q_{l}(x_{i}),&p>0.5\end{cases}\] (5b) \[\text{COMCAL}_{loss}=(1-\lambda)\times\text{CAL}_{obj}+\lambda \times\text{SHARP}_{obj} \tag{5c}\]
where \(x_{i}\) and \(y_{i}\) are individual images and labels of a training batch and \(Q_{l,h}\) is meant to denote that \(\text{CAL}_{obj}\) is computed separately for both \(Q_{l}\) and \(Q_{h}\) and then summed.
The complete loss function combining MSE loss, KL divergence, interval score, and calibration loss in our approach is thus given as:
\[\mathcal{L}_{Total} =\text{MSE}_{loss}(y,\hat{y})+\text{KL}_{loss}(\mu,log(\sigma^{2 }))\] \[+\text{INTSCORE}_{loss}(y,Q_{l},Q_{h},\{\alpha_{l},\alpha_{h}\}) \tag{6}\] \[+\text{COMCAL}_{loss}(y,p_{avg}^{cov},Q_{l},Q_{h})\]
where \(\hat{y}\) is the reconstructed pose. \(\mu\) and \(log(\sigma^{2})\) are the hyper-parameters of the VAE's latent space. We estimate \(p_{avg}^{cov}\) for each pose dimension with every training batch to enable flexible calibration during training.
Traditional CQR involves splitting the training set into two subsets and conformalizing the quantiles afterward (i.e., split conformal prediction). This is done to avoid full conformal prediction, which requires many more calibration steps than just one, sacrificing speed for statistical efficiency. However, cross-conformal prediction provides a viable alternative for conformalization by striking a reasonable balance between computational and statistical efficiency, utilizing multiple calibration steps across the entire training set [28]. Our approach is similar, representing a unique form of cross-conformal prediction. We assume that \(p_{avg}^{cov}\), which is averaged over randomized training batches, is sufficiently close to an average over the entire training set.
As will be demonstrated later in Sec. V, this approach yields superior outcomes compared to the other methods with a small increase in model complexity and size. The results underscore the method's accuracy, adaptability, and flexibility in producing conformalized prediction intervals for multivariate pose data. The method maintains its strength even when swapping the underlying model from a larger, more accurate one, such as ResNet34, to one better suited for lightweight edge applications like MobileNetV2. Furthermore, its advantages are consistent across multiple datasets, implying its robustness to numerous applications.
## V Results and Discussions
This section presents a chronological review of the results from the four frameworks proposed for conformalized visual odometry (CVO) and compared in Table I. The aim of these methods is to develop reliable prediction intervals that can capture true uncertainty in data-driven pose estimation with guaranteed confidence. While each approach is based on PoseNet [12], they can be applied to other pose estimation models. In highlighting the differences between the methods, we use the data from scene 2 of RGB-D Scenes Dataset v.2 [29] as visuals of its pose trajectory are easy to comprehend. All results illustrate uncertainty bands that exhibit at least 90% guaranteed marginal coverage through CI.
In Sec. III-A, we introduced univariate conformalized quantile regression (CQR) to generate uncertainty bands. However, as shown in Fig. 5, the resulting prediction interval is overly cautious and box-shaped despite its ability to adjust to the data's heteroscedasticity. This is because the method employs quantile regression on the distinct variables of the multivariate pose response and then merges them without considering their covariance and correlation. As a result, this method lacks sufficient information and is not particularly useful in most cases, despite its low computational complexity. This drawback is consistent across all datasets.
In Sec. III-B, we presented conformalized set prediction (CSP) to generate uncertainty bands. As illustrated in Fig. 5, this approach adapts well to the data, even with non-continuous uncertainty boundaries. The method's adaptiveness is deeply dependent on the model's softmax scores and the proper discretization of pose values, which can be adjusted by changing the number of classes \(K\). Like the first method, this approach has difficulty handling multivariate data, which limits its ability to estimate true uncertainty accurately. Although the average interval length is significantly shorter, the conservativeness of the rectangular uncertainty region has only scaled linearly, limiting its usefulness.
In Sec. IV-A, we introduced the third method, which utilizes multivariate conformalized quantile regression with MC-dropout (MCQR w/ MC-dropout) to generate uncertainty bands. This approach showcases the novel use of MC-dropout as a data augmentation technique, in addition to regularization and uncertainty estimation, to enhance learning. This method is the first step towards addressing the limitations of the previous methods by considering the relationships between pose dimensions. As shown in Fig. 5,
this method has the tightest uncertainty estimates, but they represent the opposite extreme of the conservatism observed in the univariate methods. However, this is inconsistent across all datasets; the method performs much better in some other datasets, indicating that it is highly data-dependent. This inconsistency in uncertainty accuracy is particularly noticeable in smaller datasets, as is the case here. Furthermore, this method has the highest computational complexity, requiring training and optimizing multiple networks. Although it can potentially produce flexible and highly accurate uncertainty intervals, its inconsistency across datasets and computational inefficiency limit its wide applicability.
The final method, described in Sec. IV-B demonstrates a novel combination of multivariate conformalized quantile regression and the mean prediction to produce uncertainty bounds using a new loss function that remarkably achieves multiple objectives simultaneously. We deem this approach conformalized joint prediction (CJP). Fig. 5 demonstrates its significant improvements over the other methods. Firstly, the prediction interval optimally expands and contracts in response to the data, being neither too dispersed nor sharp in each plot. The increase in interval length (and hence uncertainty) at the middle and end points in the trajectory corresponds to worse lighting conditions in the scene's image sequence. Secondly, with the tunable calibration parameter \(\lambda\)\(\in\)\(\{0,1\}\), we can optimize the network for sharpness and/or coverage. Here, \(\lambda\) was set to 0.5 to showcase the balanced standard. Additionally, Fig. 6 shows results of this method when switching the feature extraction model from ResNet34 to MobileNetV2, effectively reducing its learning capacity by over 80%. It handles the same image sequence with comparable accuracy even when introducing different forms of noise, such as gaussian, salt and pepper, and speckle. Notably, the uncertainty increases with this transition and even more with added noise. The speckle noise exasperates light variation the most and thus has the largest average interval length. An increase in the average uncertainty intervals indicates that the framework can capture the sensor's uncertainties along with the model's predictive uncertainties. The optimality of this method in extracting true pose estimation uncertainty is consistent across several datasets and conditions, indicating that it can fit many applications.
## VI Conclusion
This paper introduced and compared four novel frameworks for capturing aleatoric uncertainty in data-driven VO through conformal inference while considering computational resource limitations. The presented framework trade-off essential metrics of uncertainty-aware inference, such as computational workload, adaptiveness of uncertainty interval,
Fig. 5: **Comparison of uncertainty-aware prediction intervals over 1D and 2D pose estimations from the four proposed conformalized visual odometry (CVO) methods in Secs. III and IV: Differences in average interval length, contiguity, precision, accuracy, coverage, and adaptiveness are highlighted across the methods using Scene 2 of RGB-D Scenes Dataset v.2.**
training time, _etc._, aiming to provide a spectrum of solutions against widely varying robustness and resource constraints on edge robotics. We discussed the architectures of generative deep neural networks for estimating multivariate uncertainty bands along with point (mean) prediction, data augmentation techniques to improve the accuracy of uncertainty estimation, and novel training loss functions that combined interval scoring and calibration loss with traditional training metrics such as KL-divergence to improve uncertainty-aware learning.
|
2302.03230 | The gap equations of background field invariant Refined Gribov-Zwanziger
action proposals and the deconfinement transition | In earlier work, we set up an effective potential approach at zero
temperature for the Gribov-Zwanziger model that takes into account not only the
restriction to the first Gribov region as a way to deal with the gauge fixing
ambiguity, but also the effect of dynamical dimension-two vacuum condensates.
Here, we investigate the model at finite temperature in presence of a
background gauge field that allows access to the Polyakov loop expectation
value and the Yang-Mills (de)confinement phase structure. This necessitates
paying attention to BRST and background gauge invariance of the whole
construct. We employ two such methods as proposed elsewhere in literature: one
based on using an appropriate dressed, BRST invariant, gluon field by the
authors and one based on a Wilson-loop dressed Gribov-Zwanziger auxiliary field
sector by Kroff and Reinosa. The latter approach outperforms the former, in
estimating the critical temperature for N=2, 3 as well as correctly predicting
the order of the transition for both cases. | David Dudal, David Vercauteren | 2023-02-07T03:24:53Z | http://arxiv.org/abs/2302.03230v1 | The gap equations of background field invariant Refined Gribov-Zwanziger action proposals and the deconfinement transition
###### Abstract
In earlier work, we set up an effective potential approach at zero temperature for the Gribov-Zwanziger model that takes into account not only the restriction to the first Gribov region as a way to deal with the gauge fixing ambiguity, but also the effect of dynamical dimension-two vacuum condensates. Here, we investigate the model at finite temperature in presence of a background gauge field that allows access to the Polyakov loop expectation value and the Yang-Mills (de)confinement phase structure. This necessitates paying attention to BRST and background gauge invariance of the whole construct. We employ two such methods as proposed elsewhere in literature: one based on using an appropriate dressed, BRST invariant, gluon field by the authors and one based on a Wilson-loop dressed Gribov-Zwanziger auxiliary field sector by Kroff and Reinosa. The latter approach outperforms the former, in estimating the critical temperature for \(N=2,3\) as well as correctly predicting the order of the transition for both cases.
## I Introduction
It is well accepted from non-perturbative Monte Carlo lattice simulations that SU(\(N\)) Yang-Mills gauge theories in the absence of fundamental matter fields undergo a deconfining phase transition at a certain critical temperature [1; 2]. This transition corresponds to the breaking of a global \(\mathbb{Z}_{N}\) center symmetry when the Euclidean temporal direction is compactified on a circle, with circumference proportional to the inverse temperature [3; 4]. The vacuum expectation value of the Polyakov loop [5] serves as an order parameter for this symmetry, and has as such inspired an ongoing research activity into its dynamics, see for example [6; 7; 8; 9; 10].
Even in the presence of dynamical quark degrees of freedom (which explicitly break the center symmetry) the Polyakov loop remains the best observable to capture the cross-over transition, see [11; 12] for ruling lattice QCD estimates. Since the transition temperature is of the order of the scale at which these gauge theories (which include QCD) become strongly coupled, it is a highly challenging endeavour to get reliable estimates for the Polyakov loop correlators, including its vacuum expectation value, analytically. This is further complicated by the non-local nature of the loop. These features highlight the sheer importance of lattice gauge theories to allow for a fully non-perturbative computational framework. Nonetheless, analytical takes are still desirable to offer a complementary view at the same physics, in particular as lattice simulations do also face difficulties when the physically relevant small quark mass limit must be taken, next to the issue of potentially catastrophic sign oscillations at finite density [13; 14].
Over the last two decades, a tremendous effort has been put into the development and application of Functional Methods to QCD, including the respective hierarchies of Dyson-Schwinger and Functional Renormalization Group equations [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33] as well as variational approaches based on the Hamiltonian formulation or on \(N\)-particle-irreducible effective actions [34; 35; 36; 37; 38; 39; 40] or alternatives [41]. These methods are quite successful in describing the vacuum properties of the theory as well as various aspects at finite temperature and/or density. They all rely, in one way or another, on the decoupling behavior of gluons in the Landau gauge, as dictated by results from lattice simulations [42; 43; 44; 45; 46; 47; 48; 49]. More recently, a more phenomenological approach has been put forward based on the Curci-Ferrari model [50; 51; 52; 53; 54].
One particular way to deal with non-perturbative physics at the level of elementary degrees of freedom is by dealing with the Gribov issue [55; 56]: the fact that there is no unique way of selecting one representative configuration of a given gauge orbit in covariant gauges [57]. As there is also no rigorous way to deal properly with the existence of gauge copy modes in the path integral quantization procedure, in this paper we will use a well-tested formalism available to deal with the issue, which is known as the Gribov-Zwanziger (GZ) formalism: a restriction of the path integral to a smaller subdomain of gauge fields [55; 58; 59].
This approach was first proposed for the Landau and the Coulomb gauges. It long suffered from a serious drawback: its concrete implementation seemed to be inconsistent with BRST (Becchi-Rouet-Stora-Tyutin) invariance [60; 61; 62] of
the gauge-fixed theory, which clouded its interpretation as a gauge (fixed) theory. Only more recently was it realized by some of us and colleagues how to overcome this complication to get a BRST-invariant restriction of the gauge path integral. As a bonus, the method also allowed the generalization of the Gribov-Zwanziger approach to the linear covariant gauges, amongst others [63; 64; 65; 66].
Another issue with the original Gribov-Zwanziger approach was that some of its major leading-order predictions did not match the corresponding lattice output. In the case of the Landau gauge, the Gribov-Zwanziger formalism by itself predicts, at tree level, a gluon propagator vanishing at momentum \(p=0\), next to, more importantly, a ghost propagator with a stronger than \(1/p^{2}\) singularity for \(p\to 0\). Although the latter fitted well in the Kugo-Ojima confinement criterion [67], it was at odds with large volume lattice simulations [68; 69]. By now, several analytical takes exist on this, all compatible, qualitatively and/or quantitatively, with lattice data, not only for elementary propagators but also for vertices [63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108].
In the Gribov-Zwanziger formalism in particular, the situation can be remedied by incorporating the effects of certain mass dimension-two condensates, the importance of which was already stressed before in papers like [109; 110; 111; 112; 113]. For the Gribov-Zwanziger formalism, this idea was first put on the table in [70; 71] with the condensate \(\langle\bar{\varphi}\varphi-\bar{\omega}\omega\rangle\) (the fields here are Gribov localizing ghosts, see section II). Later, a self-consistent computational scheme was constructed in [76] based on the effective action formalism for local composite operators developed in [112; 114], the renormalization of which was proven in [115]. This construction is more natural with condensates like \(\langle\bar{\varphi}\varphi\rangle\), \(\langle\bar{\varphi}\bar{\varphi}\rangle\), and \(\langle\varphi\varphi\rangle\). As the most promising candidate for a full description of the vacuum in this so-called "refined Gribov-Zwanziger" (RGZ) approach, the condensate \(\langle\bar{\varphi}\varphi\rangle\) was considered in [116] at zero temperature; this paper was meant as a jumping board for the present one. In the present work we consider both this last condensate and \(\langle\bar{\varphi}\varphi-\bar{\omega}\omega\rangle\).
In [117], the authors found that introducing a gluon background field into the Gribov-Zwanziger formalism (which is necessary to compute the vacuum expectation value of the Polyakov loop) is not as straightforward as one may naively be led to believe. A correct formalism was proposed in [117], with a competing formalism later proposed by Kroff and Reinosa in [118]. In the present work, we again consider both these formalisms.
The structure of the paper is as follows. In Section II, we briefly sketch the original Gribov-Zwanziger approach at zero temperature in the Landau gauge, followed by a short reminder how to make this BRST invariant in Section III. Section IV deals with adding an appropriate background gauge to couple the Polyakov loop to the model and we summarize several approaches to do this in a BRST and background invariant fashion. In Section V, the addition of the dimension-two condensates is done, followed by preparatory computations at zero temperature in Section VI, needed to come to our finite temperature predictions in Section VII. We end with conclusions in Section VIII. Several technical results are relegated to a series of Appendices, including a constructive proof of a statement made in [118].
## II A Brief Overview of the Gribov-Zwanziger Formalism
Let us start by giving a short overview of the Gribov-Zwanziger framework [55; 58; 59; 119]. As already mentioned in the Introduction, the basic Gribov-Zwanziger action arises from the restriction of the domain of integration in the Euclidean functional integral to the Gribov region \(\Omega\), which is defined as the set of all gauge field configurations fulfilling the Landau gauge, \(\partial_{\mu}A^{a}_{\mu}=0\), and for which the Faddeev-Popov operator \({\cal M}^{ab}=-\partial_{\mu}(\partial_{\mu}\delta^{ab}-gf^{abc}A^{c}_{\mu})\) is strictly positive, namely
\[\Omega\;=\;\{A^{a}_{\mu}\;;\;\;\partial_{\mu}A^{a}_{\mu}=0\;;\;\;{\cal M}^{ ab}=-\partial_{\mu}(\partial_{\mu}\delta^{ab}-gf^{abc}A^{c}_{\mu})\;>0\;\}\;.\]
The boundary \(\partial\Omega\) of the region \(\Omega\) is the (first) Gribov horizon.
One starts with the Faddeev-Popov action in the Landau gauge
\[S_{\rm FP}=S_{\rm YM}+S_{\rm Lg}\;,\] (1a) where \[S_{\rm YM}\] and \[S_{\rm Lg}\] denote, respectively, the Yang-Mills and the Landau gauge-fixing terms, namely \[S_{\rm YM}=\frac{1}{4}\int d^{d}x\;F^{a}_{\mu\nu}F^{a}_{\mu\nu}\;, \tag{1b}\] \[S_{\rm Lg}=\int d^{d}x\left(b^{a}\partial_{\mu}A^{a}_{\mu}+\bar{c}^{a}\partial_{ \mu}D^{ab}_{\mu}c^{b}\right)\;, \tag{1c}\]
where \((\bar{c}^{a},c^{a})\) are the Faddeev-Popov ghosts, \(b^{a}\) is the Lagrange multiplier implementing the Landau gauge, \(D^{ab}_{\mu}=(\delta^{ab}\partial_{\mu}-gf^{abc}A^{c}_{\mu})\) is the covariant derivative in the adjoint representation of \(SU(N)\), and \(F^{a}_{\mu\nu}\) denotes the field strength:
\[F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}+gf^{abc}A^{ b}_{\mu}A^{c}_{\nu}\;. \tag{1d}\]
Following [55; 58; 59; 119], the restriction of the domain of integration in the path integral is achieved by adding an additional term \(H(A)\), called the horizon term, to the Faddeev-Popov action \(S_{\rm FP}\). This \(H(A)\) is given by the following non-local expression
\[H(A,\gamma)=g^{2}\int d^{d}xd^{d}y\;f^{abc}A^{b}_{\mu}(x)\left[{\cal M}^{-1}( \gamma)\right]^{ad}(x,y)f^{dec}A^{e}_{\mu}(y)\;, \tag{2}\]
where \({\cal M}^{-1}\) stands for the inverse of the Faddeev-Popov operator. The partition function can then be written as [55; 58; 59; 119]:
\[Z_{\rm GZ}=\int_{\Omega}[{\cal D}A{\cal D}c{\cal D}\overline{c}{\cal D}b]e^{-S _{\rm FP}}=\int[{\cal D}A{\cal D}c{\cal D}\overline{c}{\cal D}b]e^{-(S_{\rm FP }+\gamma^{4}H(A,\gamma)-dV\gamma^{4}(N^{2}-1))}\;, \tag{3}\]
where \(V\) is the Euclidean space-time volume. The parameter \(\gamma\) has the dimension of a mass and is known as the Gribov parameter. It is not a free parameter of the theory. It is a dynamical quantity, being determined in a self-consistent way through a gap equation called the horizon condition [55; 58; 59; 119], given by
\[\langle H(A,\gamma)\rangle_{\rm GZ}=dV(N^{2}-1)\;, \tag{4}\]
where the notation \(\langle\cdots\rangle_{\rm GZ}\) means that the vacuum expectation value is to be evaluated with the measure defined in equation (3). An equivalent all-order proof of equation (4) can be given within the original Gribov no-pole condition framework [55], by looking at the exact ghost propagator in an external gauge field [120].
Although the horizon term \(H(A,\gamma)\) in equation (2) is non-local, it can be cast in local form by means of the introduction of a set of auxiliary fields \((\bar{\omega}^{ab}_{\mu},\omega^{ab}_{\mu},\bar{\varphi}^{ab}_{\mu},\varphi^{ ab}_{\mu})\), where \((\bar{\varphi}^{ab}_{\mu},\varphi^{ab}_{\mu})\) are a pair of bosonic fields and \((\bar{\omega}^{ab}_{\mu},\omega^{ab}_{\mu})\) are anti-commuting. It is not difficult to show that the partition function \(Z_{\rm GZ}\) in eq.(3) can be rewritten as [58; 59; 119]
\[Z_{\rm GZ}=\int[{\cal D}\Phi]e^{-S_{\rm GZ}[\Phi]}\;, \tag{5}\]
where \(\Phi\) accounts for the quantizing fields, \(A\), \(\bar{c}\), \(c\), \(b\), \(\bar{\omega}\), \(\omega\), \(\bar{\varphi}\), and \(\varphi\), while \(S_{\rm GZ}[\Phi]\) is the Yang-Mills action plus gauge fixing and Gribov-Zwanziger terms, in its localized version,
\[S_{\rm GZ}=S_{\rm YM}+S_{\rm gf}+S_{0}+S_{\gamma}\;,\] (6a) with \[S_{0}=\int d^{d}x(\bar{\varphi}^{ac}_{\mu}(-\partial_{\nu}D^{ab}_{\nu}) \varphi^{bc}_{\mu}-\bar{\omega}^{ac}_{\mu}(-\partial_{\nu}D^{ab}_{\nu})\omega^ {bc}_{\mu})\;, \tag{6b}\] \[S_{\gamma}=\gamma^{2}g\int d^{d}x\;f^{abc}A^{a}_{\mu}(\varphi^{bc}_{\mu}+ \bar{\varphi}^{bc}_{\mu})-d\gamma^{4}V(N^{2}-1)\;. \tag{6c}\]
It can be seen from (3) that the horizon condition (4) takes the simpler form
\[\frac{\partial{\cal E}_{v}}{\partial\gamma^{2}}=0\;, \tag{7}\]
which is called the gap equation. The quantity \({\cal E}_{v}(\gamma)\) is the vacuum energy defined by
\[e^{-V{\cal E}_{v}}=Z_{\rm GZ}\;\;. \tag{8}\]
The local action \(S_{\rm GZ}\) in equation (6a) is known as the Gribov-Zwanziger action. It has been shown to be renormalizable to all orders [58; 59; 70; 71; 121; 122; 123]. There are several issues with this action, though:
* Its BRST invariance is softly broken. This has found a solution in [65] through the \(A^{h}\) formalism; this is reviewed in section III.
* The propagators of both gluons and ghosts are not in agreement with the lattice. This is remedied in the refined Gribov-Zwanziger (RGZ) formalism, which adds local composite operators (LCOs). This is reviewed in section V.
## III BRST-invariant gluon field \(A^{h}\)
For a BRST-invariant formalism, it turns out to be most straightforward to introduce BRST-invariant projections of the gluon fields. This section gives a quick overview of the construction, which will be generalized in the following sections.
We start from the Yang-Mills action in a linear covariant gauge and in \(d\) Euclidean space dimensions:
\[S_{\text{LC}}=S_{\text{YM}}+S_{\alpha}\] (9a) where \[S_{\alpha}\] is now the gauge-fixing term in the linear covariant gauges: \[S_{\alpha}=\int d^{d}x(\tfrac{\alpha}{2}b^{a}b^{a}+ib^{a}\partial_{\mu}A^{a}_{ \mu}+\bar{c}^{a}\partial_{\mu}D^{ab}_{\mu}c^{b})\;,\] (9b) with \[\alpha\] the gauge parameter. As we are eventually interested in imposing the Gribov restriction and introducing the dimension two gluon condensate \[\langle A^{2}_{\mu}\rangle\] while preserving BRST invariance, we need a BRST invariant version of the \[A^{a}_{\mu}\] field. In order to construct this, we insert the following unity into the path integral [123; 124]: \[1=\mathcal{N}\int[\mathcal{D}\xi\mathcal{D}\tau\mathcal{D}\bar{ \eta}\mathcal{D}\eta]e^{-S_{h}}\;,\] (10a) \[S_{h}=\int d^{d}x\left(i\tau^{a}\partial_{\mu}(A^{h})^{a}_{\mu}+ \bar{\eta}^{a}\partial_{\mu}(D^{h})^{ab}_{\mu}\eta^{b}\right)\;,\] (10b) where \[\mathcal{N}\] is a normalization and \[(D^{h})^{ab}_{\mu}\] is the covariant derivative containing only the composite field \[(A^{h})^{a}_{\mu}\]. This local but non-polynomial composite field object is defined as: \[(A^{h})_{\mu}=h^{\dagger}A_{\mu}h+\tfrac{i}{g}h^{\dagger}\partial_ {\mu}h\;,\] (10c) \[h=e^{ig\xi}=e^{ig\xi^{a}T^{a}}\;,\] (10d) where the \[T^{a}\] are the generators of the gauge group SU( \[N\] ). The \[\xi^{a}\] are similar to Stueckelberg fields, while \[\eta^{a}\] and \[\bar{\eta}^{a}\] are additional (Grassmannian) ghost and anti-ghost fields. They serve to account for the Jacobian arising from the functional integration over \[\tau^{a}\] to give a Dirac delta functional of the type \[\delta(\partial_{\mu}(A^{h})^{a}_{\mu})\]. That Jacobian is similar to the one of the Faddeev-Popov operator, and is supposed to be positive which amounts to removing a large class of infinitesimal Gribov copies, see [63]. In mere perturbation theory, this is not the case, but the restriction to the Gribov region to be discussed will be sufficient to ensure it dynamically [55; 58].
Expanding (10c), one finds an infinite series of local terms:
\[(A^{h})^{a}_{\mu}=A^{a}_{\mu}-\partial_{\mu}\xi^{a}-gf^{abc}A^{b}_{\mu}\xi^{c }-\tfrac{a}{2}f^{abc}\xi^{b}\partial_{\mu}\xi^{c}+\cdots\;. \tag{11}\]
The unity (10a) can be used to stay within a local setup for an on-shell non-local quantity \((A^{h})^{a}_{\mu}\) that can be added to the action. Notice that the multiplier \(\tau^{a}\) implements \(\partial_{\mu}(A^{h})^{a}_{\mu}=0\) which, when solved iteratively for \(\xi^{a}\)
\[\xi_{*}=\frac{1}{\partial^{2}}\partial_{\mu}A_{\mu}+ig\frac{1}{\partial^{2}} \left[\partial_{\mu}A_{\mu},\frac{1}{\partial^{2}}\partial_{\nu}A_{\nu} \right]+\cdots\;,\] (12a) gives the (transversal) on-shell expression \[(A^{h})_{\mu}=\left(\delta_{\mu\nu}-\frac{\partial_{\mu}\partial_{\nu}}{ \partial^{2}}\right)\left(A_{\nu}+ig\left[A_{\nu},\frac{1}{\partial^{2}} \partial_{\lambda}A_{\lambda}\right]+\cdots\right)\;, \tag{12b}\]
clearly showing the non-localities in terms of the inverse Laplacian. One can see that \(A^{h}\to A\) when \(A^{a}_{\mu}\) is in the Landau gauge \(\partial_{\mu}A^{a}_{\mu}=0\). We refer to _e.g._[10; 123; 124; 125; 126] for more details. It can be shown that \(A^{h}\) is gauge invariant order per order, which is sufficient to establish BRST invariance. We will have nothing to say about large gauge transformations.
Mark that \((A^{h})^{a}_{\mu}\) is formally the value of \(A^{a}_{\mu}\) that (absolutely) minimizes the functional
\[\int d^{d}x\;A^{a}_{\mu}A^{a}_{\mu} \tag{13}\]
under (infinitesimal) gauge transformations \(\delta A^{a}_{\mu}=D^{ab}_{\mu}\omega^{b}\), see e.g. [63; 125; 126]. As such,
\[\int d^{d}x(A^{h})^{a}_{\mu}(A^{h})^{a}_{\mu}=\min_{\text{gauge orbit}}\int d^{d}x \ A^{a}_{\mu}A^{a}_{\mu}\;, \tag{14}\]
In practice, we are only (locally) minimizing the functional via a power series expansion (11) coming from infinitesimal gauge variations around the original gauge field \(A^{a}_{\mu}\), whereas the extremum being a minimum is accounted for if the Faddeev-Popov operator (second order variation that is) is positive. This is discussed in [63].
This field \(A^{h}\) can be used to construct a BRST-invariant modification of the Gribov-Zwanziger formalism. To do so, one replaces \(S_{0}\) in (6b) with
\[S_{0h}=\int d^{d}x(\bar{\varphi}^{ac}_{\mu}(-\partial_{\nu}(D^{h})^{ab}_{\nu} )\varphi^{bc}_{\mu}-\bar{\omega}^{ac}_{\mu}(-\partial_{\nu}(D^{h})^{ab}_{\nu}) \omega^{bc}_{\mu})\;,\] (15a) where \[D^{h}\] is the covariant derivative with \[A^{h}\] instead of \[A\], and one replaces \[S_{\gamma}\] in ( 6c ) with \[S_{\gamma h}=\gamma^{2}g\int d^{d}x\ f^{abc}(A^{h})^{a}_{\mu}(\varphi^{bc}_{ \mu}+\bar{\varphi}^{bc}_{\mu})-d\gamma^{4}V(N^{2}-1)\;. \tag{15b}\]
The action \(S_{\text{GZ}h}=S_{\text{YM}}+S_{\alpha}+S_{h}+S_{0h}+S_{\gamma h}\) enjoys the following exact BRST invariance, \(sS_{\text{GZ}h}=0\) and \(s^{2}=0\)[63]:
\[sA^{a}_{\mu}= -D^{ab}_{\mu}c^{b}\;, sc^{a}= \frac{g}{2}f^{abc}c^{b}c^{c}\;, \tag{16}\] \[s\bar{c}^{a}= ib^{a}\;, sb^{a}= 0\;,\] \[s\varphi^{ab}_{\mu}= 0\;, s\omega^{ab}_{\mu}= 0\;,\] \[s\bar{\omega}^{ab}_{\mu}= 0\;, s\bar{\varphi}^{ab}_{\mu}= 0\;,\] \[s\varepsilon^{a}= 0\;, s(A^{h})^{a}_{\mu}= 0\;,\] \[sh^{ij}= -igc^{a}(T^{a})^{ik}h^{kj}.\]
## IV Including the Polyakov loop
Our aim is to investigate the confinement/deconfinement phase transition of Yang-Mills theory. The standard way to achieve this goal is by probing the Polyakov loop order parameter,
\[\mathcal{P}=\frac{1}{N}\operatorname{tr}\left\langle Pe^{ig\int_{0}^{\beta}dt \ A_{0}(t,x)}\right\rangle\;, \tag{17}\]
where \(P\) denotes path ordering, needed in the non-Abelian case to ensure the gauge invariance of \(\mathcal{P}\). In analytical studies of the phase transition involving the Polyakov loop, one usually imposes the so-called "Polyakov gauge" on the gauge field, in which case the time-component \(A_{0}\) becomes diagonal and independent of (imaginary) time: \(\langle A_{\mu}(x)\rangle=\langle A_{0}\rangle\delta_{\mu 0}\), with \(\langle A_{0}\rangle\) belonging to the Cartan subalgebra of the gauge group. In the SU(2) case for instance, the Cartan subalgebra is one-dimensional and can be chosen to be generated by \(T^{3}\equiv\sigma^{3}/2\), so that \(\langle A^{a}_{0}\rangle=\delta^{a3}\langle A^{3}_{0}\rangle\equiv\delta^{a3} \langle A_{0}\rangle\). More details on Polyakov gauge can be found in [127; 6; 128]. Besides the trivial simplification of the Polyakov loop, when imposing the Polyakov gauge it turns out that the quantity \(\langle A_{0}\rangle\) becomes a good alternative choice for the order parameter instead of \(\mathcal{P}\), see [127] for an argument using Jensen's inequality for convex functions, see also [129; 130; 131]. For other arguments based on the use of Weyl chambers and within other gauges (see below), see [132; 52; 133].
As explained in [127; 134; 129], in the SU(2) case at leading order we then simply find, using the properties of the Pauli matrices,
\[\mathcal{P}=\cos\frac{\langle r\rangle}{2}\;, \tag{18}\]
where we defined
\[r=g\beta A_{0}\;, \tag{19}\]
with \(\beta\) the inverse temperature. This way, \(r=\pi\) corresponds to the "unbroken symmetry phase" (confined or disordered phase), equivalent to \(\langle\mathcal{P}\rangle=0\); while \(r\neq\pi\) (modulo \(2\pi\)) corresponds to the "broken symmetry phase"
(deconfined or ordered phase), equivalent to \(\langle{\cal P}\rangle\neq 0\). Since \({\cal P}\propto e^{-F/T}\) with \(T\) the temperature and \(F\) the free energy of a heavy quark, it is clear that in the unbroken phase (where the center symmetry is manifest: \(\langle{\cal P}\rangle=0\)), an infinite amount of energy would be required to free a quark. The broken/restored symmetry referred to is the \(\mathbb{Z}_{N}\) center symmetry of a pure gauge theory (no dynamical matter in the fundamental representation). With a slight abuse of language, we will refer to the quantity \(r\) as the Polyakov loop hereafter.
It is however a highly non-trivial job to actually compute \(r\). An interesting way around was worked out in [127; 134; 135], where it was shown that similar considerations apply in Landau-DeWitt gauges, a generalization of the Landau gauge in the presence of a background. The background needs to be seen as a field of gauge-fixing parameters and, as such, can be chosen at will _a priori_. However, specific choices turn out to be computationally more tractable while allowing one to unveil more easily the center-symmetry breaking mechanism. For the particular choice of self-consistent backgrounds which are designed to coincide with the thermal gluon average at each temperature, it could be shown that the background becomes an order parameter for center-symmetry as it derives from a center-symmetric background effective potential. An important assumption for this procedure to work is the underlying BRST invariance of the action, see [10; 135]).
In the presence of a gluon background field, the total gluon field is split into the background and the quantum fluctuations. We use the notation
\[a_{\mu}^{a}=\bar{A}_{\mu}^{a}+A_{\mu}^{a}\;, \tag{20}\]
where \(a_{\mu}^{a}\) is the full gluon field, \(\bar{A}_{\mu}^{a}\) is the background (which will correspond to the Polyakov loop), and \(A_{\mu}^{a}\) are the quantum fluctuations around the background. Furthermore will write \(\bar{D}_{\mu}^{ab}=\delta^{ab}\partial_{\mu}-gf^{abc}\bar{A}_{\mu}^{c}\) for the covariant derivative using only the background field \(\bar{A}\). The gauge is fixed by replacing \(S_{\rm Lg}\) in (1c) by
\[S_{\rm LdW}=\int d^{d}x(b^{a}\bar{D}_{\mu}^{ab}(\bar{A}_{\mu}^{b}+A_{\mu}^{b}) +\bar{c}^{a}\bar{D}_{\mu}^{ab}(\bar{D}_{\mu}^{bc}-gf^{bcd}A_{\mu}^{d})c^{c})\;. \tag{21}\]
Two ways to add a background field to the Gribov-Zwanziger formalism have appeared in the literature: one that introduces a gauge-invariant background field \((\bar{A}^{h})_{\mu}^{a}\)[117; 136], and one that ensures background gauge invariance by introducing non-local Wilson lines in the action [118]. We give a short review of both approaches in the subsections below.
### \(\bar{A}^{h}\) approach
In the \(\bar{A}^{h}\) approach, the action is \(S_{h}=S_{\rm YM}+S_{\rm LdW}+S_{\rm 0LdWh}+S_{\gamma{\rm LdW}h}+S_{\rm LdW }h\) with
\[S_{\rm 0LdWh}=\int d^{d}x(\bar{\varphi}_{\mu}^{ad}(\bar{D}^{h})_{ \mu}^{ab}(D^{h})_{\mu}^{bc}\varphi_{\mu}^{cd}-\bar{\omega}_{\mu}^{ad}(\bar{D} ^{h})_{\mu}^{ab}(D^{h})_{\mu}^{bc}\omega_{\mu}^{cd})\;, \tag{22a}\] \[S_{\gamma{\rm LdW}h}=\gamma^{2}g\int d^{d}x\;f^{abc}[(a^{h})_{ \mu}^{a}-(\bar{A}^{h})_{\mu}^{a}](\varphi_{\mu}^{bc}+\bar{\varphi}_{\mu}^{bc}) -dV(N^{2}-1)\gamma^{4}\;,\] (22b) \[S_{\rm LdW}h=\int d^{d}x\left(i\tau^{a}(\bar{D}^{h})_{\mu}^{ab}(( a^{h})_{\mu}^{b}-(\bar{A}^{h})_{\mu}^{b})+\bar{\eta}^{a}(\bar{D}^{h})_{\mu}^{ ab}(D^{h})_{\mu}^{bc}\eta^{c}\right)\;. \tag{22c}\]
In these expressions, \(a^{h}\) is a transversal projection of the gluon field, \((D^{h})_{\mu}^{ab}=\delta^{ab}\partial_{\mu}-gf^{abc}(a^{h})_{\mu}^{c}\) is the covariant derivative using this \(a^{h}\) field, and \(\bar{D}^{h}\) is the covariant derivative containing \(\bar{A}^{h}\), the background in the minimal Landau gauge (_i.e._ in the absolute minimum of (23) 1). Notice that, when coupling the gauge transformed gauge field \(a^{h}\) to the localizing auxiliary fields \((\bar{\varphi},\varphi)\), we used \(a^{h}-\bar{A}^{h}\). This is because we are only interested in imposing the Gribov condition on the quantum fields, which are the fields we integrate over. This way the series of \(a^{h}-\bar{A}^{h}\) starts at first order in the quantum gauge fields. For the rationale hereof, see [117]. Furthermore, mark that this approach applies the Gribov construction to the operator \(-(\bar{D}^{h})_{\mu}^{ab}(D^{h})_{\mu}^{bc}\). The proof that this is sufficient is analogous to the one given in [117] and is for our case worked out in Appendix A.
Footnote 1: Mark that any \(\bar{A}_{\mu}^{a}=\delta_{\mu 0}\delta^{ai}rT/g\) for \(i\) in the Casimir obeys the Landau gauge \(\partial_{\mu}\bar{A}_{\mu}^{a}=0\), but this is not the _minimal_ Landau gauge aimed for.
Let us start with the background and put it in the minimal Landau gauge. This means we minimize
\[\int d^{d}x\;\bar{A}_{\mu}^{a}\bar{A}_{\mu}^{a} \tag{23}\]
over the gauge orbit. If (for SU(2)) we start from a constant \(\bar{A}_{0}^{3}=rT/g\), this means we need to bring \(r\) to a value \(-2\pi<r<2\pi\). The case for more that two colors is analogous.
The quantum fields are to be put in the Landau background gauge. To construct \((A^{h})^{a}_{\mu}\), we will use the background in its minimal Landau gauge form \((\bar{A}^{h})^{a}_{\mu}\), such that we will require \((\bar{D}^{h})^{ab}_{\mu}(a^{b}_{\mu}-(\bar{A}^{h})^{b}_{\mu})=0\). This can be obtained from minimization of
\[\int d^{d}x\Big{(}a^{a}_{\mu}-(\bar{A}^{h})^{a}_{\mu}\Big{)}\Big{(}a^{a}_{\mu} -(\bar{A}^{h})^{a}_{\mu}\Big{)}\;. \tag{24}\]
This corresponds to the recipe used in [117], with the important remark that for this paper we still worked at \(T=0\) with constant background fields \(\bar{A}^{h}\) in mind, effectively leading to \(\bar{A}^{h}=0\). At \(T>0\) and for the type of background gauge fields that interests us here, this is no longer true.
In [135], the case was made to keep working with \(a^{h}\) coming from minimizing \(\int a^{2}\), as this leads to both BRST and background gauge invariance of the Gribov-Zwanziger action. This is true, but a price is paid: the classical (background) sector enters the Gribov construction, not only the quantum fields. It is not yet clear how the approach outlined in [135] would deal with the terms that are linear in the quantum fields and which will enter the effective action due to this setup. We will therefore not consider the framework of [135] for what follows.
To minimize (24), let us work in a series in the quantum field. Starting from \(a^{a}_{\mu}\) we can perform a gauge transform
\[a_{\mu}\to h^{\dagger}a_{\mu}h+\frac{i}{g}h^{\dagger}\partial_{\mu}h\;, \tag{25}\]
where \(a_{\mu}=a^{a}_{\mu}\tau^{a}/2\). Expand the matrix of the gauge transform as \(h=h_{0}+h_{1}+\cdots\), where \(h_{0}\) is the gauge transform matrix bringing \(\bar{A}^{a}_{\mu}\) to \((\bar{A}^{h})^{a}_{\mu}\), \(h_{1}\) is first order in the quantum fields, and so on. Going to first order in the quantum fields, we have that
\[a^{h}_{\mu}-\bar{A}^{h}_{\mu}=h^{\dagger}_{0}A_{\mu}h_{0}+\frac{i}{g}\bar{D}^ {h}_{\mu}(h^{\dagger}_{0}h_{1})+\cdots\;. \tag{26}\]
Applying the gauge condition yields
\[\frac{i}{g}h^{\dagger}_{0}h_{1}=-\frac{1}{\bar{D}^{2}_{h}}\bar{D}^{h}_{\mu}(h^ {\dagger}_{0}A_{\mu}h_{0})+\cdots\;, \tag{27}\]
and some more algebra gives
\[a^{h}_{\mu}-\bar{A}^{h}_{\mu}=\left(\delta_{\mu\nu}-\bar{D}^{h}_{\mu}\frac{1} {\bar{D}^{2}_{h}}\bar{D}^{h}_{\nu}\right)(h^{\dagger}_{0}A_{\nu}h_{0})+\cdots\;. \tag{28}\]
We thus see that \(a^{h}\) is attained by first gauge transforming \(A^{a}_{\mu}\) using the adjoint of the gauge transform that set the background \(\bar{A}^{a}_{\mu}\) equal to its lowest value, after which a certain projection operator must be applied.
Let us now look at what the result (28) entails for the physics of the theory. We can always do a background gauge transformation on \(\bar{A}_{\mu}\), \(A_{\mu}\), \(\bar{c}\), \(c\), and \(b\) using the gauge matrix \(h_{0}\). This will have the effect that all background gauge fields \(\bar{A}_{\mu}\) in the parts \(S_{\rm YM}\) and \(S_{\rm LdW}\) become \(\bar{A}^{h}_{\mu}\); the parts \(S_{\rm 0LdWh}\), \(S_{\gamma\rm LdWh}\), and \(S_{h}\) remain unchanged as the gluon fields there appear in invariant combinations. Finally, once we have imposed the Landau-DeWitt gauge through \(S_{\rm LdW}\) (see (21)), the projection operator in (28) will simplify to a unit operator and we have that \(a^{h}_{\mu}-\bar{A}^{h}_{\mu}\to A_{\mu}+\cdots\).
It remains to discuss the BRST and background gauge invariance of (28), order per order in the quantum fields. Intuitively, it is clear that we will find a BRST invariant \(a^{h}\), since it corresponds to the minimum along the gauge orbit and BRST transformations correspond to local gauge transformations. To be more concrete, in the current case we have the following BRST symmetry generated by the operator \(s\):2
Footnote 2: In [136], a nonzero transformation of the background gauge field \(s\bar{A}_{\mu}=\Omega_{\mu}\) with \(\Omega_{\mu}\) an auxiliary background ghost field was used, but this is not necessary for our purposes here. It merely served to simplify the algebraic discussion and proof of renormalizability of [136]. The physical case is recovered when \(\Omega^{a}_{\mu}\to 0\), such that \((a^{h})^{a}_{\mu}\) is invariant.
\[s\bar{A}^{a}_{\mu}=0\;,\qquad sA^{a}_{\mu}=-D^{ab}_{\mu}c^{b}\;,\qquad sc^{a}= \frac{1}{2}gf^{abc}c^{b}c^{c}\;,\qquad s\bar{c}^{a}=-ib^{a}\;, \tag{29}\]
and all other transformations zero. This transformation gives, to leading order in the quantum fields
\[s(h^{\dagger}_{0}A_{\mu}h^{0})=-h^{\dagger}_{0}(D_{\mu}c)h_{0}=-h^{\dagger}_{0 }(\bar{D}_{\mu}c)h_{0}+\cdots=-\bar{D}^{h}_{\mu}(h^{\dagger}_{0}ch_{0})\;, \tag{30}\]
such that (28) is indeed invariant.
Showing background gauge invariance is straightforward: transforming the background with some adjoint matrix \(U\) needs to be undone by \(h_{0}\to U^{\dagger}h_{0}\) so as to keep \((\bar{A}^{h})^{a}_{\mu}\) at its minimal value. This then requires a gauge transform with \(U\) on \(A^{a}_{\mu}\), \(c^{a}\), \(\bar{c}^{a}\), \(b^{a}\), \(\tau^{a}\), \(\eta^{a}\), and \(\bar{\eta}^{a}\) transforming as matter fields (\(\Phi\to U^{\dagger}\Phi U\)) while the Gribov ghosts \(\varphi^{ab}_{\mu}\), \(\bar{\varphi}^{ab}_{\mu}\), \(\omega^{ab}_{\mu}\), and \(\bar{\omega}^{ab}_{\mu}\) remain invariant. One easily verifies that this then leaves the action invariant.
### Kroff-Reinosa approach
In the Kroff-Reinosa (KR) approach, the action is \(S_{h}=S_{\rm YM}+S_{\rm LdW}+S_{\rm 0KR}+S_{\rm\gamma KR}\) with
\[S_{\rm 0KR}=\int d^{d}x(\hat{\bar{\varphi}}^{ae}_{\mu}D^{ab}_{\nu}D^ {bc}_{\nu}\hat{\bar{\varphi}}^{ce}_{\mu}-\hat{\bar{\phi}}^{ae}_{\mu}D^{ab}_{\nu }D^{bc}_{\nu}\hat{\bar{\omega}}^{ce}_{\mu})\;, \tag{31a}\] \[S_{\rm\gamma KR}=\gamma^{2}g\int d^{d}x\ f^{abc}[a^{a}_{\mu}-\bar {A}^{a}_{\mu}](\varphi^{bc}_{\mu}+\bar{\varphi}^{bc}_{\mu})-dV(N^{2}-1)\gamma^ {4}\;, \tag{31b}\]
The hatted quantities here are defined as
\[\hat{\Phi}^{ab}_{\mu}(x)=\Phi^{ac}_{\mu}(x)\left(Pe^{ig\int_{C}dx^{\prime}_{ \nu}\bar{A}^{e}_{\nu}(x^{\prime})T^{a}}\right)^{cb}\;, \tag{31c}\]
for \(\Phi\) equal to \(\varphi\) or \(\omega\), and the Hermitian adjoint hereof for \(\bar{\varphi}\) and \(\bar{\omega}\). The path \(C\) connects the point \(x\) to some arbitrary and constant point \(x_{0}\), which (for the constant backgrounds we consider) does not influence the dynamics in any way [118]. Under gauge transformations of the background, the hatted quantities transform as matter fields with only _one_ index, as the path-ordered exponential in (31c) absorbs the background gauge transformation of the second index. This ensures the background invariance of the action.
In practice, the effect of the Wilson line in (31c) is rather technical to work out, but when the dust settles and one integrates out the \((\bar{\varphi},\varphi)\) fields, one obtains the gluon propagator term
\[2g^{2}(N^{2}-1)\gamma^{4}\delta_{\mu\nu}\left(\frac{1}{-\bar{D}^{2}}\right)^{ ab}\;, \tag{32}\]
as was used in [137]. The structure constants that usually flank the inverse Faddeev-Popov operator in this term are absent, which greatly simplifies the computations.
Kroff & Reinosa also proposed to introduce color-dependent Gribov parameters:
\[\Big{(}\gamma_{0}P^{ab}+\gamma_{\rm ch}(\delta^{ab}-P^{ab})\Big{)}A^{b}_{\mu}\;, \tag{33}\]
where \(P^{ab}\) is a projection operator on the "neutral" subspace of color space (in the terminology of [118]), see Appendix B for the explicit construction of this non-trivial operator, which we did not find in [118]. We will not consider the nondegenerate case, where there are \(N^{2}-1\) different Gribov parameters, but only the partially degenerate case, where all the Gribov parameters in the "charged" subspace are taken equal and denoted \(\gamma_{\rm ch}\).
In [118], the authors note the loss of BRST invariance. As we already stressed the importance of this BRST invariance to ensure that a physical (background) effective potential can be computed [10; 134], let us spend a few words here to show that the Kroff-Reinosa construction can be recast in a BRST-invariant formulation. On shell and in the Landau-DeWitt gauge, this will effectively collapse back to (31a), _a posteriori_ granting credit to the approach of [118]. The construction again relies on the definition of a BRST-invariant \(A^{h}\) field. However, given that the Kroff-Reinosa setup is already manifestly invariant under gauge transformations of the background, the \(h_{0}\) used in the previous subsection is spurious. (Remember that in the Kroff-Reinosa setup, the auxiliary fields transforms in the bi-adjoint. So using the construct (28) is not an option here, since it does not transform under background transformations.) This means we need an approach similar to the one used in [10].
As such, we minimize
\[\int d^{d}x(a^{a}_{\mu}-\bar{A}^{a}_{\mu})^{2}=\int d^{d}x(A^{a}_{\mu})^{2} \tag{34}\]
under infinitesimal gauge transformations \(\delta a^{a}_{\mu}=\delta A^{a}_{\mu}=D^{ab}_{\mu}\omega^{b}\) to find a field \((A^{h})^{a}_{\mu}\) (and the background does not transform, see [138; 139] for more details). Then in \(S_{\rm 0KR}\) we make the replacement \(DD\to D^{h}D^{h}\), where \((D^{h})^{ab}_{\mu}\) is the covariant derivative containing \(\bar{A}^{a}_{\mu}+(A^{h})^{a}_{\mu}\). This makes this part of the action BRST invariant. The part \(S_{\rm\gamma KR}\) already transforms correctly.
## V BRST-invariant condensates
This section presents a short review of the Local Composite Operator (LCO) formalism as proposed in [112] modified in the presence of a background field and the Gribov horizon.
### Dimension-two gluon condensate
A BRST analysis [10] (for BRST in the background gauge, see for example [136; 140]) shows that, for the LCO formalism to stay renormalizable, the dimension-two operator
\[(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2} \tag{35}\]
should be used. First, the source terms
\[\int d^{d}x\left(\tfrac{1}{2}J(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2}-\tfrac{1}{2} \zeta J^{2}\right) \tag{36}\]
are added to the action with \(J\) the source used to couple the operator to the theory. The term in \(J^{2}\) is necessary here for renormalizability of the generating functional of connected diagrams \(W(J)\) and, subsequently, of the associated generating functional of 1PI diagrams \(\Gamma\), known as the effective action. Here \(\zeta\) is a new coupling constant whose determination we will discuss later. In the physical vacuum, corresponding to \(J\to 0\), it should decouple again, at least if we were to do the computations exactly. At (any) finite order, \(\zeta\) will be explicitly present, even in physical observables, making it necessary to choose it as wisely as possibly. Notice that \(\zeta\) is _not_ a gauge parameter as it in fact couples to the BRST invariant quantity \(J^{2}\). Indeed, in a BRST invariant theory, we expect the gauge parameter to explicitly cancel order per order from physical observables, a fact guaranteed by _e.g._ the Nielsen identities [141], which are in themselves a consequence of BRST invariance [142]. Thanks to \(\zeta\), the Lagrangian remains multiplicatively renormalizable (see [10]).
To actually compute the effective potential, it is computationally simplest to rely on Jackiw's background field method [143]. Before integrating over any fluctuating quantum fields, a Legendre transform is performed, so that formally \(\sigma=\tfrac{1}{2}(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2}-\zeta J\). Plugging this into the Legendre transformation between \(\Gamma\) and \(W\), we find that we could just as well have started from the original path integral with the following unity inserted into it:3
Footnote 3: We normalize \(\sigma\) like in [76].
\[1=\mathcal{N}\int[\mathcal{D}\sigma]e^{-\frac{1}{2}\int d^{d}x\left(\sigma+ \tfrac{1}{2\sqrt{\zeta}}(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2}\right)^{2}}\;, \tag{37}\]
with \(\mathcal{N}\) an irrelevant constant. This is equivalent to a Hubbard-Stratonovich transformation, see for instance [112; 124], and it also evades the interpretational issues for the energy when higher-than-linear terms in the sources are present. Of course, if we could integrate the path integral exactly, this unity would not change a thing. The situation only gets interesting if the perturbative dynamics of the theory assign a non-vanishing vacuum expectation value to \(\sigma\). As such, this \(\sigma\) field allows to include potential non-perturbative information through its vacuum expectation value. In the case without a background, \(\sigma\) does indeed condense and a vacuum with \(\langle\sigma\rangle\neq 0\) is preferred.
For the record, BRST invariance is ensured if we assign \(s\sigma=-s\left(\tfrac{1}{2}(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2}\right)\), which implies off-shell that \(s\sigma=0\) thanks to the BRST invariance of \(a_{\mu}^{h}-\bar{A}_{\mu}^{h}\).
It is evident that \(\zeta\) can be interpreted as a genuine new coupling constant. Therefore, we now have two coupling constants, \(g^{2}\) and \(\zeta\), with \(g^{2}\) running as usual, that is: independently of \(\zeta\). This makes our situation suitable for the Zimmermann reduction of couplings program [144], see also [145] for a recent overview. In this program, one coupling (\(\zeta\) in our case) is re-expressed as a series in the other (here \(g^{2}\)), so that the running of \(\zeta\) controlled by \(\zeta(g^{2})\) is then automatically satisfied, see also [124]. More specifically, \(\zeta(g^{2})\) is determined such that the generating functional of connected Green functions, \(W(J)\), obeys a standard, linear renormalization group equation [112].
This selects one consistent coupling \(\zeta(g^{2})\) from a whole space of allowed couplings, and it is also the unique choice compatible with multiplicative renormalizability [112]. Given that \(\zeta\) should, in principle, not affect physics, we can safely rely here on this special choice, already made earlier in _e.g._[112]. This choice seems also to be a natural one from the point of view of the loop expansion of the background potential to be used below. In the \(\overline{\text{MS}}\) scheme, one finds [112; 146]
\[\zeta=\frac{N^{2}-1}{g^{2}N}\left(\frac{9}{13}+\frac{g^{2}N}{16 \pi^{2}}\frac{161}{52}+\mathcal{O}(g^{4})\right)\;, \tag{38a}\] \[Z_{\zeta}=1-\frac{g^{2}N}{16\pi^{2}}\frac{13}{3\epsilon}+ \mathcal{O}(g^{2})\;,\] (38b) \[Z_{J}=1-\frac{Ng^{2}}{16\pi^{2}}\frac{35}{6\epsilon}+\mathcal{O}(g^{2})\;, \tag{38c}\]
where \(Z_{\zeta}\), \(Z_{J}\) are the renormalization factors of \(\zeta J^{2}\), \(J\) respectively.
### Refined Gribov-Zwanziger action
In [70; 71; 76], it was noticed that the Gribov-Zwanziger formalism in Landau gauge is disturbed by non-perturbative dynamical instabilities, caused by the formation of dimension-two condensates, \(\langle A_{\mu}^{a}A_{\mu}^{a}\rangle\), \(\langle\bar{\varphi}_{\mu}^{ab}\varphi_{\mu}^{ab}-\bar{\omega}_{\mu}^{ab}\omega_ {\mu}^{ab}\rangle\), and/or \(\langle\bar{\varphi}_{\mu}^{ab}\varphi_{\mu}^{ab}\rangle\), which are energetically favored. Similar features were later noticed in the Maximal Abelian gauge Gribov-Zwanziger formulation [101; 147]. This led to the Refined Gribov-Zwanziger formalism, that explicitly takes the effects of these condensates into account.
The construction for the localizing-ghost condensates is analogous to that for the dimension-two gluon condensate. For the couplings and renormalization factors involved, we refer to the literature, see e.g. [112; 116; 110] and references therein.
The original proposal for the refinement to the Gribov-Zwanziger formalism [71] used the symmetric condensate \(\langle\bar{\varphi}_{\mu}^{ab}\varphi_{\mu}^{ab}-\bar{\omega}_{\mu}^{ab} \omega_{\mu}^{ab}\rangle\). This condensate has the advantage that it is immediately finite and strictly speaking no source-squared term (in the vein of the last term of (36)) is necessary. As a result, however, the gap equation for the condensate has no nonperturbative solutions. The Hubbard-Stratonovich transformation becomes useless here and as a result, there is no "classical" quadratic part for the potential for the condensate. We will circumvent this issue in the following section.
Using instead the philosophy of the approach starting from the analogon of (37) does not run into this problem, though, at the cost of introducing one truly new free coupling. In the following, we will call this approach the "symmetric" case.
Later approaches [76] focused on the condensate \(\langle\bar{\varphi}_{\mu}^{ab}\varphi_{\mu}^{ab}\rangle\). The \(T=0\) case was fully explored in [116], which can be immediately used as the starting point for the study of the Polyakov loop. In the following, this approach will be called the "\(\bar{\varphi}\varphi\)" case.
## VI Zero temperature jumping board
### Relevant parts of action
To compute the effective action at first order in the quantum corrections, we need the background part (classical part) and the quadratic terms of the action (of which we will need the trace-logarithm to compute the first-order quantum corrections).
The first term of the semi-classical perturbation series consists of the background terms. We only consider backgrounds with \(F_{\mu\nu}^{a}=0\), such that these background terms will only come from the LCO parts and from the Gribov-Zwanziger action. First we review some of the relevant formulae, which can be found in the literature.
From the Gribov-Zwanziger action we get, with the \(Z\) factors restored and in the more general renormalization scheme of [116],
\[-dV(N^{2}-1)Z_{\gamma}^{2}\gamma^{4}\;,\qquad Z_{\gamma}=1+\frac{b_{0}}{2} \frac{Ng^{2}}{(4\pi)^{2}}+\frac{3}{8}\frac{Ng^{2}}{(4\pi)^{2}}\frac{2}{\epsilon }\;. \tag{39}\]
In \(d=4-\epsilon\), this gives
\[-4V(N^{2}-1)\left(1-\frac{3}{8}\frac{Ng^{2}}{(4\pi)^{2}}+b_{0}\frac{Ng^{2}}{( 4\pi)^{2}}+\frac{3}{4}\frac{Ng^{2}}{(4\pi)^{2}}\frac{2}{\epsilon}\right)\gamma ^{4}\;. \tag{40}\]
To this we add the LCO part. The "usual" LCO part is:
\[S_{\rm LCO}=\int d^{4}x\left[\frac{1}{2}\sigma^{2}+\frac{1}{2\sqrt{\zeta}} \sigma(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2}+\frac{1}{8\zeta}((a_{\mu}^{h}-\bar{A }_{\mu}^{h})^{2})^{2}\right]\;. \tag{41}\]
To add the \(\bar{\varphi}\varphi\) condensate, we need instead
\[S_{A^{2}+\bar{\varphi}\varphi}= \int d^{4}x\left[\frac{1}{2}\sigma_{1}^{2}-\frac{1}{2}\sigma_{2}^ {2}+\frac{1}{2\sqrt{\zeta}}\sigma_{1}(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2}- \sqrt{\frac{\zeta}{-2\alpha\zeta+\chi^{2}}}\sigma_{2}(\bar{\varphi}\varphi- \frac{\chi}{2\zeta}(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2})\right.\] \[\left.+\frac{1}{8\zeta}((a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2})^{2}- \frac{1}{2}\frac{\zeta}{-2\alpha\zeta+\chi^{2}}(\bar{\varphi}\varphi-\frac{ \chi}{2\zeta}(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2})^{2}\right]\;. \tag{42}\]
The background part with the renormalization factors restored is just
\[\int d^{4}x\left[\frac{1}{2Z_{\zeta}}\sigma_{1}^{2}-\frac{1}{2Z_{\alpha}} \sigma_{2}^{2}\right]\;,\qquad Z_{\zeta}^{-1}=1+\frac{13}{6}\frac{Ng^{2}}{(4 \pi)^{2}}\frac{2}{\epsilon}\;,\qquad Z_{\alpha}^{-1}=1-\frac{35}{12}\frac{Ng^{ 2}}{(4\pi)^{2}}\frac{2}{\epsilon}\;. \tag{43}\]
In the symmetric case we can just generalize the normal LCO case (because there is no mixing):
\[S_{\rm sym}= \int d^{4}x\left[\frac{1}{2}\sigma_{1}^{2}+\frac{1}{2\sqrt{\zeta}} \sigma_{1}(a_{\mu}^{h}-\bar{A}_{\mu}^{h})^{2}+\frac{1}{8\zeta}((a_{\mu}^{h}- \bar{A}_{\mu}^{h})^{2})^{2}\right]\] \[+\int d^{4}x\left[\frac{1}{2}\sigma_{2}^{2}-\frac{1}{\sqrt{\beta}} \sigma_{2}(\bar{\varphi}\varphi-\bar{\omega}\omega)+\frac{1}{2\beta}(\bar{ \varphi}\varphi-\bar{\omega}\omega)^{2}\right]\;. \tag{44}\]
Here, \(\beta\) is a new (free) coupling constant that will require determination. This \(\beta\) cannot be fixed from renormalization-group requirements as is the case with \(\zeta\), this due to the aforementioned lack of quadratic divergences after introducing the symmetric condensate. This means \(\beta\) is a non-running parameter that can be freely chosen; we determine a value for it in Appendix C.
The background part with the \(Z\) factors restored is just
\[\int d^{4}x\left[\frac{1}{2Z_{\zeta}}\sigma_{1}^{2}+\frac{1}{2}\sigma_{2}^{2} \right]\;,\qquad Z_{\zeta}^{-1}=1+\frac{13}{6}\frac{Ng^{2}}{(4\pi)^{2}}\frac{ 2}{\epsilon}\;. \tag{45}\]
We can now write down the background and quadratic parts for the cases we consider in this paper. At zero temperature, the gluon background field does not yet need to be included.
The full background part (classical part) in the \(\bar{\varphi}\varphi\) case is
\[\int d^{4}x\left[-\frac{2(N^{2}-1)}{Ng^{2}}\left(1-\frac{3}{8} \frac{Ng^{2}}{(4\pi)^{2}}+b_{0}\frac{Ng^{2}}{(4\pi)^{2}}+\frac{3}{4}\frac{Ng^{ 2}}{(4\pi)^{2}}\frac{2}{\epsilon}\right)\lambda^{4}\right.\] \[\left.+\frac{9}{26}\frac{N^{2}-1}{Ng^{2}}\left(1+\frac{13}{6} \frac{Ng^{2}}{(4\pi)^{2}}\frac{2}{\epsilon}\right)m^{4}-\frac{24}{35}\frac{( N^{2}-1)^{2}}{Ng^{2}}\left(1-\frac{35}{12}\frac{Ng^{2}}{(4\pi)^{2}}\frac{2}{ \epsilon}\right)M^{4}\right]\;.\] (46a) where we defined \[\lambda^{4}=2Ng^{2}\gamma^{4}\;, \tag{46b}\] \[m^{2}=\left.\frac{1}{\sqrt{\zeta}}\right|_{\rm leading}\sigma_{1}= \sqrt{\frac{13}{9}}\frac{Ng^{2}}{N^{2}-1}\sigma_{1}\;,\qquad M^{2}=\left.\sqrt {\frac{\zeta}{-2\alpha\zeta+\chi^{2}}}\right|_{\rm leading}\sigma_{2}=\sqrt{ \frac{35}{48}\frac{Ng^{2}}{(N^{2}-1)^{2}}}\sigma_{2}\;. \tag{46c}\]
The quadratic part of the action is:
\[\int d^{4}x\left(\frac{1}{2}A_{\mu}^{a}\left(-\delta_{\mu\nu} \partial^{2}+\left(1-\frac{1}{\xi}\right)\partial_{\mu}\partial_{\nu}\right)A _{\nu}^{a}+\bar{c}^{a}\partial^{2}c^{a}+U_{\mu}^{ab}\partial^{2}U_{\mu}^{ab}+ V_{\mu}^{ab}\partial^{2}V_{\mu}^{ab}\right.\] \[\left.-\bar{\omega}_{\mu}^{ab}\partial^{2}\omega_{\mu}^{ab}-2 \gamma^{2}gf^{abc}A_{\mu}^{a}U_{\mu}^{bc}+\frac{m^{2}}{2}A^{2}-M^{2}(U^{2}+V^ {2})\right)\;.\] (47a) where \[U_{\mu}^{ab}=\frac{1}{2}(\varphi_{\mu}^{ab}+\bar{\varphi}_{\mu}^{ab})\;, \qquad V_{\mu}^{ab}=\frac{i}{2}(\varphi_{\mu}^{ab}-\bar{\varphi}_{\mu}^{ab})\;. \tag{47b}\]
The full background part (classical part) in the symmetric case is
\[\int d^{4}x\left[-\frac{2(N^{2}-1)}{Ng^{2}}\left(1-\frac{3}{8} \frac{Ng^{2}}{(4\pi)^{2}}+b_{0}\frac{Ng^{2}}{(4\pi)^{2}}+\frac{3}{4}\frac{Ng^{ 2}}{(4\pi)^{2}}\frac{2}{\epsilon}\right)\lambda^{4}+\frac{9}{26}\frac{N^{2}-1 }{Ng^{2}}\left(1+\frac{13}{6}\frac{Ng^{2}}{(4\pi)^{2}}\frac{2}{\epsilon}\right)m ^{4}+\frac{\beta}{2}M^{4}\right]\;,\] (48a) where \[\lambda^{4}=2Ng^{2}\gamma^{4}\;,\qquad m^{2}=\left.\frac{1}{\sqrt{\zeta}} \right|_{\rm leading}\sigma_{1}=\sqrt{\frac{13}{9}}\frac{Ng^{2}}{N^{2}-1} \sigma_{1}\;,\qquad M^{2}=\frac{1}{\sqrt{\beta}}\sigma_{2}\;.\] (48b) The quadratic part of the action is: \[\int d^{4}x\left(\frac{1}{2}A_{\mu}^{a}\left(-\delta_{\mu\nu} \partial^{2}+\left(1-\frac{1}{\xi}\right)\partial_{\mu}\partial_{\nu}\right) A_{\nu}^{a}+\bar{c}^{a}\partial^{2}c^{a}+U_{\mu}^{ab}\partial^{2}U_{\mu}^{ab}+V_{\mu}^{ ab}\partial^{2}V_{\mu}^{ab}\right.\] \[\left.-\bar{\omega}_{\mu}^{ab}\partial^{2}\omega_{\mu}^{ab}-2 \gamma^{2}gf^{abc}A_{\mu}^{a}U_{\mu}^{bc}+\frac{m^{2}}{2}A^{2}-M^{2}(U^{2}+V^ {2}-\bar{\omega}\omega)\right)\;.\] (49a) where \[U_{\mu}^{ab}=\frac{1}{2}(\varphi_{\mu}^{ab}+\bar{\varphi}_{\mu}^{ab})\;, \qquad V_{\mu}^{ab}=\frac{i}{2}(\varphi_{\mu}^{ab}-\bar{\varphi}_{\mu}^{ab})\;. \tag{49b}\]
### Effective actions at zero temperature
The logarithmic trace of the operators is
\[\frac{1}{2}\operatorname{tr}\ln\left(\delta^{ab}\left(\delta_{\mu \nu}(p^{2}+m^{2})-\left(1-\frac{1}{\xi}\right)p_{\mu}p_{\nu}\right)\right.\qquad \left.-2\gamma^{2}gf^{acf}\delta_{\mu\nu}\right.\] \[\qquad\qquad\qquad\qquad\left.-(N^{2}-1)\operatorname{tr}\ln(p^{ 2})+\frac{d}{2}(N^{2}-1)^{2}\operatorname{tr}\ln(p^{2}+M^{2})-d(N^{2}-1)^{2} \operatorname{tr}\ln(p^{2}+sM^{2})\right.\] \[\qquad\qquad\qquad=\frac{1}{2}(N^{2}-1)(d-1)\operatorname{tr}\ln \left(p^{2}+m^{2}+\frac{\lambda^{4}}{p^{2}+M^{2}}\right)-\frac{1}{2}(N^{2}-1) \operatorname{tr}\ln(p^{2})+d(N^{2}-1)^{2}\operatorname{tr}\ln\frac{p^{2}+M^{ 2}}{p^{2}+sM^{2}}\;, \tag{50}\]
were we took the limit \(\xi\to 0\), and \(s=0\) for the \(\bar{\varphi}\varphi\) approach and \(s=1\) for the symmetric approach. The first \(\operatorname{tr}\ln\) can be rewritten as
\[\frac{1}{2}(N^{2}-1)(d-1)\bigg{(}\operatorname{tr}\ln(p^{2}+z_{+})+ \operatorname{tr}\ln(p^{2}+z_{-})-\operatorname{tr}\ln(p^{2}+M^{2})\bigg{)}\;, \tag{51}\]
where
\[z_{\pm}=\frac{1}{2}\left(m^{2}+M^{2}\pm i\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}} \right)\;. \tag{52}\]
Computing the trace in \(d=4-\epsilon\) dimensions gives
\[-\frac{3}{4(4\pi)^{2}}(N^{2}-1)\left((z_{+}^{2}+z_{-}^{2}-M^{4})\left(\frac{2 }{\epsilon}+\frac{5}{6}\right)-z_{+}^{2}\ln\frac{z_{+}}{\bar{\mu}^{2}}-z_{-}^ {2}\ln\frac{z_{-}}{\bar{\mu}^{2}}+M^{4}\ln\frac{M^{2}}{\bar{\mu}^{2}}\right)\;. \tag{53}\]
Given
\[z_{+}^{2}+z_{-}^{2}-M^{4}=m^{4}-2\lambda^{4}\;, \tag{54a}\] \[z_{+}z_{-}=m^{2}M^{2}+\lambda^{4}\;,\] (54b) \[z_{+}^{2}-z_{-}^{2}=i(m^{2}+M^{2})\sqrt{4\lambda^{4}-(m^{2}-M^{ 2})^{2}}\;,\] (54c) \[z_{+}^{2}\ln\frac{z_{+}}{\bar{\mu}^{2}}+z_{-}^{2}\ln\frac{z_{-} }{\bar{\mu}^{2}}=\frac{z_{+}^{2}+z_{-}^{2}}{2}\ln\frac{z_{+}z_{-}}{\bar{\mu}^{ 4}}+\frac{z_{+}^{2}-z_{-}^{2}}{2}\ln\frac{z_{+}}{z_{-}}\;,\] (54d) \[\ln\frac{z_{+}}{z_{-}}=2i\arctan\frac{\sqrt{4\lambda^{4}-(m^{2}-M^{ 2})^{2}}}{m^{2}+M^{2}}\;, \tag{54e}\]
we get for the trace:
\[-\frac{3}{4(4\pi)^{2}}(N^{2}-1)\left((m^{4}-2\lambda^{4})\left( \frac{2}{\epsilon}+\frac{5}{6}\right)-\frac{1}{2}(m^{4}+M^{4}-2\lambda^{4}) \ln\frac{m^{2}M^{2}+\lambda^{4}}{\bar{\mu}^{4}}\right.\\ \left.+(m^{2}+M^{2})\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}\arctan \frac{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}{m^{2}+M^{2}}+M^{4}\ln\frac{M^{2} }{\bar{\mu}^{2}}\right)\\ =-\frac{3}{4(4\pi)^{2}}(N^{2}-1)\left((m^{4}-2\lambda^{4})\left( \frac{2}{\epsilon}+\frac{5}{6}-\frac{1}{2}\ln\frac{m^{2}M^{2}+\lambda^{4}}{ \bar{\mu}^{4}}\right)\right.\\ \left.+(m^{2}+M^{2})\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}\arctan \frac{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}{m^{2}+M^{2}}-\frac{1}{2}M^{4}\ln \frac{m^{2}M^{2}+\lambda^{4}}{M^{4}}\right)\;. \tag{55}\]
The last \(\operatorname{tr}\ln\) is
\[d(N^{2}-1)^{2}\operatorname{tr}\ln\frac{p^{2}+M^{2}}{p^{2}+sM^{2}}=(1-s)d(N^{ 2}-1)^{2}\operatorname{tr}\ln(p^{2}+M^{2})=-\frac{2M^{4}}{(4\pi)^{2}}(1-s)(N^{ 2}-1)^{2}\left(\frac{2}{\epsilon}+1-\ln\frac{M^{2}}{\bar{\mu}^{2}}\right)\;. \tag{56}\]
In the \(\bar{\varphi}\varphi\) approach we have
\[\Gamma_{\bar{\varphi}\varphi}(m^{2},M^{2},\lambda^{4})=-\frac{2(N^{2 }-1)}{Ng^{2}}\left(1-\frac{3}{8}\frac{Ng^{2}}{(4\pi)^{2}}+b_{0}\frac{Ng^{2}}{(4 \pi)^{2}}\right)\lambda^{4}+\frac{9}{26}\frac{N^{2}-1}{Ng^{2}}m^{4}-\frac{24}{ 35}\frac{(N^{2}-1)^{2}}{Ng^{2}}M^{4}\\ -\frac{3}{4(4\pi)^{2}}(N^{2}-1)\left((m^{4}-2\lambda^{4})\left( \frac{5}{6}-\frac{1}{2}\ln\frac{m^{2}M^{2}+\lambda^{4}}{\bar{\mu}^{4}}\right) \right.\\ \left.+(m^{2}+M^{2})\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}\arctan \frac{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}{m^{2}+M^{2}}-\frac{1}{2}M^{4}\ln \frac{m^{2}M^{2}+\lambda^{4}}{M^{4}}\right)\\ -\frac{2M^{4}}{(4\pi)^{2}}(N^{2}-1)^{2}\left(1-\ln\frac{M^{2}}{ \bar{\mu}^{2}}\right)\;. \tag{57}\]
In the symmetric approach we get instead
\[\Gamma_{\rm sym}(m^{2},M^{2},\lambda^{4})=-\frac{2(N^{2}-1)}{Ng^{2 }}\left(1-\frac{3}{8}\frac{Ng^{2}}{(4\pi)^{2}}+b_{0}\frac{Ng^{2}}{(4\pi)^{2}} \right)\lambda^{4}+\frac{9}{26}\frac{N^{2}-1}{Ng^{2}}m^{4}+\frac{\beta}{2}M^{4} \\ -\frac{3}{4(4\pi)^{2}}(N^{2}-1)\left((m^{4}-2\lambda^{4})\left( \frac{5}{6}-\frac{1}{2}\ln\frac{m^{2}M^{2}+\lambda^{4}}{\bar{\mu}^{4}}\right) \right.\\ \left.+(m^{2}+M^{2})\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}\arctan \frac{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}{m^{2}+M^{2}}-\frac{1}{2}M^{4}\ln \frac{m^{2}M^{2}+\lambda^{4}}{M^{4}}\right)\;. \tag{58}\]
In order to determine the free parameters (\(b_{0}\), \(\bar{\mu}^{2}\), \(g^{2}\), \(\beta\)) and the zero-temperature condensates (\(m_{0}^{2}\), \(M_{0}^{2}\), \(\lambda_{0}^{4}\)), we have the following constraints:
* Gribov gap equation \(\frac{\partial\Gamma}{\partial\lambda^{4}}(m_{0}^{2},M_{0}^{2},\lambda_{0}^{4 })=0\),
* LCO gap equation for \(A^{2}\) condensate \(\frac{\partial\Gamma}{\partial m^{2}}(m_{0}^{2},M_{0}^{2},\lambda_{0}^{4})=0\),
* LCO gap equation for Gribov ghost condensate \(\frac{\partial\Gamma}{\partial M^{2}}(m_{0}^{2},M_{0}^{2},\lambda_{0}^{4})=0\),
* renormalization group \(\frac{(4\pi)^{2}}{Ng^{2}}=\frac{11}{3}\ln\frac{\bar{\mu}^{2}}{\lambda_{\rm MS }^{4}}\), with \(\Lambda_{\overline{\rm MS}}=0.224\,\)GeV in SU(3) and \(0.331\,\)GeV in SU(2) [148; 149],
* two pole masses: \(x_{0}=\frac{1}{2}(m^{2}+M^{2})\), \(y_{0}=\frac{1}{2}\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}\).
In the \(\bar{\varphi}\varphi\) approach this gives six constraints for six degrees of freedom. In the symmetric approach there is one more free parameter (\(\beta\)), leaving us with the freedom to choose \(\bar{\mu}^{2}\) to one of the scales in the logarithms. These scales are not too different from one another; we choose \(\bar{\mu}^{2}=\sqrt{m^{2}M^{2}+\lambda^{4}}=\sqrt{x_{0}^{2}+y_{0}^{2}}\).
The gluon propagator has poles at the values \(p_{\pm}^{2}=x_{0}\pm iy_{0}\); in SU(3) we have [150]\(x_{0}=0.261\,\)GeV\({}^{2}\) and \(y_{0}=0.465\,\)GeV\({}^{2}\), and in SU(2) we have [151]\(x_{0}=0.29\,\)GeV\({}^{2}\) and \(y_{0}=0.66\,\)GeV\({}^{2}\).
In the \(\bar{\varphi}\varphi\) approach we find [116] for SU(3): \(b_{0}=-3.42\), \(\bar{\mu}=0.31\,\)GeV; and for SU(2): \(b_{0}=-1.6\), \(\bar{\mu}=0.37\,\)GeV.
The symmetric approach is worked out in the Appendix C.
## VII Finite temperature
To reduce clutter in the subsequent subsections, let us introduce the following shorthands:
\[P_{\kappa}^{2}=(2\pi n+\kappa r)^{2}T^{2}+\vec{p}^{2}\;, \tag{59a}\] \[I(\Delta,r,T)=T\int\frac{d^{3}p}{(2\pi)^{3}}\ln\left(1-2e^{- \sqrt{\vec{\rho}^{2}+\Delta}/T}\cos r+e^{-2\sqrt{\vec{\rho}^{2}+\Delta}/T} \right)\;,\] (59b) \[I(\Delta,0,T)=2T\int\frac{d^{3}p}{(2\pi)^{3}}\ln\left(1-e^{-\sqrt{\vec{\rho}^{2}+ \Delta}/T}\right)\;. \tag{59c}\]
### Trace-logarithms
With a constant background \((\bar{A}_{\mu}^{a})^{h}=\delta^{a3}\delta_{\mu 0}rT/g\) (\(-2\pi<r<2\pi\)) in SU(2), we have that
\[\bar{D}_{\mu}^{\kappa}=\partial_{\mu}+i\kappa rT\delta_{\mu 0}\;, \tag{60}\]
where we used the conventions in Appendix D.1. As such the eigenvalues of \(-\bar{D}_{h}^{2}\) are \(P_{\kappa}^{2}\). In SU(2), the last two \(\mathrm{tr}\ln\)'s in (50) thus give the finite-temperature correction
\[\left(-\frac{1}{2}-12(1-s)\right)(I(0,r,T)+I(0,0,T)+I(0,-r,T))+12(1- s)(I(M^{2},r,T)+I(M^{2},0,T)+I(M^{2},-r,T))\\ =\left(-\frac{1}{2}-12(1-s)\right)\left(2I(0,r,T)-\frac{\pi^{2}T ^{2}}{45}\right)+12(1-s)(2I(M^{2},r,T)+I(M^{2},0,T))\;, \tag{61}\]
where we used the symmetry of \(I(\Delta,r,T)\) under \(r\to-r\).
In SU(3), charge conjugation invariance implies [118] it is enough to consider the background \((\bar{A}_{\mu}^{a})^{h}=\delta^{a3}\delta_{\mu 0}rT/g\) (\(-2\pi<r<2\pi\)). With the conventions in Appendix D.2, \(\hat{D}_{\mu}^{h}\) evaluates to:
\[\mathrm{v}_{3,8} : \partial_{\mu}\;, \tag{62a}\] \[\mathrm{v}_{1}^{\pm} : \partial_{\mu}\pm irT\delta_{\mu 0}\;,\] (62b) \[\mathrm{v}_{2}^{\pm} : \partial_{\mu}\pm\tfrac{i}{2}rT\delta_{\mu 0}\;,\] (62c) \[\mathrm{v}_{3}^{\pm} : \partial_{\mu}\pm\tfrac{i}{2}rT\delta_{\mu 0}\;. \tag{62d}\]
This allows us to compute the finite-temperature correction to the last two \(\mathrm{tr}\ln\)'s in (50) in SU(3):
\[\left(-\frac{1}{2}-32(1-s)\right)\left(2I(0,0,T)+I(0,r,T)+I(0,-r, T)+2I(0,\tfrac{r}{2},T)+2I(0,-\tfrac{r}{2},T)\right)\\ +32(1-s)\left(2I(M^{2},0,T)+I(M^{2},r,T)+I(M^{2},-r,T)+2I(M^{2}, \tfrac{r}{2},T)+2I(M^{2},-\tfrac{r}{2},T)\right)\\ =2\left(-\frac{1}{2}-32(1-s)\right)\left(-\frac{\pi^{2}T^{2}}{45} +I(0,r,T)+2I(0,\tfrac{r}{2},T)\right)+64(1-s)\left(I(M^{2},0,T)+I(M^{2},r,T)+ 2I(M^{2},\tfrac{r}{2},T)\right)\;. \tag{63}\]
The gluon trace-logarithm (the first trace in the last line of (50)) is more complicated. In the \(\bar{A}^{h}\) approach, the Gribov term is, at finite temperature, replaced with
\[\delta_{\mu\nu}\delta^{ab}\frac{\lambda^{4}}{p^{2}+M^{2}}\to\delta_{\mu\nu} \frac{\lambda^{4}}{N}f^{ace}\left(\frac{1}{P^{2}+M^{2}}\right)^{cd}f^{dbe}\;. \tag{64}\]
To evaluate this, we use (76) for SU(2) and (82) for SU(3).
In SU(2), the eigenvalues of the quadratic gluon operator (the analogon of the first term of the last line of (50)) are
\[P_{\pm}^{2}+m^{2}+\frac{\lambda^{4}}{2}\left(\frac{1}{P_{\pm}^{ 2}+M^{2}}+\frac{1}{P_{0}^{2}+M^{2}}\right)\;,\text{ and} \tag{65}\] \[P_{0}^{2}+m^{2}+\frac{\lambda^{4}}{2}\left(\frac{1}{P_{+1}^{2}+ M^{2}}+\frac{1}{P_{-1}^{2}+M^{2}}\right)\;.\]
For the trace-logarithm, this gives \(\frac{d-1}{2}\) times
\[\ln\left((P_{\pm}^{2}+m^{2})(P_{\pm}^{2}+M^{2})(P_{0}^{2}+M^{2})+ \frac{\lambda^{4}}{2}(P_{\pm}^{2}+M^{2}+P_{0}^{2}+M^{2})\right)\\ +\ln\left((P_{0}^{2}+m^{2})(P_{+1}^{2}+M^{2})(P_{-1}^{2}+M^{2})+ \frac{\lambda^{4}}{2}(P_{+1}^{2}+M^{2}+P_{-1}^{2}+M^{2})\right)\\ -2\ln(P_{0}^{2}+M^{2})-2\ln(P_{\pm}^{2}+M^{2})\;, \tag{66}\]
where the indices "\(\pm\)" need to be summed over. The terms on the last line give (after multiplication with \(\frac{d-1}{2}\) and taking the trace)
\[-9\operatorname{tr}_{T=0}\ln(-\partial^{2}+M^{2})-6I(M^{2},r,T)-3I(M^{2},0,T)\;. \tag{67}\]
What is left are three sixth-order polynomials in \(n\).4 In order to deal with them, we use (E4). This is straightforward to implement numerically, but does considerably slow down the computations.
Footnote 4: The second one, from the \(r=0\) state, is actually a third-order polynomial in \(n^{2}\), which can be factored, but handling this one numerically as well saves handwork and does not waste relatively that much more time doing numerics.
In SU(3), the eigenvalues of the gluon propagator are
\[\mathsf{v}_{3} : P_{0}^{2}+m^{2}+\frac{\lambda^{4}}{3}\left(\frac{1}{P_{+1}^{2}+ M^{2}}+\frac{1}{P_{-1}^{2}+M^{2}}+\tfrac{1}{2}\frac{1}{P_{+1/2}^{2}+M^{2}}+ \tfrac{1}{2}\frac{1}{P_{-1/2}^{2}+M^{2}}\right)\;, \tag{68a}\] \[\mathsf{v}_{8} : P_{0}^{2}+m^{2}+\frac{\lambda^{4}}{2}\left(\frac{1}{P_{+1/2}^{2} +M^{2}}+\frac{1}{P_{-1/2}^{2}+M^{2}}\right)\;,\] (68b) \[\mathsf{v}_{1}^{\pm} : P_{\pm 1}^{2}+m^{2}+\frac{\lambda^{4}}{3}\left(\frac{1}{P_{0}^{2}+M^ {2}}+\frac{1}{P_{\pm 1}^{2}+M^{2}}+\frac{1}{P_{\pm 1/2}^{2}+M^{2}}\right)\;,\] (68c) \[\mathsf{v}_{2}^{\pm},\mathsf{v}_{3}^{\mp} : P_{\pm 1/2}^{2}+m^{2}+\frac{\lambda^{4}}{3}\left(\frac{1}{P_{0}^{2}+ M^{2}}+\tfrac{1}{2}\frac{1}{P_{\pm 1}^{2}+M^{2}}+\tfrac{1}{P_{\pm 1/2}^{2}+M^{2}}+ \tfrac{1}{2}\frac{1}{P_{\mp 1/2}^{2}+M^{2}}\right)\;. \tag{68d}\]
The trace-logarithm now gives polynomials up to tenth order, for which we again use (E4), and the denominators lead to the subtraction
\[-\frac{d-1}{2}\Bigg{(}6\operatorname{tr}\ln(P_{0}^{2}+M^{2})+4 \operatorname{tr}\ln(P_{\pm 1}^{2}+M^{2})+7\operatorname{tr}\ln(P_{\pm\frac{1}{2}}^{2}+M^{2}) \Bigg{)}\\ =-42\operatorname{tr}_{T=0}\ln(-\partial^{2}+M^{2})-9I(M^{2},0,T) -12I(M^{2},r,T)-21I(M^{2},\tfrac{r}{2},T)\;. \tag{69}\]
In the Kroff-Reinosa approach, the Gribov term is, at finite temperature, replaced with
\[\delta_{\mu\nu}\delta^{ab}\frac{\lambda^{4}}{p^{2}+M^{2}}\to\delta_{\mu\nu} \delta^{ab}\frac{\lambda^{4}}{P^{2}+M^{2}}\;. \tag{70}\]
This gives instead: For SU(2) \(\frac{d-1}{2}\) times
\[\ln\left(P_{\pm}^{2}+m^{2}+\frac{\lambda^{4}}{P_{\pm}^{2}+M^{2}}\right)+\ln \left(P_{0}^{2}+m^{2}+\frac{\lambda^{4}}{P_{0}^{2}+M^{2}}\right)\;, \tag{71}\]
and for SU(3) \(\frac{d-1}{2}\) times
\[2\ln\left(P_{0}^{2}+m^{2}+\frac{\lambda^{4}}{P_{0}^{2}+M^{2}}\right)+\ln \left(P_{\pm}^{2}+m^{2}+\frac{\lambda^{4}}{P_{\pm}^{2}+M^{2}}\right)+2\ln \left(P_{\pm\frac{1}{2}}^{2}+m^{2}+\frac{\lambda^{4}}{P_{\pm\frac{1}{2}}^{2}+ M^{2}}\right)\;. \tag{72}\]
To compute this, we see that
\[\operatorname{tr}\ln\left(P_{r}^{2}+m^{2}+\frac{\lambda^{4}}{P_{ r}^{2}+M^{2}}\right) =\operatorname{tr}\ln(P_{r}^{2}+z_{+})+\operatorname{tr}\ln(P_{r} ^{2}+z_{-})-\operatorname{tr}\ln(P_{r}^{2}+M^{2})\\ =\operatorname{tr}_{T=0}\ln\left(p^{2}+m^{2}+\frac{\lambda^{4}}{ p^{2}+M^{2}}\right)+I(z_{+},r,T)+I(z_{-},r,T)-I(M^{2},r,T)\;,\] (73a) where \[z_{\pm}=\frac{1}{2}\left(m^{2}+M^{2}\pm i\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}} \right)\;. \tag{73b}\]
### Extremization
Once we have computed the effective action, we solve the gap equation to find the Gribov parameter \(\lambda\) and minimize with respect to the condensates. The Gribov gap equation corresponds to finding a _maximum_, which means the final solution will be a saddle point in the four-dimensional space of the parameters. This complicates numerical minimization.
In order to find this saddle point, we found it most straightforward to use iteration. Starting from seed values for the parameters (obtained from extrapolating from previous data obtained at, for example, lower temperature), we first maximize with respect to the Gribov parameter, then minimize with respect to the other parameters, maximize with respect to the Gribov parameter again, etc. until successive steps do not lead to significant changes any longer. Then we move on to the next value of the temperature.
This iteration is sometimes unstable, and may diverge. We found this can be cured by "damping" the change in the Gribov parameter \(\lambda\) in successive steps. If \(\lambda_{\rm o}^{2}\) is the previous value (of the square) and \(\lambda_{\rm n}^{2}\) the newly obtained one, we use
\[\frac{a\lambda_{\rm o}^{2}+\lambda_{\rm n}^{2}}{a+1} \tag{74}\]
for the next value of \(\lambda^{2}\). Taking \(a=1\) often leads to fast convergence for low temperatures. In the deconfined phases, taking \(a=10\) or some such generally ensures convergence.
### Results in \(\bar{\varphi}\varphi\) case
With the \(\bar{\varphi}\varphi\) approach in SU(2) with the Polyakov loop in the \(\bar{A}^{h}\) approach we still did not find any phase transition even at \(T=1.3\,\mathrm{GeV}\) (see FIG. 1c),5 while \(\lambda\) goes to zero around \(0.32\,\mathrm{GeV}\) (see FIG. 1a). In the KR approach the same happens: \(\lambda\) goes to zero around \(0.34\,\mathrm{GeV}\) (see FIG. 2a), while the Polyakov loop still signals confinement around \(T=1.3\,\mathrm{GeV}\) (see FIG. 2c).
Footnote 5: In the \(\bar{\varphi}\varphi\) approach, the renormalization scale \(\bar{\mu}\) is usually held fixed to its zero temperature value. For temperatures higher than this value, we took \(\bar{\mu}=T\) instead, but keeping \(\bar{\mu}\) fixed did not give qualitatively different results.
This shows that the Gribov parameter is not really an order parameter for confinement in this case. The discrepancy is due to the difference in mass between \(\varphi\) and \(\omega\): these fields are supposed to have their determinants cancel, which does not happen here. If these determinants were to cancel, \(\lambda\to 0\) would bring us back to the Curci-Ferrari-type model considered in [10], where confinement is recovered for \(T=0.32\,\mathrm{GeV}\). Without this cancellation of the two
Figure 1: Some of the results obtained in the \(\bar{A}^{h}\) approach for the \(\bar{\varphi}\varphi\) case in SU(2). As the numerics are quite heavy, we did the computation for a smaller selection of temperatures. FIG. a shows the Gribov parameter \(\lambda\), which goes to zero at \(T\approx 0.32\,\mathrm{GeV}\). Not shown is the Polyakov loop \(r\), which is equal to \(\pi\) throughout. FIG. b shows how the \(\langle\bar{\varphi}\varphi\rangle\) condensate (proportional to the mass parameter \(M^{2}\)) starts a rapid increase after \(\lambda\) has gone to zero. (Points for temperatures beyond \(0.50\,\mathrm{GeV}\) fall outside the plot.) Also not shown is \(m\), which does not vary all that much in the temperature range shown. FIG. c finally shows the potential of the Polyakov loop \(r\) (keeping the other parameters fixed to the values they have in the minimum of the potential) for \(T=1.3\,\mathrm{GeV}\), showing clearly that \(r=\pi\) is still the minimum.
determinants, \(M^{2}\) increases without bound (see FIG. 1b and 2b) (while \(m^{2}\) shows only a modest increase) and this seems to drive \(r\) to \(\pi\).
To conclude, it appears that the \(\bar{\varphi}\varphi\) case is flawed and does not describe the physics well. Due to these shortcomings, we did not bother to investigate the (more involved) SU(3) theory.
### Results in symmetric case: \(\bar{A}^{h}\) approach
In the \(\bar{A}^{h}\) approach for the symmetric case, the determinants of the \(\varphi\) and \(\omega\) propagators cancel, such that \(r\) is not constant anymore. It turns out, however, that \(r\) starts increasing in value the moment temperature is switched on, see FIG. 3a and 4b. A value of \(r\)_higher_ than its confining value (called "overconfining" in the following) suggests the Polyakov loop itself is negative, or the quark free energy has an imaginary part.
For SU(2), this overconfining minimum persist for all the temperature values we investigated. For \(T>0.40\,\mathrm{GeV}\), we found a second "normal" deconfining solution. However, the energy in this minimum remains higher than the energy in the overconfining minimum, and the situation shows no signs of improving with increasing temperature, see FIG. 3b. Given the difficulty of finding this deconfining minimum, we cannot rule out the existence of additional minima. The second-order phase transition one expects in SU(2), where the confining minimum spontaneously "rolls" into the deconfining minimum, certainly does not happen though.
Figure 3: Some of the results obtained in the \(\bar{A}^{h}\) approach for the symmetric case in SU(2) for the two minima we found: the “overconfining” minimum (\(r>\pi\)) in a full line and the deconfining minimum in a dashed line. Not shown are Gribov parameter and the dimension-two condensates, which do not vary much and also do not differ much between the two vacua. FIG. a shows the Polyakov loop \(r\) as a function of temperature. Already at very small temperature, \(r>\pi\), which implies the quark free energy has an imaginary part. FIG. b shows the energy in the minima. The “overconfining” vacuum is preferred for the entire temperature range.
Figure 2: Some of the results obtained in the KR approach for the \(\bar{\varphi}\varphi\) case in SU(2). FIG. a shows the Gribov parameter \(\lambda\), which goes to zero at \(T\approx 0.34\,\mathrm{GeV}\). Not shown is the Polyakov loop \(r\), which is equal to \(\pi\) throughout. FIG. b shows how the \(\langle\bar{\varphi}\varphi\rangle\) condensate (proportional to the mass parameter \(M^{2}\)) starts a rapid increase after \(\lambda\) has gone to zero. Also not shown is \(m\), which does not vary all that much in the temperature range shown. FIG. c finally shows the potential of the Polyakov loop \(r\) (keeping the other parameters fixed to the values they have in the minimum of the potential) for \(T=1.3\,\mathrm{GeV}\), showing clearly that \(r=\pi\) is still the minimum.
For SU(3) as well, the Polyakov loop does not remain in its symmetric point \(r=4\pi/3\) already at low temperatures, see FIG. 4b. Instead it goes up to \(5.58\) at \(T=0.335\,\mathrm{GeV}\). This time we do find a transition at \(T_{c}=0.335\,\mathrm{GeV}\), see FIG. 4c, and \(r\) is good and well below \(4\pi/3\) after the transition, signaling deconfinement. The Gribov parameter \(\lambda\) goes _up_ when going through the transition, as seen in FIG. 4a.
We can conclude that the \(\bar{A}^{h}\) approach also has some flaws, indicated by the Polyakov loop \(r\) increasing in value rather than staying constant during what we would expect to be the confining phase. Furthermore we did not find any deconfined phase for SU(2) in the temperature range we investigated (until \(T=0.46\,\mathrm{GeV}\)), and the trends in the vacuum energies do not suggest a deconfined phase will soon be found for higher temperatures. Finally, the transition we did find for SU(3) is at a temperature much higher than found in other works. A lattice computation (see Table 6 in [1], taking for the string tension a typical value of \(\sqrt{\sigma}=0.44\,\mathrm{GeV}\), see [152] for more details) gives \(T_{c}=0.28\,\mathrm{GeV}\); other approaches usually find even lower values, see Table 6.1 in [133] for a selection.
One might speculate that the fact the above results are deviating so from what is expected, is related to the observation made in [118]: in principle, when we go on-shell in the \(h\)-sector via the \(\tau\)-equation of motion, the \(h\)-field must evidently be periodic, but up to a \(\mathbb{Z}_{N}\) twist. As of now, we have not been able to find a way to deal with the twisted sectors in the path integral, and we must restrict ourselves to a fixed twist sector.
### Results in symmetric case: KR approach
In the KR approach, the results are better. We find a second-order phase transition at \(T_{c}=0.34\,\mathrm{GeV}\) for SU(2), see FIG. 5b. This is not too far from the lattice result in Table 6 in [1]: \(0.31\,\mathrm{GeV}\). For SU(3), we found the transition at \(T_{c}=0.310\,\mathrm{GeV}\) (see FIG. 6b) and of first order (see FIG. 6b), again not too far from the lattice result of \(0.28\,\mathrm{GeV}\)[1]. The Gribov parameter \(\lambda\) again goes up when going through the SU(3) transition, as seen in FIG. 6a.
The existence and orders of the transitions are in line with expectations, now. The transition temperatures are still on the high side, however. We tried playing with the scale parameter \(\bar{\mu}\), but the results seem quite stable. We took \(\bar{\mu}^{2}\) equal to the value of \(m^{2}\) at zero temperature (for which the computations in Appendix C needed to be redone), which gave a smaller value of \(\bar{\mu}^{2}\) and thus a higher value of the coupling constant \(g^{2}\). We found a transition temperature of \(T_{c}=0.35\,\mathrm{GeV}\) for SU(2): barely higher. With a higher coupling constant one could expect the finite-temperature corrections (which are all of first order in the coupling) to become more important, thus speeding up the transition. But changing \(\bar{\mu}^{2}\) also modifies all the other zero-temperature parameters that enter the theory, and this seems to undo the effect.
In [118], Kroff and Reinosa also consider the introduction of different Gribov parameters in different color directions. In their paper, they find that doing so has a noteworthy impact on the transition temperature. We therefore also considered what they call the "partially degenerate" approach, where a "neutral" Gribov parameter \(\gamma_{0}\) is coupled to the gluon fields in the Casimir and a "charged" one \(\gamma_{\mathrm{ch}}\) is coupled to the other modes. For SU(2) the transition temperature comes down with about a fifth to \(T_{c}=0.27\,\mathrm{GeV}\) (see FIG. 7), while for SU(3) the temperature of the
Figure 4: Some of the results obtained in the \(\bar{A}^{h}\) approach for the symmetric case in SU(3) for the two minima we found: the “overconfining” minimum (\(r>4\pi/3=4.19\)) with dots and the deconfining minimum in plus signs. Again the numerics are quite heavy, so we did the computation for a smaller selection of temperatures. FIG. a shows the Gribov parameter, which jumps to higher values when entering the deconfined phase. Not shown are the dimension-two condensates, which do not vary much and also do not differ much between the two vacua. FIG. b shows the Polyakov loop \(r\) as a function of temperature. Already at very small temperatures, \(r>4\pi/3\), which implies the quark free energy has an imaginary part. FIG. c shows the energy in the minima with inset zoomed in on the transition.
(first-order) phase transition is between \(T=0.264\,\mathrm{GeV}\) and \(0.284\,\mathrm{GeV}\) (see FIG. 8). Probably related to the flatness of the potential, we are unable to find a numerically more precise estimate of the transition temperature for SU(3).
## VIII Conclusions
In this paper we studied the Refined Gribov-Zwanziger action for SU(2) and SU(3) gauge theories with the Polyakov loop coupled to it via the background field formalism. Doing so, we were able to compute the finite-temperature value of the Polyakov loop, the Gribov parameter, and the values of the dimension-two condensates simultaneously at the leading one-loop approximation.
We used several approaches. First there are two candidates for the Gribov auxiliary fields condensate that have been investigated in the past: \(\langle\bar{\varphi}\varphi-\bar{\omega}\omega\rangle\) and \(\langle\bar{\varphi}\varphi\rangle\)[70; 71; 76]. The second one has enjoyed relatively more attention up to now, but from our results it turns out that only the first one (the more symmetric one) leads to phenomenologically acceptable results at finite temperature, where the second one does not. We furthermore used two different proposals to add a gluon background field to the Gribov formalism. The one proposed by the authors in [117] turns out to have issues, whereas the one proposed by Kroff and Reinosa [118] gives the best results.
From the point of view of physics, we found a second-order deconfinement phase transition for SU(2) and a first-order transition for SU(3), provided we used the symmetric condensate \(\langle\bar{\varphi}\varphi-\bar{\omega}\omega\rangle\) and the Kroff-Reinosa approach. Just as in [137], the Gribov mass is nonzero at temperatures above \(T_{c}\), indicating that the gluon propagator still
Figure 5: Some of the results obtained in the KR approach for the symmetric case in SU(2). Not shown are the dimension-two condensates, which do not vary much and also do not change much through the transition. FIG. a shows the Gribov parameter \(\lambda\) and FIG. b shows the Polyakov loop \(r\) as a function of temperature. The second-order transition at \(T_{c}=0.34\,\mathrm{GeV}\) is clear in the sudden drop in \(r\).
Figure 6: Some of the results obtained in the KR approach for the symmetric case in SU(3) for the two minima we found: the confining minimum (\(r=4\pi/3=4.19\)) in a full line and the deconfining minimum in a dashed line. The deconfining minimum is very shallow right above the transition temperature, making the numerics very unstable. (Minimization often ends up in the confining minimum.) This has resulted in a small gap in the data. Not shown are the dimension-two condensates, which do not vary much and also do not change much through the transition. FIG. a shows the Gribov parameter \(\lambda\) and FIG. b shows the Polyakov loop \(r\) as a function of temperature. FIG. c shows the vacuum energy. Extrapolating the vacuum energy of the deconfined minimum gives a first-order transition temperature at \(T_{c}=0.310\,\mathrm{GeV}\).
violates positivity and as such it rather describes a quasiparticle than a "free" observable particle; see also [153, 33] for more on this.
Several improvements on the current setup can be proposed. First, one would expect the condensates to develop electric-magnetic asymmetries at finite temperature, in the vein of [154, 155]. This markedly complicates the computations, and previous work has shown that the results are not greatly impacted [10]. Another possibility is, naturally, to go to two-loop order. The Kroff-Reinosa approach is computationally the most elegant and simplest one, and luckily it turned out to be the best one phenomenologically as well. This allows one to hope that a two-loop computation would be tractable, although the two-loop generalization of [118] without any extra condensates is also still lacking. It would also be interesting to test in practice the argument in [118] that the KR model is renormalizable to two-loop order as well. A full BRST based analysis of this feature to all orders looks too ambitious given the presence of the non-local dressing factors as in (31c). Furthermore, it remains an open question how to split the path integration over the various twisted sectors when the auxiliary Stueckelberg-like \(h\)-field is brought on-shell.
## Acknowledgements
D.V. is grateful for the hospitality at KU Leuven, where parts of this work were done, made possible through the KU Leuven Senior Fellowship SF/19/008 and IF project C14/21/087. We would also like to thank Diego R. Granado for the check of the SU(3) calculus.
Figure 8: Some of the results obtained in the KR approach for the symmetric case in SU(3) for the partially degenerate approach to color-dependent Gribov parameters. Not shown are the dimension-two condensates, which do not vary much and also do not change much through the transition. We did not manage to find solutions between \(T=0.264\,\mathrm{GeV}\) and \(0.284\,\mathrm{GeV}\), due to the potential being nearly flat. As a result, we could only determine that the temperature of the (first-order) phase transition must be somewhere withing that range. FIG. a shows the Gribov parameters \(\lambda_{0}\) (upper line) and \(\lambda_{\mathrm{ch}}\) (lower line) and FIG. b shows the Polyakov loop \(r\) as a function of temperature. FIG. c shows the value of the effective potential.
Figure 7: Some of the results obtained in the KR approach for the symmetric case in SU(2) for the partially degenerate approach to color-dependent Gribov parameters. Not shown are the dimension-two condensates, which do not vary much and also do not change much through the transition. FIG. a shows the Gribov parameters \(\lambda_{0}\) (upper line) and \(\lambda_{\mathrm{ch}}\) (lower line) and FIG. b shows the Polyakov loop \(r\) as a function of temperature. The second-order transition is now at \(T_{c}=0.27\,\mathrm{GeV}\).
## Appendix A Proof of sufficiency of Gribov construction applied to \(-\bar{D}^{h}_{\mu}D^{h}_{\mu}\)
In this section, we prove that the modification of the Gribov-Zwanziger action as given in (22a) is sufficient to remove infinitesimal Gribov copies in the Landau-DeWitt gauge with background \(\bar{A}_{\mu}\).
The Faddeev-Popov operator in the Landau background gauge is \(-\bar{D}^{ab}_{\mu}D^{bc}_{\mu}\). As shown in [117], however, basing the Gribov construction on this operator leads to a breaking of background gauge symmetry \(\delta\bar{A}^{a}_{\mu}=\bar{D}^{ab}_{\mu}\bar{\phi}^{b}\) with \(\beta^{a}\) the gauge parameter. In [117], the operator \(-\partial_{\mu}(D^{h})^{ab}_{\mu}\) was proposed. In the case at hand, however, we have a nonzero \((\bar{A}^{h})^{a}_{\mu}\), such that we need to use \(-(\bar{D}^{h})^{ab}_{\mu}(D^{h})^{bc}_{\mu}\). (In this operator, the first covariant derivative contains the transformed background \((\bar{A}^{h})^{a}_{\mu}\), the second one contains the full field \((a^{h})^{a}_{\mu}\).)
Let us now prove that this is correct. To do so, let us use a shorthand notation from here on to avoid clutter of colour and Lorentz indices, writing \(-\bar{D}^{h}D^{h}\) and \(D=\partial+a\) etc. We want to prove that restricting the path integral to configurations with \(-\bar{D}^{h}D^{h}>0\) actually excludes (almost) all Gribov copies related to the zero modes of the Faddeev-Popov operator \(-\bar{D}D\). Given a configuration in the permissible space \(-\bar{D}^{h}D^{h}>0\), assume the exists a zero mode \(\xi\) of \(-\bar{D}D\):
\[-\bar{D}D\xi=0\;. \tag{104}\]
To prove that this implies \(\xi=0\), we will assume \(\xi\) can be written as a series in the background \(\xi=\sum_{n=0}^{\infty}\bar{A}^{n}\xi_{n}[\mathcal{A}]\). We can rewrite the equation for \(\xi\) as
\[-\bar{D}^{h}D^{h}\xi+\bar{A}^{h}D^{h}\xi-\bar{A}D\xi+\partial((a^{h}-a)\xi)=0\;. \tag{105}\]
Due to the assumed invertibility of \(-\bar{D}^{h}D^{h}\), this means that
\[\xi=\frac{1}{-\bar{D}^{h}D^{h}}\Big{(}-\bar{A}^{h}D^{h}\xi+\bar{A}D\xi- \partial((a^{h}-a)\xi)\Big{)}\;. \tag{106}\]
In the limit \(\bar{A}\to 0\), we have that \(\bar{A}^{h}\to 0\), such that \(\bar{A}^{h}=\mathcal{O}(\bar{A})\). Furthermore in the same limit the gauge condition for \(a^{h}\) becomes identical to that for \(a\), such that also \(a^{h}-a\to 0\). This means that the right-hand side of (106) starts at at least one order in \(\bar{A}\) higher than \(\xi\), which can never be equal to \(\xi\) except if \(\xi=0\). This concludes the proof that restricting the path integral to configurations with \(-\bar{D}^{h}D^{h}>0\) actually excludes all Gribov copies related to the zero modes of the Faddeev-Popov operator \(-\bar{D}D\) that are expressable as a Taylor series in the background field, _i.e._ that are continuous deformations around the zero background (standard Landau gauge).
This completes our proof.
## Appendix B The projection operator in equation (33)
We want to construct (in the notations of [118], see equation (26)) a background-gauge invariant equivalent to \(\gamma_{\kappa}^{2}f^{\kappa\lambda\eta}A^{\kappa}_{\mu}(\varphi^{\lambda\eta }_{\mu}+\bar{\varphi}^{\lambda\eta}_{\mu})\). Under background gauge transformations, one has
\[\delta\bar{A}^{a}_{\mu}=\bar{D}^{ab}_{\mu}\varpi^{b}\;, \tag{107a}\] \[\delta A^{a}_{\mu}=-gf^{abc}\varpi^{b}A^{c}_{\mu}\;, \tag{107b}\]
and transformations analogous to (107b) for \(\varphi\) and \(\bar{\varphi}\). In [118] the authors state that this is possible, but without showing explicitly how. If the background is a constant and the transformation brings it to another constant background (for example a gauge rotation) then the expression show in equation (26) in [118] is manifestly invariant provided we remember to redefine the indices. (The Greek color indices in [118] are defined with respect to the Casimir, where the background is assumed to be in.) To get invariance under general background transformations, we need to do more work.
We need to define a projection operator \(P^{ab}\) such that
\[f^{acd}P^{ab}A^{b}_{\mu}(\varphi^{cd}_{\mu}+\bar{\varphi}^{cd}_{\mu}) \tag{108}\]
is invariant. If the background is in the minimal Landau gauge, we want this projection operator to be equal to \(P^{ab}\to\bar{A}^{a}_{\mu}\bar{A}^{b}_{\mu}/\bar{A}^{2}\). In that case, the projector will pick out the color direction along the background, to which we couple one of the \(\gamma_{0}\)'s. For example in SU(2) there is only one Casimir direction and we can therefore use
\[\Big{(}\gamma_{0}P^{ab}+\gamma_{\rm ch}(\delta^{ab}-P^{ab})\Big{)}A^{b}_{\mu}\;, \tag{109}\]
In order to write down such a projector, we search for a field \(\bar{\mathcal{A}}^{a}_{\mu}\) such that \(\bar{\mathcal{A}}^{a}_{\mu}\) transforms as (B1b) under background transformations (\(\delta\bar{\mathcal{A}}^{a}_{\mu}=-gf^{abc}\varpi^{b}\bar{\mathcal{A}}^{c}_{\mu}\)) and also such that \(\bar{\mathcal{A}}^{a}_{\mu}\to\bar{A}^{a}_{\mu}\) whenever the background is in minimal Landau gauge. Then,
\[P^{ab}=\frac{\bar{\mathcal{A}}^{a}_{\mu}\bar{\mathcal{A}}^{b}_{\mu}}{\bar{ \mathcal{A}}^{2}} \tag{101}\]
fits the bill:
\[\delta P^{ab}=-g\frac{f^{acd}\varpi^{c}\bar{\mathcal{A}}^{d}_{\mu}\bar{ \mathcal{A}}^{b}_{\mu}+f^{bcd}\bar{\mathcal{A}}^{a}_{\mu}\varpi^{c}\bar{ \mathcal{A}}^{d}_{\mu}}{\bar{\mathcal{A}}^{2}}=-g(f^{acd}\delta^{be}+\delta^{ ad}f^{bce})\varpi^{c}P^{de}\quad\Rightarrow\quad\delta(P^{ab}A^{b}_{\mu})=-gf^{ abc}\varpi^{b}P^{cd}A^{d}_{\mu}\;, \tag{102}\]
which is sufficient for our needs.
Take the Ansatz
\[\bar{\mathcal{A}}^{a}_{\mu}=(\bar{A}^{h})^{a}_{\mu}+X^{a}_{\mu}\;.\] (103a) Expand the above in orders of \[\bar{A}^{a}_{\mu}\] : \[(\bar{A}^{h})^{a}_{\mu}=\left(\delta_{\mu\nu}-\tfrac{\partial_{\mu }\partial_{\nu}}{\partial^{2}}\right)\sum_{n=1}^{\infty}(\mathcal{F}_{n})^{a}_ {\nu}(\bar{A})\;, \tag{103b}\] \[X^{a}_{\mu}=\sum_{n=2}^{\infty}(\mathcal{G}_{n})^{a}_{\mu}(\bar{A })\;, \tag{103c}\]
where the index \(n\) denotes the number of \(\bar{A}^{a}_{\mu}\) fields. Given that \((\bar{A}^{h})^{a}_{\mu}\) is invariant under \(\delta\bar{A}^{a}_{\mu}=\bar{D}^{ab}_{\mu}\varpi^{b}\), we get
\[\delta\bar{\mathcal{A}}^{a}_{\mu}=\delta X^{a}_{\mu}=\sum_{n=2}^{\infty}\bar{ D}^{bc}_{\nu}\varpi^{c}\frac{\delta(\mathcal{G}_{n})^{a}_{\mu}}{\delta\bar{A}^{b} _{\nu}}(\bar{A})=\sum_{n=1}^{\infty}\partial_{\nu}\varpi^{b}\frac{\delta( \mathcal{G}_{n+1})^{a}_{\mu}}{\delta\bar{A}^{b}_{\nu}}(\bar{A})-gf^{bcd}\bar{ A}^{d}_{\nu}\varpi^{c}\sum_{n=2}^{\infty}\frac{\delta(\mathcal{G}_{n})^{a}_{ \mu}}{\delta\bar{A}^{b}_{\nu}}(\bar{A})\;. \tag{104}\]
Requiring \(\delta\bar{\mathcal{A}}^{a}_{\mu}=-gf^{abc}\varpi^{b}\bar{\mathcal{A}}^{c}_{\mu}\) and equating order by order in \(\bar{A}^{a}_{\mu}\) gives
\[-gf^{abc}\varpi^{b}\left(\delta_{\mu\nu}-\tfrac{\partial_{\mu} \partial_{\nu}}{\partial^{2}}\right)\sum_{n=1}^{\infty}(\mathcal{F}_{n})^{c}_ {\nu}(\bar{A})-gf^{abc}\varpi^{b}\sum_{n=2}^{\infty}(\mathcal{G}_{n})^{c}_{ \mu}(\bar{A})\\ =\sum_{n=1}^{\infty}\partial_{\nu}\varpi^{b}\frac{\delta(\mathcal{ G}_{n+1})^{a}_{\mu}}{\delta\bar{A}^{b}_{\nu}}(\bar{A})-gf^{bcd}\bar{A}^{d}_{\nu} \varpi^{c}\sum_{n=2}^{\infty}\frac{\delta(\mathcal{G}_{n})^{a}_{\mu}}{\delta \bar{A}^{b}_{\nu}}(\bar{A})\\ \Rightarrow\quad\partial_{\nu}\varpi^{b}\frac{\delta(\mathcal{G}_{n +1})^{a}_{\mu}}{\delta\bar{A}^{b}_{\nu}}(\bar{A})=-gf^{abc}\varpi^{b}\left( \delta_{\mu\nu}-\tfrac{\partial_{\mu}\partial_{\nu}}{\partial^{2}}\right)( \mathcal{F}_{n})^{c}_{\nu}(\bar{A})\\ -gf^{abc}\varpi^{b}(\mathcal{G}_{n})^{c}_{\mu}(\bar{A})+gf^{bcd} \bar{A}^{d}_{\nu}\varpi^{c}\frac{\delta(\mathcal{G}_{n})^{a}_{\mu}}{\delta \bar{A}^{b}_{\nu}}(\bar{A})\;. \tag{105}\]
For \(n=1\) one gets (with \(\mathcal{G}_{1}=0\)):
\[\partial_{\nu}\varpi^{b}\frac{\delta(\mathcal{G}_{2})^{a}_{\mu}}{\delta\bar{A} ^{b}_{\nu}}(\bar{A})=-gf^{abc}\varpi^{b}\left(\delta_{\mu\nu}-\tfrac{\partial _{\mu}\partial_{\nu}}{\partial^{2}}\right)\bar{A}^{c}_{\nu} \tag{106}\]
Given that
\[\partial_{\nu}\omega^{b}\frac{\delta}{\delta\bar{A}^{b}_{\nu}}\left(\delta_{ \mu\nu}-\tfrac{\partial_{\mu}\partial_{\nu}}{\partial^{2}}\right)\bar{A}^{a}_{ \nu}=\left(\delta_{\mu\nu}-\tfrac{\partial_{\mu}\partial_{\nu}}{\partial^{2}} \right)\partial_{\nu}\varpi^{a}=0\;, \tag{107}\]
we only need to multiply \(\left(\delta_{\mu\nu}-\tfrac{\partial_{\mu}\partial_{\nu}}{\partial^{2}}\right) \bar{A}^{b}_{\nu}\) with some expression \(Y^{ab}\) that obeys \(\partial_{\nu}\varpi^{b}\frac{\delta}{\delta A^{c}_{\nu}}Y^{ac}=-gf^{abc}\varpi^ {b}\). An obvious solutions is \(Y^{bc}=gf^{abc}\tfrac{\partial_{\mu}}{\partial^{2}}\bar{A}^{c}_{\lambda}\).
The cases for \(n>1\) are left as an exercise for the reader. The final result is (to second order in the background field):
\[\bar{\mathcal{A}}^{a}_{\mu}=\left(\delta_{\mu\nu}-\tfrac{\partial_{\mu} \partial_{\nu}}{\partial^{2}}\right)\left(\bar{A}^{a}_{\nu}-gf^{abc}\left(( \delta_{\nu\lambda}-\tfrac{1}{2}\tfrac{\partial_{\mu}\partial_{\lambda}}{ \partial^{2}})\bar{A}^{b}_{\lambda}\right)\tfrac{\partial_{\mu}}{\partial^{2}} \bar{A}^{c}_{\kappa}+\cdots\right)+gf^{abc}\left((\delta_{\mu\nu}-\tfrac{ \partial_{\mu}\partial_{\nu}}{\partial^{2}})\bar{A}^{b}_{\nu}\right)\tfrac{ \partial_{\lambda}}{\partial^{2}}\bar{A}^{c}_{\lambda}+\cdots \tag{108}\]
## Appendix C Free parameters in the symmetric approach
The free parameters of the \(\bar{\varphi}\varphi\) approach at zero temperature were computed in [116]. The symmetric approach has not yet been done, so we work it out in this Appendix.
The gap equation for \(\lambda^{4}\) is
\[\frac{5}{6}-\frac{4}{3}b_{0}-\frac{4(4\pi)^{2}}{3Ng^{2}}-\frac{1}{2}\log\frac{m ^{2}M^{2}+\lambda^{4}}{\bar{\mu}^{4}}-\frac{m^{2}+M^{2}}{\sqrt{4\lambda^{4}-( m^{2}-M^{2})^{2}}}\,\mathrm{arccot}\,\frac{m^{2}+M^{2}}{\sqrt{4\lambda^{4}-(m^{2}-M^{ 2})^{2}}}=0\;, \tag{116}\]
or, after plugging in the renormalization group, the poles, and our choice for \(\bar{\mu}^{2}\):
\[\frac{5}{6}-\frac{4}{3}b_{0}-\frac{44}{9}\ln\frac{\sqrt{x_{0}^{2}+y_{0}^{2}}}{ \Lambda_{\mathrm{MS}}^{2}}-\frac{x_{0}}{y_{0}}\,\mathrm{arccot}\,\frac{x_{0}} {y_{0}}=0\;. \tag{117}\]
This gives \(b_{0}=-8.49\) in SU(3) and \(-6.7\) in SU(2).
The equation for \(m^{2}\) is
\[\frac{2(m^{2}M^{2}+\lambda^{4})}{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^ {2}}}\,\mathrm{arccot}\,\frac{m^{2}+M^{2}}{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2 }}}\\ +m^{2}\left(\frac{1}{3}-\frac{6}{13}\frac{(4\pi)^{2}}{g^{2}N}- \frac{m^{2}+M^{2}}{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}\,\mathrm{arccot}\, \frac{m^{2}+M^{2}}{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}-\frac{1}{2}\ln\frac{ m^{2}M^{2}+\lambda^{4}}{\bar{\mu}^{4}}\right)=0\;, \tag{118}\]
or, after plugging in the renormalization group, the poles, and our choice for \(\bar{\mu}^{2}\):
\[\frac{x_{0}^{2}+y_{0}^{2}}{y_{0}}\,\mathrm{arccot}\,\frac{x_{0}}{y_{0}}+m^{2} \left(\frac{1}{3}-\frac{22}{13}\ln\frac{\sqrt{x_{0}^{2}+y_{0}^{2}}}{\Lambda_{ \mathrm{MS}}^{2}}-\frac{x_{0}}{y_{0}}\,\mathrm{arccot}\,\frac{x_{0}}{y_{0}} \right)=0\;. \tag{119}\]
This gives \(m^{2}=0.152\,\mathrm{GeV}^{2}\) in SU(3) and \(0.27\,\mathrm{GeV}^{2}\) in SU(2). Given that \(M^{2}=2x_{0}-m^{2}\), we also find \(M^{2}=0.370\,\mathrm{GeV}^{2}\) in SU(3) and \(0.31\,\mathrm{GeV}^{2}\) in SU(2). Given that \(\lambda^{4}=x_{0}^{2}+y_{0}^{2}-m^{2}M^{2}\) we also find \(\lambda^{2}=0.478\,\mathrm{GeV}^{2}\) in SU(3) and \(0.67\,\mathrm{GeV}^{2}\) in SU(2).
The equation for \(M^{2}\) is
\[\frac{2(m^{2}M^{2}+\lambda^{4})}{\sqrt{4\lambda^{4}-(m^{2}-M^{2}) ^{2}}}\,\mathrm{arccot}\,\frac{m^{2}+M^{2}}{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2 }}}\\ +M^{2}\left(-\frac{2}{3}\frac{(4\pi)^{2}}{N^{2}-1}\beta-\frac{m^{2 }+M^{2}}{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}\,\mathrm{arccot}\,\frac{m^{2} +M^{2}}{\sqrt{4\lambda^{4}-(m^{2}-M^{2})^{2}}}-\frac{1}{2}\ln\frac{m^{2}M^{2} +\lambda^{4}}{M^{4}}\right)=0\;, \tag{120}\]
or, after plugging in the poles:
\[\frac{x_{0}^{2}+y_{0}^{2}}{y_{0}}\,\mathrm{arccot}\,\frac{x_{0}}{y_{0}}+M^{2} \left(-\frac{2}{3}\frac{(4\pi)^{2}}{N^{2}-1}\beta-\frac{x_{0}}{y_{0}}\, \mathrm{arccot}\,\frac{x_{0}}{y_{0}}-\frac{1}{2}\ln\frac{x_{0}^{2}+y_{0}^{2}}{ M^{4}}\right)=0\;. \tag{121}\]
This gives \(\beta=0.0601\) in SU(3) and \(0.045\) in SU(2).
## Appendix D Conventions
### Su(2)
We define isospin eigenstates as
\[\mathsf{v}_{+}=\frac{1}{\sqrt{2}}\begin{pmatrix}i\\ 1\\ 0\end{pmatrix}\;,\qquad\mathsf{v}_{-}=\frac{1}{\sqrt{2}}\begin{pmatrix}i\\ -1\\ 0\end{pmatrix}\;,\qquad\mathsf{v}_{0}=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}\;. \tag{122}\]
We then have that
\[\mathbb{1}=\mathsf{v}_{+}\mathsf{v}_{+}^{\dagger}+\mathsf{v}_{-} \mathsf{v}_{-}^{\dagger}+\mathsf{v}_{3}\mathsf{v}_{3}^{\dagger} \tag{104a}\] \[\operatorname{tr}\mathsf{A}=\mathsf{v}_{+}^{\dagger}\mathsf{Av}_{+} +\mathsf{v}_{-}^{\dagger}\mathsf{Av}_{-}+\mathsf{v}_{3}^{\dagger}\mathsf{Av}_{3 }\;. \tag{104b}\]
If we define
\[\mathsf{s}^{ab}=i\epsilon^{ab3}\;,\qquad\mathsf{a}_{+}^{ab}=\epsilon^{ab1}-i \epsilon^{ab2}\;,\qquad\mathsf{a}_{-}^{ab}=-\epsilon^{ab1}-i\epsilon^{ab2}\;. \tag{105}\]
we have the commutation relations
\[[\mathsf{s},\mathsf{a}_{\pm}]=\pm\mathsf{a}_{\pm}\;,\qquad[\mathsf{a}_{+}, \mathsf{a}_{-}]=2\mathsf{s}\;, \tag{106}\]
and that
\[\mathsf{sv}_{3}=0\;,\qquad\mathsf{sv}_{s}=s\mathsf{v}_{s}\;, \tag{107a}\] \[\mathsf{a}_{+}\mathsf{v}_{+}=0\;,\qquad\mathsf{a}_{+}\mathsf{v}_ {0}=\sqrt{2}\mathsf{v}_{+}\;,\qquad\mathsf{a}_{+}\mathsf{v}_{-}=\sqrt{2} \mathsf{v}_{0}\;,\] (107b) \[\mathsf{a}_{-}\mathsf{v}_{+}=\sqrt{2}\mathsf{v}_{0}\;,\qquad \mathsf{a}_{-}\mathsf{v}_{0}=\sqrt{2}\mathsf{v}_{-}\;,\qquad\mathsf{a}_{-} \mathsf{v}_{-}=0\;. \tag{107c}\]
We also have
\[\epsilon^{ace}\mathcal{O}^{cd}\epsilon^{bde}=\left(\mathsf{s}\mathcal{O} \mathsf{s}+\tfrac{1}{2}\mathsf{a}_{+}\mathcal{O}\mathsf{a}_{-}+\tfrac{1}{2} \mathsf{a}_{-}\mathcal{O}\mathsf{a}_{+}\right)^{ab}\;. \tag{108}\]
### Su(3)
The structure constants of the Lie algebra of SU(3) are given by
\[f_{123}=1\;, \tag{109a}\] \[f_{147}=-f_{156}=f_{246}=f_{257}=f_{345}=-f_{367}=\frac{1}{2}\;,\] (110b) \[f_{458}=f_{678}=\frac{\sqrt{3}}{2}\;, \tag{110c}\]
while all other \(f_{abc}\) not related to these by permutation are zero. To avoid cluttered indices, define the matrices \((f^{a})_{bc}=f_{abc}\). Now define the following operators:
\[\mathsf{s}_{3}=if^{3}\;,\qquad\mathsf{s}_{8}=if^{8}\;, \tag{111a}\] \[\mathsf{a}_{1}^{\pm}=\pm f^{1}-if^{2}\;,\qquad\mathsf{a}_{2}^{\pm }=\pm f^{4}-if^{5}\;,\qquad\mathsf{a}_{3}^{\pm}=\pm f^{6}-if^{7}\;. \tag{111b}\]
These obey the commutation relations:
\[[\mathsf{s}_{3},\mathsf{s}_{8}]=0\;, \tag{112a}\] \[[\mathsf{s}_{3},\mathsf{a}_{1}^{\pm}]=\pm\mathsf{a}_{1}^{\pm}\;, \qquad[\mathsf{s}_{8},\mathsf{a}_{1}^{\pm}]=0\;,\] (112b) \[[\mathsf{s}_{3},\mathsf{a}_{2}^{\pm}]=\pm\tfrac{1}{2}\mathsf{a}_{ 2}^{\pm}\;,\qquad[\mathsf{s}_{8},\mathsf{a}_{2}^{\pm}]=\pm\tfrac{\sqrt{3}}{2} \mathsf{a}_{2}^{\pm}\;,\] (112c) \[[\mathsf{s}_{3},\mathsf{a}_{3}^{\pm}]=\mp\tfrac{1}{2}\mathsf{a}_{ 3}^{\pm}\;,\qquad[\mathsf{s}_{8},\mathsf{a}_{3}^{\pm}]=\pm\tfrac{\sqrt{3}}{2} \mathsf{a}_{3}^{\pm}\;,\] (112d) \[[\mathsf{a}_{1}^{+},\mathsf{a}_{1}^{-}]=2\mathsf{s}_{3}\;,\qquad[ \mathsf{a}_{2}^{+},\mathsf{a}_{2}^{-}]=\mathsf{s}_{3}+\sqrt{3}\mathsf{s}_{8} \;,\qquad[\mathsf{a}_{3}^{+},\mathsf{a}_{3}^{-}]=-\mathsf{s}_{3}+\sqrt{3} \mathsf{s}_{8}\;,\] (112e) \[[\mathsf{a}_{1}^{\pm},\mathsf{a}_{2}^{\pm}]=0\;,\qquad[\mathsf{a}_ {1}^{\pm},\mathsf{a}_{2}^{\mp}]=-i\mathsf{a}_{3}^{\mp}\;,\] (112f) \[[\mathsf{a}_{1}^{\pm},\mathsf{a}_{3}^{\pm}]=-i\mathsf{a}_{2}^{\pm}\;, \qquad[\mathsf{a}_{1}^{\pm},\mathsf{a}_{3}^{\mp}]=0\;,\] (112g) \[[\mathsf{a}_{2}^{\pm},\mathsf{a}_{3}^{\pm}]=0\;,\qquad[\mathsf{a}_ {2}^{\pm},\mathsf{a}_{3}^{\mp}]=-i\mathsf{a}_{1}^{\pm}\;. \tag{112h}\]
Next, define the following vectors:
\[\mathsf{v}_{3}^{a}=\delta^{a3}\;,\qquad\mathsf{v}_{8}^{a}=\delta^{a8}\;, \tag{113a}\] \[(\mathsf{v}_{1}^{\pm})^{a}=\frac{1}{\sqrt{2}}(i\delta^{a1}\pm \delta^{a2})\;,\qquad(\mathsf{v}_{2}^{\pm})^{a}=\frac{1}{\sqrt{2}}(i\delta^{a 4}\pm\delta^{a5})\;,\qquad(\mathsf{v}_{3}^{\pm})^{a}=\frac{1}{\sqrt{2}}(i\delta^{ a6}\pm\delta^{a7})\;. \tag{113b}\]
We have the following operations:
\[\begin{array}{c|c|c|c|c|c|c|c|c}&\mathsf{s}_{3}&\mathsf{s}_{8}&\mathsf{a}_{1}^{+ }&\mathsf{a}_{1}^{-}&\mathsf{a}_{2}^{+}&\mathsf{a}_{2}^{-}&\mathsf{a}_{3}^{+}& \mathsf{a}_{3}^{-}\\ \hline\mathsf{v}_{3}&0&0&\sqrt{2}\mathsf{v}_{1}^{+}&\sqrt{2}\mathsf{v}_{1}^{-}& \frac{1}{\sqrt{2}}\mathsf{v}_{2}^{+}&\frac{1}{\sqrt{2}}\mathsf{v}_{2}^{-}&- \frac{1}{\sqrt{2}}\mathsf{v}_{3}^{+}&-\frac{1}{\sqrt{2}}\mathsf{v}_{3}^{-}\\ \hline\mathsf{v}_{8}&0&0&0&0&\sqrt{\tfrac{3}{2}\mathsf{v}_{2}^{+}}&\sqrt{\tfrac {3}{2}\mathsf{v}_{2}^{-}}&\sqrt{\tfrac{3}{2}\mathsf{v}_{3}^{+}}&\sqrt{\tfrac{3} {2}\mathsf{v}_{3}^{-}}\\ \hline\mathsf{v}_{1}^{+}&\mathsf{v}_{1}^{+}&0&0&\sqrt{2}\mathsf{v}_{3}&0&i \mathsf{v}_{3}^{+}&i\mathsf{v}_{2}^{+}&0\\ \hline\mathsf{v}_{1}^{-}&-\mathsf{v}_{1}^{-}&0&\sqrt{2}\mathsf{v}_{3}&0&i \mathsf{v}_{3}^{+}&0&0&i\mathsf{v}_{2}^{-}\\ \hline\mathsf{v}_{2}^{+}&\frac{1}{2}\mathsf{v}_{2}^{+}&\frac{\sqrt{3}}{2} \mathsf{v}_{2}^{+}&0&i\mathsf{v}_{3}^{+}&0&\frac{1}{\sqrt{2}}(\mathsf{v}_{3}+ \sqrt{3}\mathsf{v}_{8})&0&-i\mathsf{v}_{1}^{+}\\ \hline\mathsf{v}_{2}^{-}&-\frac{1}{2}\mathsf{v}_{2}^{-}&-\frac{\sqrt{3}}{2} \mathsf{v}_{2}^{-}&i\mathsf{v}_{3}^{-}&0&\frac{1}{\sqrt{2}}(\mathsf{v}_{3}+ \sqrt{3}\mathsf{v}_{8})&0&-i\mathsf{v}_{1}^{-}&0\\ \hline\mathsf{v}_{3}^{+}&-\frac{1}{2}\mathsf{v}_{3}^{+}&\frac{2}{2}\mathsf{v} _{3}^{+}&-i\mathsf{v}_{2}^{+}&0&0&-i\mathsf{v}_{1}^{-}&0&\frac{1}{\sqrt{2}}(- \mathsf{v}_{3}+\sqrt{3}\mathsf{v}_{8})\\ \hline\mathsf{v}_{3}^{-}&-\frac{1}{2}\mathsf{v}_{3}^{-}&-\frac{\sqrt{3}}{2} \mathsf{v}_{3}^{-}&0&-i\mathsf{v}_{2}^{-}&-i\mathsf{v}_{1}^{+}&0&\frac{1}{ \sqrt{2}}(-\mathsf{v}_{3}+\sqrt{3}\mathsf{v}_{8})&0\\ \hline\end{array}\]
As a result, the \(\mathsf{a}\)'s function as ladder operators:
\[\mathsf{a}_{1}^{\pm}\ : 0\leftarrow\mathsf{v}_{1}^{-}\leftrightarrow\mathsf{v}_{3} \leftrightarrow\mathsf{v}_{1}^{+}\to 0\,\quad 0\leftarrow\mathsf{v}_{2}^{-} \leftrightarrow\mathsf{v}_{3}^{-}\to 0\,\quad 0\leftarrow\mathsf{v}_{3}^{+} \leftrightarrow\mathsf{v}_{2}^{+}\to 0\,\quad 0\leftarrow\mathsf{v}_{8}\to 0\, \tag{11a}\] \[\mathsf{a}_{2}^{\pm}\ : 0\leftarrow\mathsf{v}_{2}^{-}\leftrightarrow(\mathsf{v}_{3}, \mathsf{v}_{8})\leftrightarrow\mathsf{v}_{2}^{+}\to 0\,\quad 0\gets\mathsf{v}_{1}^{-} \leftrightarrow\mathsf{v}_{3}^{+}\to 0\,\quad 0\leftarrow\mathsf{v}_{3}^{-} \leftrightarrow\mathsf{v}_{1}^{+}\to 0\,\] (11b) \[\mathsf{a}_{3}^{\pm}\ : 0\leftarrow\mathsf{v}_{3}^{-}\leftrightarrow(\mathsf{v}_{3}, \mathsf{v}_{8})\leftrightarrow\mathsf{v}_{3}^{+}\to 0\,\quad 0\leftarrow\mathsf{v}_{2}^{-} \leftrightarrow\mathsf{v}_{1}^{-}\to 0\,\quad 0\leftarrow\mathsf{v}_{1}^{+} \leftrightarrow\mathsf{v}_{2}^{+}\to 0\, \tag{11c}\]
where the plus operators work to the right and the minus operators to the left.
Now consider the operator \(f_{ace}\mathcal{O}^{cd}f_{dbe}\). In the above notations, this gives
\[\left(\mathsf{s}_{3}\mathcal{O}\mathsf{s}_{3}+\mathsf{s}_{8}\mathcal{O}\mathsf{ s}_{8}+\tfrac{1}{2}\mathsf{a}_{i}^{+}\mathcal{O}\mathsf{a}_{i}^{-}+\tfrac{1}{2} \mathsf{a}_{i}^{-}\mathcal{O}\mathsf{a}_{i}^{+}\right)_{ab}. \tag{12}\]
Assuming \(\mathcal{O}^{ab}\) to be diagonal in the above basis, the operator under consideration is also diagonal in the \(\mathsf{v}_{i}^{\pm}\) subspace with eigenvalues
\[\mathsf{v}_{1}^{\pm}\ : \mathcal{O}_{3}+\mathcal{O}_{1}^{\pm}+\frac{1}{2}\mathcal{O}_{2}^ {\pm}+\frac{1}{2}\mathcal{O}_{3}^{\mp}\, \tag{13a}\] \[\mathsf{v}_{2}^{\pm}\ : \frac{1}{4}\mathcal{O}_{3}+\frac{3}{4}\mathcal{O}_{8}+\frac{1}{2} \mathcal{O}_{1}^{\pm}+\mathcal{O}_{2}^{\pm}+\frac{1}{2}\mathcal{O}_{3}^{\pm}\,\] (13b) \[\mathsf{v}_{3}^{\pm}\ : \frac{1}{4}\mathcal{O}_{3}+\frac{3}{4}\mathcal{O}_{8}+\frac{1}{2} \mathcal{O}_{1}^{\mp}+\frac{1}{2}\mathcal{O}_{2}^{\pm}+\mathcal{O}_{3}^{\pm}. \tag{13c}\]
In the \(\mathsf{v}_{3,8}\) subspace, the operator under consideration has the following form:
\[\frac{1}{4}\begin{pmatrix}4\mathcal{O}_{1}^{+}+4\mathcal{O}_{1}^{-}+\mathcal{O}_{ 2}^{+}+\mathcal{O}_{2}^{-}+\mathcal{O}_{3}^{+}+\mathcal{O}_{3}^{-}&\sqrt{3}( \mathcal{O}_{2}^{+}+\mathcal{O}_{2}^{-}-\mathcal{O}_{3}^{+}-\mathcal{O}_{3}^{-}) \\ \sqrt{3}(\mathcal{O}_{2}^{+}+\mathcal{O}_{2}^{-}-\mathcal{O}_{3}^{+}-\mathcal{O}_ {3}^{-})&3(\mathcal{O}_{2}^{+}+\mathcal{O}_{2}^{-}+\mathcal{O}_{3}^{+}+ \mathcal{O}_{3}^{-})\end{pmatrix}. \tag{14}\]
In our case we will have that \(\mathcal{O}_{2}^{+}+\mathcal{O}_{2}^{-}=\mathcal{O}_{3}^{+}+\mathcal{O}_{3}^{-}\), such that this part is also diagonal.
## Appendix E Sums at finite temperature
In this Appendix, all integrals and sums are assumed to be part of suitably regularized multidimensional integrals, such that we do not need to care about convergence.
Consider the most general (up to a multiplicative constant) second-order polynomial \(z^{2}+az+b\) with complex conjugate (nonreal) roots. We have that
\[T\sum_{n=-\infty}^{+\infty}\ln\left((2\pi nT)^{2}+a(2\pi nT)+b\right)=\int_{- \infty}^{+\infty}\frac{dp}{2\pi}\ln(p^{2}+ap+b)+T\ln(1-e^{\frac{i}{2}z_{+}})(1-e^{- \frac{i}{2}z_{-}})\, \tag{15}\]
where \(z_{\pm}=-\frac{a}{2}\pm i\sqrt{b-\frac{a^{2}}{4}}\), the roots of the polynomial. In the case considered in this paper, the polynomials under consideration are of the form \((z+\alpha T)^{2}+\beta\). In this case we find
\[T\sum_{n=-\infty}^{+\infty}\ln\left((2\pi n+\alpha)^{2}T^{2}+\beta\right)=\int_{- \infty}^{+\infty}\frac{dp}{2\pi}\ln(p^{2}+\beta)+T\ln(1-2e^{-\sqrt{\beta}/T}\cos \alpha+e^{-2\sqrt{\beta}/T})\, \tag{16}\]
where we performed a shift \(p\to p-\alpha T\) in the integral at the right. Using the notation (59), we can write
\[T\int\frac{d^{3}p}{(2\pi)^{3}}\sum_{n=-\infty}^{+\infty}\ln\Big{(}(2\pi n+\alpha )^{2}T^{2}+\vec{p}^{2}+\beta\Big{)}=\int\frac{d^{4}p}{(2\pi)^{4}}\ln(p^{2}+ \beta)+I(\beta,\alpha,T)\;. \tag{104}\]
If we start from an arbitrary polynomial function \(P(z)\) with two-by-two complex conjugate zeros and with the coefficient of the term with highest power equal to one, we have that
\[T\sum_{n=-\infty}^{+\infty}\ln P(2\pi nT)=\int_{-\infty}^{+\infty}\frac{dp}{2 \pi}\ln P(p)+T\ln\left(\prod_{z_{0}}(1-e^{\mathrm{sgn}(\Im(z_{0}))\frac{z}{T} \cdot z_{0}})\right)\;, \tag{105}\]
where the product goes over all zeros \(z_{0}\) of the polynomial \(P(z)\), and \(\mathrm{sgn}(\Im(z_{0}))\) is the sign of the imaginary part of the zero. The roots of a polynomial can be easily found numerically, making numeric evaluation straightforward.
|
2309.01755 | Flat-band and multi-dimensional fermions in Pb10(PO4)6O4 | Employing a combination of first-principles calculations and low-energy
effective models, we present a comprehensive investigation on the electronic
structure of Pb$_{10}$(PO$_{4}$)$_{6}$O$_{4}$, which exhibits remarkable
quasi-one-dimensional flat-band around the Fermi level that contains novel
multi-dimensional fermions. These flat bands predominantly originate from
$p_x/p_y$ orbital of the oxygen molecules chain at $4e$ Wyckoff positions, and
thus can be well-captured by a four-band tight-binding model. Furthermore, the
abundant crystal symmetry inherent to Pb$_{10}$(PO$_{4}$)$_{6}$O$_{4}$ provides
an ideal platform for the emergence of various multi-dimensional fermions,
including a 0D four-fold degenerated Dirac fermion with quadratic dispersion, a
1D quadratic/linear nodal-line (QNL/LNL) fermion along symmetric $k$-paths, 1D
hourglass nodal-line (HNL) fermion linked to the Dirac fermion, and a 2D
symmetry-enforced nodal surface (NS) found on the $k_z$=$\pi$ plane. Moreover,
when considering the weak ferromagnetic order, Pb$_{10}$(PO$_{4}$)$_{6}$O$_{4}$
transforms into a rare semi-half-metal, which is characterized by the presence
of Dirac fermion and HNL fermion at the Fermi level for a single spin channel
exhibiting 100$\%$ spin polarization. Our findings reveal the coexistence of
flat bands, diverse topological semimetal states and ferromagnetism within in
Pb$_{10}$(PO$_{4}$)$_{6}$O$_{4}$, which may provide valuable insights for
further exploring intriguing interplay between superconductivity and exotic
electronic states. | Botao Fu, Qin He, Xiao-Ping Li | 2023-09-04T18:27:15Z | http://arxiv.org/abs/2309.01755v1 | # Flat-band and multi-dimensional fermions in Pb\({}_{10}\)(Po\({}_{4}\))\({}_{6}\)O\({}_{4}\)
###### Abstract
Employing a combination of first-principles calculations and low-energy effective models, we present a comprehensive investigation on the electronic structure of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\), which exhibits remarkable quasi-one-dimensional flat-band around the Fermi level that contains novel multi-dimensional fermions. These flat bands predominantly originate from \(p_{x}/p_{y}\) orbital of the oxygen molecules chain at \(4e\) Wyckoff positions, and thus can be well-captured by a four-band tight-binding model. Furthermore, the abundant crystal symmetry inherent to Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) provides an ideal platform for the emergence of various multi-dimensional fermions, including a 0D four-fold degenerated Dirac fermion with quadratic dispersion, a 1D quadratic/linear nodal-line (QNL/LNL) fermion along symmetric \(k\)-paths, 1D hourglass nodal-line (HNL) fermion linked to the Dirac fermion, and a 2D symmetry-enforced nodal surface (NS) found on the \(k_{z}\)=\(\pi\) plane. Moreover, when considering the weak ferromagnetic order, Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) transforms into a rare semi-half-metal, which is characterized by the presence of Dirac fermion and HNL fermion at the Fermi level for a single spin channel exhibiting 100% spin polarization. Our findings reveal the coexistence of flat bands, diverse topological semimetal states and ferromagnetism within in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\), which may provide valuable insights for further exploring intriguing interplay between superconductivity and exotic electronic states.
## I Introduction
Recently, attention has turned to a new class of materials with unique structures, exemplified by the ground-breaking discovery of Cu-doped lead-apatite (LK-99), Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, which has been reported as the first room-temperature ambient-pressure superconductor[1; 2]. Unfortunately, as more precise experimental observations accumulate[3], the mystery surrounding superconductivity in LK-99 is gradually unraveling, and researchers are increasingly inclined to refute the notion of room-temperature superconductivity in LK-99. Nevertheless, this system is very interesting in its own electronic structure. The structure is based on the P63/m-Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O structure model as refined by Krivovichev and Burns in 2003[4]. The Apaites[5] belongs to a group of materials with the general formula A\({}_{10}\)(TO\({}_{4}\))\({}_{6}\)X\({}_{2+x}\), where A is the alkaline or rare earth metal, T=Ge, Si, or P, and X=halide, O or OH. The parent material, Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, can be modulated through various factors, such as element substitutions[6; 7; 8] and concentration adjustments[9]. Consequently, the investigation of derivatives of these materials becomes highly significant.
On the other side, recently, an array of diverse topological quantum states, including Weyl/Dirac semimetals[10; 11; 12; 13; 14; 15] and nodal-line semimetals[16; 17; 18] have been experimentally observed in several superconducting systems such as ZrTe\({}_{5}\), TaN-series, TaIrTe\({}_{4}\), SnSe and YPtBi, etc. What's more, the interaction between these emergent topological non-trivial electronic states and superconductivity has been recognized as a promising platform for the realization of exotic quasiparticles, holding great promise for applications in high-speed electronics and topological quantum computing[19; 20; 21; 22].
Previous theoretical work has discussed the possibility of placing O-atom at any one of the \(4e\) Wyckoff Position (WP), which agree that Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O is a insulator[23; 24]. The compound is non-magnetic and it also has no spin polarization properties[25]. However, strictly speaking, replacing the O atoms are 1/4 occupied with one of the four equivalent positions, would break the crystals original high symmetry, and they have found it plays a crucial role in modulating the electronic transport properties along the chain direction. T. Kun et al. predicted that the Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\), with fully occupied \(4e\) positions, belongs to metal and they suggested that the introduction of oxygen vacancies would lead to the disappearance of the flat bands in the parent material[26]. While K. Ogawa et al. conducted investigations into the reaction thermodynamics of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{x}\), where \(x\) ranges from 0 to 4, thereby proposing potential synthesis conditions for varying oxygen concentrations[9]. Despite extensive research efforts[23; 24; 25; 26; 27; 28; 29; 30], few attention have been payed to the Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\), especially regarding its underlying topological semimetal state within the electronic structure and unexplored magnetic properties.
In this work, combining the first-principles calculations and low-energy effective models, we conduct a comprehensive investigation on the bandstructure, topological properties and magnetism of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) with with oxygen fully occupied \(4e\) positions. First, we reveal origin of quasi-one-dimensional flat-band around Fermi level by DFT calculation and tight-binding model. Moreover, via symmetry analysis and effective \(k\)-\(p\) models, we uncover the coexistence of multi-dimensional fermions around Fermi level, including 0D quadratic Dirac fermion
at A point, 1D QNL/LNL along \(\Gamma\)A/HK path, 1D HNL on ALMI plane, and 2D NS on \(k_{z}\)=\(\pi\) plane. In addition, the oxygen molecules chains contribute a weak ferromagnetic order, thus Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) becomes rare semi-half-metal, simultaneously hosting 100% spin polarization and quadratic Dirac and HNL fermions at the Fermi level.
## II Computational methods
Structural optimization and electronic structure calculations were carried out by using Vienna _ab-initio_ simulation package (VASP)[31; 32] with Perdew-Burke-Ernzerhof parameterized generalized gradient approximation (PBE-GGA) [33]. The energy cutoff of 500 eV and a _k_-point mesh of 5\(\times\)5\(\times\)7 are chosen. The ionic relaxations were performed until the force on each atom was less than 0.01 eV/A and convergence criterion for the self-consistent electronic minimization loop was set to 10\({}^{-6}\) eV. To analyze the electronic topological properties, the tight-binding (TB) Hamiltonian was constructed via MagneticTB code[34], and surface states were calculated using the iterative Green's function method as implemented in the WannierTools package[35].
## III Results and analysis
### Crystal structure and chemical bonding of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\)
The Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) has the same crystal structure with experimentally synthesized[1] Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O with the space group of 176 (P6\({}_{3}\)/m). The optimized lattice constants are _a=b=_10.151 A and _c=_7.367 A, slightly larger than those of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O. The schematic structure is depicted in Fig. 1(a)-(c). In Fig. 1(a), six phosphorus atoms are situated at the Wyckoff position (WP) of 6\(h\) (0.5912,0.9643, 0.75) and are bonded to four nearest neighboring oxygen atoms, forming six [PO4] tetrahedra. The remaining four oxygen atoms uniformly occupy the 4\(e\) WP (0, 0, 0.6571), arranged in a row and neatly positioned on the rhombus of the unit cell. As shown in Fig. 1(b), the ten lead atoms can be categorized into two groups: the first group is at the WP of 6\(h\) (0.2515,0.9882, 0.75), forming a regular hexagon centered around the oxygen atoms on the vertical edge, the second group of lead atoms is located at 4\(f\) (1/3, 2/3, 0.4955), comprising two two layers as depicted in Fig. 1(c). These lead atoms are surrounded by those oxygen atoms from the [PO\({}_{4}\)] tetrahedra, with each lead atom coordinated by six oxygen atoms, exhibiting bonding characteristics reminiscent of H-MoS\({}_{2}\).
The bonding configurations are vividly visualized through calculating the electron localization function (ELF)[36] in Fig. 1(d). In the [PO4] tetrahedron, covalent bonds are observed between P and O atoms, with a bond length of 1.567 A. Notably, the P atom has no remaining lone pair electrons, indicating that all its valence electrons are engaged in bonding. Along the vertical edge of the oxygen atom chain, the intra- and inter-molecular distances are 1.721 A and 1.994 A, respectively. The ELF clearly reveals the presence of covalent bonds between the O atoms within the molecule. Additionally, a pair of lone pairs with the characteristics of \(p_{x,y}\)-\(\pi^{*}\) anti-bonding orbitals of O\({}_{2}\) molecular are clearly observed. Furthermore, Bader charge analysis indicates that Pb1 and Pb2 atoms lose 2.59 and 2.60 electrons, respectively, confirming the ionic bonding between Pb and [PO\({}_{4}\)], which is consistent with the above analysis.
### Quasi-one-dimensional band structure of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\)
The electronic structure, as shown in Fig. 2(a)-(c), reveals that Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) exhibits metallic properties. Close to the Fermi level, we observe four bands that are well-separated from the rest bands, hosting pseudo band gaps above and below them. Interestingly, in the \(k_{z}=0\) and \(k_{z}=\pi\) planes, these four bands exhibit strong anisotropic dispersion, where they display nearly flat dispersion along \(k_{x}/k_{y}\) direction, known as "flat bands", while showing strong dispersion along the \(k_{z}\) direction. To provide a clearer visualization of these flat bands, we present a 3D view of bands at \(k_{z}=0\) and \(\pi\) planes in Fig. 2(d) and (e). At \(k_{z}=0\) plane, the two conduction bands (blue color) demonstrate minimal dispersion with a extremely narrow band width of only 5 meV, indicating a perfect flat band characteristic. Similarly, the two valence bands (cyan color) also exhibit weak dispersion with a band width of approximately 60 meV. Moving
Figure 1: Crystal structure of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) with bird view (a), top view (b) and side view (c). (d) The ELF of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\).
to the \(k_{z}=\pi\) plane, the four bands become in pairs, intertwined together. Here too, they display very weak dispersion with a band width of around 23 meV.
To delve deeper into the origins of these distinctive band structures, we conducted an orbital projection analysis. It's discovered that the conduction bands (\(\geq\)3 eV) mainly arise from the \(p\)-orbitals of Pb and the \(s\)-orbitals of O. Conversely, the valence bands around and below the Fermi level are primarily contributed by the \(s\)-orbitals of Pb and the \(p\)-orbitals of O. These can be understood as follows: Pb atom loses electron from 6\(p\) orbitals, resulting in a substantial contribution to the conduction bands. Simultaneously, the \(s\)-orbitals of Pb are occupied by electrons, and this contributes to the formation of the valence bands below the Fermi level (-3 eV to -1 eV) as depicted in Fig. 2(c). O atom gains electrons, and hence, its \(p\)-orbitals mainly contribute to the below and around the Fermi level in Fig. 2(b).
Particularly, through charge density calculations in supplementary materials, we found that the four energy bands near the Fermi level primarily originate from the anti-bonding molecular orbital of the oxygen at 4\(e\) WP positions:
\[|1,\pi^{*}_{p_{x}}>,|1,\pi^{*}_{p_{y}}>,|2,\pi^{*}_{p_{x}}>,|2,\pi^{*}_{p_{y}}>, \tag{1}\]
the 1 and 2 refers two O\({}_{2}\) molecular, the \(\pi^{*}_{p_{x}}\) stands for the anti \(\pi\)-bond formed by \(p_{x,y}\)-orbital of O-atoms. Along \(z\) direction, the Hamiltonian is written as,
\[\mathcal{H}(k_{z})=e_{0}\tau_{0}\sigma_{0}+t[(1+cos(k_{z})\tau_{x})+sin(k_{z}) \tau_{y}]\sigma_{x}. \tag{2}\]
The \(\mathbf{\sigma}\) and \(\mathbf{\tau}\) represent intra- and inter-molecule degrees of freedom while the \(e_{0}\) is the molecular orbital energy level and the \(t\) is the coupling strength between two molecules. It gives energy spectrum along \(k_{z}\) as \(E_{\pm}=e_{0}\pm 2tcos(k_{z}/2)\) with a gap of 2\(t\) at \(k_{z}\)=0. As \(k_{z}\) increases, these bands gradually approach each other, eventually crossing at \(k_{z}\)=\(\pi\), in consistent with the DFT results in Fig. 3(b).
Previous structural analysis already reveals that these four oxygen atoms form an effectively one-dimensional oxygen molecule chain along the \(z\) direction, with a large intermolecular distance (9.865 A), resulting in very weak in-plane coupling and non-dispersive flat band. As shown in Fig. 2(f), this quasi-one-dimensional electronic structure is also reflected from the Fermi surface, where the Fermi surface of the system consists of two parallel planes, providing perfectly Fermi surface nesting with a vector of (0, 0, 0.517).
Multi-dimensional fermions embedded within the flat band of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\)
Because electrons near the Fermi level play a dominant role in transport properties, we will conduct an in-depth analysis of the four energy bands in close proximity to
Figure 2: (a)-(c) The band structures of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) with different atomic orbital projections. The 3D view of four bands around Fermi level at \(k_{z}=0\) plane in (d) and at \(k_{z}=\pi\) plane in (e). (f) The Fermi surface of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\).
the Fermi level through through first-principles calculations, symmetry analysis, and low-energy effective models. A magnified view of the band structure around the Fermi level is depicted in Fig. 3(a). Firstly, within the \(k_{z}=0\) plane, these four energy bands can be categorized into two distinct groups. The lower-energy bands represent bonding states between oxygen molecules, while the higher-energy bands correspond to anti-bonding states. The separation between these two sets of bands is dictated by the strength of coupling between molecules along the \(z\)-direction, whereas the dispersion within each set is influenced by the in-plane coupling between O\({}_{2}\) molecules. Despite the relatively weak dispersion observed in the in-plane bands, meticulous calculations unveil that the two valence bands mutually interact within the energy range of -0.6 eV to -0.5 eV, resulting in a double degeneracy both at \(\Gamma\) and K points. Notably, at the \(\Gamma\) point, there is quadratic dispersion in the \(k_{x}\)-\(k_{y}\) plane, while at K point, a linear dispersion is evident. Upon closer examination of the two conduction bands, a similar double degeneracy is observed at both the \(\Gamma\) and K points.
Even more intriguingly, these instances of 2-fold degeneracy are not isolated. Actually, they persist consistently along the \(k_{z}\) direction, resulting in the creation of a quadratic nodal-line (QNL) [37] along the \(\Gamma\)A path and a linear nodal-line (LNL) along the KH path, as illustrated in Fig. 3(b) and 3(c). As \(k_{z}\) evolves, the energy of the conduction bands decreases while the energy of the valence bands increases until they converge on the \(k_{z}=\pi\) plane. In fact, due to the nonsymmorphic symmetry of 176 space group these bands will maintain a twofold degeneracy throughout the entire \(k_{z}=\pi\) plane, which consequently gives rise to the formation of a 2D nodal surface (NS)[38; 39; 40], as shown in Fig. 3(f). Of particular interest is the A point, where the two doubly nodal surface intersect, forming a fourfold degenerate Dirac point with quadratic dispersion in the \(k_{x}\)-\(k_{y}\) plane, namely forming a quadratic Dirac point [41] (QDP).
Additionally, on the high-symmetric ATML plane in the Brillouin zone (BZ), there exists new nodal line structure centered at L point. As illustrated in Fig. 3(f), along LM and LA' paths, the 2-fold degenerate conduction and valence band are lift and inter-exchange partners, leading to a hourglass-like dispersion relation, creating new double degeneracy point between middle two-bands. Due to coexistence of space inversion (\(\mathcal{P}\)) and time reversal symmetry (\(\mathcal{T}\)) these crossing points are not isolated, they form a continuous closed curve in the momentum space. Indeed, it gives rise to a hourglass nodal-line (HNL)[42; 43] centered at L point and linked to A point as schematically shown in Fig. 3(g). Considering the \(\widetilde{\mathcal{C}}_{6z}\) symmetry, there will be six equivalent HNL which are linked to the central Dirac point, extending along AL direction, forming a nodal-network structure[44; 45; 46] in the extended BZ. As a whole, multi-dimensional fermions including 0D QDP at A point, 1D QNL/LNL along \(\Gamma\)A/KH path, 1D HNL lying on ATML plane and 2D NS on
Figure 3: (a) Comparison showing of four-bands of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) around Fermi level by DFT and tight-binding (TB) model, where the fitting parameters of Hamiltonian are chosen as \(e_{1}\)=-0.224, \(t_{1}\)=0.165, \(r_{1}\)=0.008, \(s_{1}\)=-0.009 and \(s_{2}\)=-0.005. The irreducible representations are marked for each band, where the superscript \(\pm\) denotes the parity eigenvalues. (b) Double degenerated band along \(\Gamma\)A path. (c) Double degenerated band along KH path. (d) Hourglass-like band dispersion along LM path, (e) Hourglass-like band dispersion along LA’ path. The A’ (0, 0, 0.48)located at \(\Gamma\)A path. (f) 3D view of quadratic Dirac point at A point on \(k_{z}\)=\(\pi\) plane. (g) Schematically illustrations for the location of multi-dimensional fermions in the Brillouin zone.
\(k_{z}=\pi\) plane are clearly demonstrated in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\).
The tight-binding model and low energy effective \(k\)\(\cdot\)\(p\) model of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\)
To unravel the origin of multi-dimensional fermions in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\), we construct effective models through symmetry analysis. The lattice of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) belong to a nonsymmorphic space group \(P6_{3}/m\) (No. 176) which hosts various types of symmetry operators such as rotation (\(\mathcal{C}_{3z}\)), spatial-inversion (\(\mathcal{P}\)), mirror (\(\mathcal{M}_{z}\)) and screw-rotation (\(\widetilde{\mathcal{C}}_{2z}\)). Based on the first-principles calculations, we can find that appearance of the four-fold Dirac point at the \(A\) arises from the essential band degeneracy, which corresponds to the 4D irreducible representations (IRRs) of \(A_{2}A_{3}\) [see Fig. 3(a)]. Under the basis of \(A_{2}A_{3}\), the matrix representations of the generators can be given as
\[\widetilde{\mathcal{C}}_{6z}=\frac{\sqrt{3}}{2}\Gamma_{0,3}+\frac{i}{2} \Gamma_{3,3},\ \ \mathcal{P}=\Gamma_{0,1},\ \ \mathcal{T}=\Gamma_{1,0}\mathcal{K} \tag{3}\]
with \(\Gamma_{i,j}=\sigma_{i}\otimes\sigma_{j}\), \(\sigma_{0}\) as the \(2\times 2\) identity matrix and \(\sigma_{i}\) (\(i=1,2,3\)) as the Pauli matrix. \(\mathcal{K}\) is a complex conjugate operator. With the standard approach [47], the effective Hamiltonian around the \(A\) point retained to the leading order reads
\[\mathcal{H}_{A} = \left[c_{1}+c_{2}(k_{x}^{2}+k_{y}^{2})\right]\Gamma_{0,0}+c_{3} \Gamma_{3,3}k_{z} \tag{4}\] \[+ \left(\alpha_{1}\Gamma_{+,1}k_{-}^{2}+\alpha_{2}\Gamma_{+,0}k_{+} k_{z}+h.c.\right).\]
Here, \(\Gamma_{\pm,i}=\sigma_{\pm}\otimes\sigma_{i}\) with \(\sigma_{\pm}=(\sigma_{1}\pm i\sigma_{2})/2\), \(k_{\pm}=k_{x}\pm ik_{y}\). \(c_{i=1,2,3}\) is a real parameter, and \(\alpha_{i}\) are complex parameters. Clearly, the obtained Hamiltonian (4) exhibits linear band splitting along the \(k_{z}\) direction and quadratic dispersion in \(k_{x}\)-\(k_{y}\) plane, thus describes a QDP. Moreover, the QDP hosts a toplogical charge \(\mathcal{C}\)=0 due to the presence of \(\mathcal{PT}\) symmetry.
Besides the 0D Dirac point, one can observe a doubly degenerate nodal line along the \(\Gamma\)A path in Fig. 3(b). The little group of the \(\Gamma\)A path is \(\mathcal{C}_{6}\), which is an abelian group without any 2D IRRs. Fortunately, there exists an anti-unitary operator \(\mathcal{PT}\) in the magnetic co-little group of \(\Gamma\)A, which can stick two 1D IRRs together to form a 2D IRRs \(\Delta_{3}\Delta_{5}\) or \(\Delta_{4}\Delta_{6}\) [see Fig. 3(b)], and thus protects a nodal line existence. We take \(\Delta_{3}\Delta_{5}\) as an example, the matrix representation of symmetry generators can be expressed as
\[\widetilde{\mathcal{C}}_{6z}=-\frac{1}{2}\sigma_{0}+\frac{\sqrt{3}i}{2}\sigma _{3},\ \ \mathcal{PT}=\sigma_{1}\mathcal{K}. \tag{5}\]
Under the symmetry constraints, the effective Hamiltonian around the \(\Gamma\)A path could be derived as
\[\mathcal{H}_{\Gamma A} = \left[c_{1}+c_{2}k_{z}+c_{3}(k_{x}^{2}+k_{y}^{2})\right]\sigma_{0} \tag{6}\] \[+ \left(\alpha_{1}\sigma_{+}k_{+}^{2}+h.c.\right),\]
where \(c_{i}\) are real parameters, and \(\alpha_{1}\) is a complex parameters. The Hamiltonian (6) describes a QNL located at the \(\Gamma\)A path, which exhibits linear dispersion along \(k_{z}\) and quadratic dispersion normal to nodal line. For the other 2D IRRs \(\Delta_{4}\Delta_{6}\) in the co-little group of \(\Gamma\)A, one can check that its effective Hamiltonian shares the same form as Eq. (6), which also describes a QNL. Besides, applying above analysis to the double degenerate nodal line along KH path with IRRs \(P_{2}P_{3}\) [see Fig. 3(c)], the corresponding effective Hamiltonian is written as,
\[H_{KH} = \left(c_{1}+c_{2}k_{z}\right)\sigma_{0}+\left(c_{3}k_{x}+c_{4}k_{ y}\right)\sigma_{1} \tag{7}\] \[+ \left(c_{3}k_{y}-c_{4}k_{x}\right)\sigma_{2}\]
which describes a LNL as expected.
In addition, there also exists a NS with 2D manifold of reciprocal space due to the presence of twofold screw-rotational symmetry \(\widetilde{\mathcal{C}}_{2z}=\left\{\mathcal{C}_{2z}|0,0,\frac{1}{2}\right\}\) in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\). It is worth noting that a combination of twofold screw-rotational symmetry and and time-reversal symmetry \(\mathcal{T}\) could guarantee the appearance of NS on boundary of the Brillouin zone (BZ) [48]. Specifically, one can find that \((\widetilde{\mathcal{C}}_{2z})^{2}=e^{ik_{z}}\), \(\mathcal{T}^{2}=1\) for spinless case. Therefore, \(\widetilde{\mathcal{C}}_{2z}\mathcal{T}\) is anti-unitary operator and satisfies \((\widetilde{\mathcal{C}}_{2z}\mathcal{T})^{2}=-1\) on the \(k_{z}=\pi\) plane, which leads to Kramers-like degeneracy of NS. The result is also confirmed by our DFT calculations.
Since the multi-dimensional fermions including QDP, QNL/LNL, HNL and NS all originate from the isolated four bands near the Fermi surface, a simple four-band lattice model is possible to be established, and can be used for further investigation of the physical properties related to Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\). Based on the first-principles calculations, we identify the IRRs of four bands near the Fermi level belong to a single elementary band representations \({}^{1}E^{{}^{\prime\prime}2}E^{{}^{\prime\prime}}\oplus 2a\) of SG 176 in topological quantum chemistry [49; 50; 51]. Therefore, we can construct a four-band lattice model by setting two basis orbitals \(\left(\left|p_{x}\right\rangle,\left|p_{y}\right\rangle\right)\) on \(2a\) Wyckoff position in terms of the relationship between the site-symmetry of the Wyckoff position and the EBRs. The matrix representation of generators of SG 176 are given by
\[D(\mathcal{C}_{3z}) = \frac{1}{2}\Gamma_{0,0}-\frac{\sqrt{3}i}{2}\Gamma_{0,2},\ \ D( \widetilde{\mathcal{C}}_{2z})=-\Gamma_{1,0} \tag{8}\] \[D(\mathcal{P}) = -\Gamma_{1,0},\ \ D(\mathcal{T})=\Gamma_{0,0}\mathcal{K}. \tag{9}\]
Resultantly, the symmetry allowed lattice model may be written as
Here, \(e_{1}\) represents the onsite energy, \(t_{1}\) and \(r_{1}\) correspond to the nearest neighbor and next neighbor intra-chain hopping along the \(z\)-direction, respectively. Additionally, \(s_{i}\) denotes the inter-chain coupling. In Fig. 3(a), we plot the electronic band structure of Eq. (10), one can observe the established lattice model indeed captures the main feature of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) material with multi-dimensional Fermions.
III Semi-half-metal state of Pb\({}_{10}\)(Po\({}_{4}\))\({}_{6}\)O\({}_{4}\) under ferromagnetic ground state
Building on our previous analysis, the low energy electron states of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) are mainly come from the 4\(e\) O-atoms. These four oxygen atoms can be viewed as two pairs of interacting oxygen molecules. Given the inherent paramagnetism of oxygen molecule (2.0 \(\mu_{B}\)) due to its half-filled 2-fold \(\pi^{*}\) bond, we further explore the underlying magnetic properties of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\). The spin polarized calculation is carrier out and we found the system possesses a total magnetic moment of 2.0 \(\mu_{B}\), primarily originating from O-atom at 4\(e\). The magnetic moment of O-atom in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) is only half of that of isolated oxygen molecule. The reduction in the magnetic moment could be interpreted from two perspective. On the one hand, these oxygen atoms have gained electrons (0.4 \(e\)) from Pb atoms, leading to electron occupancy exceeding half-filling. On the other hand, there is electron hopping between two O\({}_{2}\) molecules, weaken the electron correlation effect. Both of these factors contribute to a reduction in the magnetic moment of the oxygen atoms compared to isolated O\({}_{2}\) molecules.
Moreover, by constructing intra-molecular and inter-molecular anti-ferromagnetic configurations, we discovered that the ferromagnetic configuration along the \(z\) direction is the ground states (with 261 meV lower than the non-magnetic state and 217 meV lower than the anti-ferromagnetic state). In Fig. 4, we present the spin-polarized energy bands, revealing a spin splitting that leads one channel to become a semiconductor while the other spin channel crosses the Fermi level. This configuration is known as a half-metal state. What's particularly intriguing is the presence of completely spin-polarized flat bands at \(k_{z}\)=0 plane and topological semimetal states, including QDP at A point and the HNL around surrounding L point, all located at the Fermi energy level. This confirms Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\) belongs to the rare semi-half-metal, which may exhibit immensely strong correlation effects and topological transport properties.
## IV Conclusions
In summary, our theoretical investigation has unveiled the presence of flat bands and novel topological semimetal states in the LK-99 derivative. We have elucidated that the origin of the nearly flat dispersion around the Fermi level primarily stems from a quasi 1D O-atom chain. Furthermore, the unique arrangement of O-atoms with nonsymmorphic symmetry plays a crucial role in generating a 4-fold QDP at the A point, accompanied by the presence of highly interesting HNL fermions. To comprehensively capture the topological features of diverse multi-dimensional fermions embedded in Pb10(PO4)6O4, we have developed both local low-energy kp models and a global four-band tight-binding model. Our research not only unveils a promising material but also opens up avenues for exploring the interplay between nontrivial electronic states, magnetism, and su
Figure 4: Spin polarized band structure of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\). Insets are the enlarged bands around A and L points.
perconductivity within the Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O\({}_{4}\).
This work is supported by the National Natural Science Foundation of China (NSFC, Grants No. 12304086, No. 12204330), and Dr. B. Fu also the Sichuan Normal University for financial support (Grant No. 341829001).
|
2302.05899 | Bl0ck: Paralyzing 802.11 connections through Block Ack frames | Despite Wi-Fi is at the eve of its seventh generation, security concerns
regarding this omnipresent technology remain in the spotlight of the research
community. This work introduces two new denial of service attacks against
contemporary Wi-Fi 5 and 6 networks. Differently to similar works in the
literature which focus on 802.11 management frames, the introduced assaults
exploit control frames. Both the attacks target the central element of any
infrastructure-based 802.11 network, i.e., the access point (AP), and result in
depriving the associated stations from any service. We demonstrate that, at the
very least, the attacks affect a great mass of off-the-self AP implementations
by different renowned vendors, and it can be mounted with inexpensive
equipment, little effort, and a low level of expertise. With reference to the
latest standard, namely, 802.11-2020, we elaborate on the root cause of the
respected vulnerabilities, pinpointing shortcomings. Following a coordinated
vulnerability disclosure process, our findings have been promptly communicated
to each affected AP vendor, already receiving positive feedback as well as a -
currently reserved - common vulnerabilities and exposures (CVE) id, namely
CVE-2022-32666. | Efstratios Chatzoglou, Vyron Kampourakis, Georgios Kambourakis | 2023-02-12T12:33:48Z | http://arxiv.org/abs/2302.05899v2 | # Block: Paralyzing 802.11 connections through Block Ack frames
###### Abstract
Despite Wi-Fi is at the eve of its seventh generation, security concerns regarding this omnipresent technology remain in the spotlight of the research community. This work introduces two new denial of service attacks against contemporary Wi-Fi 5 and 6 networks. Differently to similar works in the literature which focus on 802.11 management frames, the introduced assaults exploit control frames. Both the attacks target the central element of any infrastructure-based 802.11 network, i.e., the access point (AP), and result in depriving the associated stations from any service. We demonstrate that, at the very least, the attacks affect a great mass of off-the-self AP implementations by different renowned vendors, and it can be mounted with inexpensive equipment, little effort, and a low level of expertise. With reference to the latest standard, namely, 802.11-2020, we elaborate on the root cause of the respected vulnerabilities, pinpointing shortcomings. Following a coordinated vulnerability disclosure process, our findings have been promptly communicated to each affected AP vendor, already receiving positive feedback as well as a - currently reserved - common vulnerabilities and exposures (CVE) id, namely CVE-2022-32666.
Keywords:Network security IEEE 802.11 Wi-Fi DoS Vulnerabilities CVE
## 1 Introduction
Over the past 26 years, the IEEE 802.11 standard, commonly referred to as Wi-Fi, continuously evolved itself by improving the speed, stability, and security of wireless local area network (WLAN) connections. The seventh generation (Wi-Fi 7) of this widespread technology is already on the nearby horizon, following Wi-Fi 6E which added support for the 6GHz spectrum to Wi-Fi 6. IEEE 802.11 networks are omnipresent, not only in public places like coffee shops, libraries, airports, hotels, universities, but also in houses and corporate and enterprise |
2310.03021 | Effects of feedback-free starburst galaxies on the 21-cm signal and
reionization history | Different star-formation models at Cosmic Dawn produce detectable signatures
in the observables of upcoming 21-cm experiments. In this work, we consider the
physical scenario of feedback-free starbursts (FFB), according to which the
star-formation efficiency (SFE) is enhanced in sufficiently massive halos at
early enough times, thus explaining the indication from the James Webb Space
Telescope for an excess of bright galaxies at $z \geq 10$. We model the
contribution of FFBs to popII SFE and compute the impact these have on the
21-cm global signal and power spectrum. We show that FFBs affect the evolution
of the brightness temperature and the 21-cm power spectrum, but they only have
a limited effect on the neutral hydrogen fraction. We investigate how the
observables are affected by changes in the underlying star formation model and
by contribution from popIII stars. Finally, we forecast the capability of
next-generation Hydrogen Epoch of Reionization Array (HERA) to detect the
existence of FFB galaxies via power spectrum measurements. Our results show the
possibility of a significant detection, provided that popII stars are the main
drivers of lowering the spin temperature. Efficient popIII star formation will
make the detection more challenging. | Sarah Libanore, Jordan Flitter, Ely D. Kovetz, Zhaozhou Li, Avishai Dekel | 2023-10-04T17:59:18Z | http://arxiv.org/abs/2310.03021v2 | # Effects of feedback-free starburst galaxies on the 21-cm signal and reionization history
###### Abstract
Different star-formation models at Cosmic Dawn produce detectable signatures in the observables of upcoming 21-cm experiments. In this work, we consider the physical scenario of feedback-free starbursts (FFB), according to which the star-formation efficiency (SFE) is enhanced in sufficiently massive halos at early enough times, thus explaining the indication from the James Webb Space Telescope for an excess of bright galaxies at \(z\gtrsim 10\). We model the contribution of FFBs to popII SFE and compute the impact these have on the 21-cm global signal and power spectrum. We show that FFBs affect the evolution of the brightness temperature and the 21-cm power spectrum, but they only have a limited effect on the neutral hydrogen fraction. We investigate how the observables are affected by changes in the underlying star formation model and by contribution from popIII stars. Finally, we forecast the capability of next-generation Hydrogen Epoch of Reionization Array (HERA) to detect the existence of FFB galaxies via power spectrum measurements. Our results show the possibility of a significant detection, provided that popII stars are the main drivers of lowering the spin temperature. Efficient popIII star formation will make the detection more challenging.
## I Introduction
Between recombination and reionization, the Universe was permeated with neutral hydrogen. The absorption of CMB photons, collisions between particles and radiation from the first stars excited some of the hydrogen atoms from the singlet to the triplet state. These processes, together with hyperfine transitions that brought the atoms back to the singlet state, sourced the cosmological 21-cm signal (see e.g., Refs. [1; 2] for review). Its redshift evolution can be used to probe the conditions of the gas in the intergalactic medium (IGM) across cosmic time.
While at very high redshift, in the so called Dark Ages, the 21-cm signal provides direct access to fluctuations in the matter density field, at Cosmic Dawn below \(z\sim 30\) it becomes particularly sensitive to astrophysical processes related to the formation of the first stars and galaxies. The way neutral HI gas heats and ionizes in this phase depends on the efficiency of star formation and it develops inhomogeneously [3; 4]. Different star formation scenarios may lead to very different reionization histories, which can potentially be probed by next generation 21-cm experiments targeting the global signal (e.g., EDGES [5] and SARAS [6]) or interferometers measuring the power spectrum, such as HERA [7], MeerKAT [8] and SKA [9].
This perspective is particularly timely, as recent JWST observations seem to suggest an anomalous large number of bright galaxies at high \(z\)[10; 11; 12; 13; 14]. Possible explanations point either to inconsistencies in the cosmological \(\Lambda\)CDM model [15; 16; 17; 18] or to the presence of uncertainties in the astrophysical model. Solutions that have been suggested within the \(\Lambda\)CDM paradigm usually contain _ad-hoc_ prescriptions to match the observations, e.g., providing enhanced star-formation efficiency [19], large luminosity-to-mass ratio due to top heavy intial mass functions [20; 21; 22] or high UV radiation [23], low dust attenuation [24], or stochasticity in the star-formation history [25].
At the center of our work, we consider instead the scenario of feedback-free starbursts (FFB), which was proposed by the authors of Ref. [26]. Under the conditions of high density and low metallicity expected at high redshifts in massive dark-matter (DM) halos, star formation is predicted to be enhanced by an increased efficiency in converting the accreted gas into stars. At other epochs and DM halo masses, stellar feedback -- namely supernovae, stellar winds, radiative pressure and photo-heating-- lead to lower efficiency. FFBs naturally emerge when the free-fall timescale for star formation is \(\sim\mathcal{O}(1\,\mathrm{Myr})\), i.e., shorter than the time required for a starburst to generate effective stellar feedback.
While giving rise to a high abundance of bright galaxies at \(z\gtrsim 10\), the FFB scenario may also leave imprints on the 21-cm signal and reionization process. The synergy with 21-cm surveys, therefore, can be the key to provide observable tests for the FFB scenario.
The goal of this work is to investigate the effect of FFB galaxies in this context. In section II we summarize the modelling required: we introduce the observables (II.1), and how they depend on star formation (II.2) in the standard and FFB scenarios. Section III describes the HERA survey characteristics and the setup of our analysis. In section IV we investigate how FFBs affect the 21-cm observables, namely the global signal and power spectrum. We account for contributions from population III stars in section V. Finally, we discuss how this analysis translates to constraints on the detectability of the FFB scenario, which we forecast for HERA 21-cm power spectrum, in section VI. We draw conclusions in section VII.
Model
To perform our analysis, we need first of all to introduce the 21-cm observables and to characterize how they depend on the underlying cosmological and astrophysical models. Crucial in this sense are the role of star formation and its efficiency; therefore, we summarize the main features of its model in the standard and FFB scenarios.
### 21-cm observables
The main observables used to analyze 21-cm surveys are the brightness temperature [27]
\[T_{b}=\frac{T_{s}-T_{\gamma}}{1+z}(1-e^{-\tau_{21}}), \tag{1}\]
and its fluctuations \(\delta T_{b}\). In the previous equation, \(T_{\gamma}\propto(1+z)\) is the CMB temperature and
\[\tau_{21}=(1+\delta)\,x_{\rm HI}\,\frac{T_{0}}{T_{s}}\frac{H(z)}{H(z)+\partial_ {r}v_{r}}(1+z), \tag{2}\]
is the 21-cm optical depth, which depends on the matter fluctuations \(\delta\), the fraction of neutral hydrogen \(x_{\rm HI}\) and the comoving gradient of the baryon peculiar velocity along the line-of-sight \(\partial_{r}v_{r}\). The dependence on the cosmological model is collected into the Hubble parameter \(H(z)\) and the normalization factor
\[T_{0}=34\ {\rm mK}\left(\frac{1+z}{16}\right)^{1/2}\left(\frac{\Omega_{b}h^{2}}{0.022}\right)\left(\frac{\Omega_{m}h^{2}}{0.14}\right)^{-1/2}. \tag{3}\]
Cosmological parameters \(\{h,\Omega_{b},\Omega_{m},A_{s},n_{s}\}\) are set at the _Planck 2018_[28] fiducial values throughout this work.
The last ingredient in Eq. (1) is the spin temperature \(T_{s}\), that quantifies the ratio between the number density of hydrogen atoms in the triplet and singlet states. At thermal equilibrium, the spin temperature is set by
\[T_{s}^{-1}=\frac{x_{\gamma}T_{\gamma}^{-1}+x_{c}T_{k}^{-1}+x_{\alpha}T_{ \alpha}^{-1}}{x_{\gamma}+x_{c}+x_{\alpha}}\,, \tag{4}\]
where \(T_{k}\) is the gas kinetic temperature and \(T_{\alpha}\sim T_{k}\)[29] is the color temperature of the Ly\(\alpha\) photons emitted by the surrounding stars. The coefficient \(x_{\gamma}\simeq 1\) couples the spin temperature to the CMB, while \(x_{c}\), \(x_{\alpha}\) couple it to the gas temperature. Following Ref. [3], \(x_{c}\) depends on particle collisions and it can be estimated as a function of the number densities of neutral hydrogen, free electrons and free protons; its effect is relevant in the IGM only at \(z\gtrsim 30\)[30]. On the other hand, \(x_{\alpha}\) is set through the Wouthynsen-Field process [31; 32; 33] to be proportional to \(J_{\alpha}(\mathbf{x},z)/(1+z)\), namely to the Ly\(\alpha\) background flux due to the integrated star formation rate. The value of \(J_{\alpha}\) depends on HI excitation due to X-rays [34] and to resonant scatterings in the Ly\(\alpha\) series [35]. Near the sources, the HI optical depth and the contribution of high energy photons redshifted into the Ly\(\alpha\) band make this coupling highly efficient. On the other hand, X-rays heat the gas faster; their luminosity is parametrized through a power-law, with a low-energy cut-off below which photons are absorbed before reaching the IGM [36].
The Ly\(\alpha\) (UV) radiation produced by astrophysical processes also leads to HI ionization. Initially, the process is balanced by the recombination rate [37; 38], which stalls the growth of the ionized regions. Once the number of ionizing photons becomes high enough to saturate the Ly\(\alpha\) coupling and make the interstellar medium transparent to other ionizing photons, these escape into the IGM [39] and the fraction of neutral hydrogen \(x_{\rm HI}\) in Eq. (1) decreases, leading to a decay in the 21-cm signal.
All these processes arise inhomogeneously in the IGM: the amplitude and size of local fluctuations determine the 21-cm power spectrum, which is defined as
\[\Delta_{\rm 21cm}^{2}=\frac{k^{3}}{2\pi^{2}}\langle\delta T_{b}\delta T_{b}^{* }\rangle. \tag{5}\]
During the epoch of star formation, Ly\(\alpha\) photons initially couple the spin temperature to the adiabatically-decreasing kinetic temperature. Only after this coupling saturates, Ly\(\alpha\) photons can heat the gas: this happens earlier in small DM halos, therefore small scales in the power spectrum have larger amplitude in this stage. Large-scale power rises later, but it quickly overcomes the small scales due to X-ray heating, whose efficiency is larger close to individual sources, which are apart one from another. Once X-ray radiation reaches the IGM, fluctuations in the power spectrum are determined by the DM density field and, as time passes, by the morphology of HI ionized regions. Since ionization initially occurs due to UV radiation inside small DM halos [40], small scale power decreases faster. Once that reionization is complete, the 21-cm signal finally disappears.
### Star Formation
Star formation modelling is the key to understanding the 21-cm signal evolution at \(z\lesssim 30\). Following Ref. [4] (MUN21), we consider a standard scenario in which reionization is driven by atomic cooling galaxies (ACGs) hosting population II (popII) stars, in agreement with faint galaxy observations and the UV luminosity function [41; 42; 38; 43]. We then summarize the feedback-free starburst scenario [26] (DEK23) and describe how it alters the star formation rate (SFR) and efficiency (SFE).
In both scenarios, we adopt the formalism from Refs. [26; 44], that characterizes the SFR per halo as
\[{\rm SFR}(z,M_{h})=f_{\rm duty}\epsilon(z,M_{h})\dot{M}_{\rm acc}(z,M_{h}), \tag{6}\]
where \(\dot{M}_{\rm acc}(z,M_{h})\) is the mean baryonic accretion rate, \(\epsilon(z,M_{h})\) is the star formation efficiency, and \(f_{\rm duty}=\exp{(-M_{\rm turn}/M_{h})}\) includes a turnover mass \(M_{\rm turn}\) to suppress the SFR on the small mass end. We approximate
\(\dot{M}_{\rm acc}\) using the analytical prescription in DEK23,
\[\dot{M}_{\rm acc}=65\,M_{\odot}{\rm yr}^{-1}\left(\frac{M_{h}}{10^{10.8}M_{\odot}}\right)^{1.14}\left(\frac{1+z}{10}\right)^{5/2}. \tag{7}\]
The SFR in Eq. (6) differs from the approximated SFR model used in MUN21 and relies on more informed galaxy formation studies (e.g., Refs. [26; 44; 45; 46; 47]); in Appendix A, we discuss in detail the difference between the two formalisms and their impact on 21-cm observables.
In the following analysis, we weight the SFR by the halo mass function1\(dn/dM_{h}\), and we marginalize over \(M_{h}\) to get the star formation rate density
Footnote 1: We adopt the \(z\)-dependent halo mass function from Ref. [48].
\[{\rm SFRD}(z)=\int dM_{h}\frac{dn}{dM_{h}}{\rm SFR}(z,M_{h}). \tag{8}\]
The SFRD is the main quantity that enters the computation of the the Ly\(\alpha\) background and X-ray heating in the 21-cm signal; the shape of the halo mass function implies that the contribution of the more massive halos is suppressed compared to the small mass ones.
Moreover, the SFRD enters the computation of the number of ionizing photons. As Ref. [38] describes in detail, to compute it we need to introduce the parameter
\[\tilde{f}_{\rm esc}=\min\left[f_{\rm esc}\left(\frac{M_{h}}{10^{10 }M_{\odot}}\right)^{\alpha_{\rm esc}},1\right]\,, \tag{9}\]
that describes the fraction of Ly\(\alpha\) ionizing photons capable of leaving the galaxies and ionizing the intergalactic medium; we use \(f_{\rm esc}=10^{-1.35}\), \(\alpha_{\rm esc}=-0.3\). The value of \(\tilde{f}_{\rm esc}\) in the Epoch of Reionization is still uncertain; recent results from CEERS [53] seem to point to a mismatch between data and theoretical prescriptions.
It is interesting at this point to note that, while the ionizing fraction depends on \(\tilde{f}_{\rm esc}\), the heating is unaffected by its value [38]. This is due to the longer mean free path that both soft-UV and X-ray photons that heat the gas have with respect to UV photons that drive reionization [54; 55]. In fact, the cross section for the absorption of ionizing photons with energy \(\geq 13.6\,\)eV is very high: whenever the HI column density is large, they get trapped inside galaxies and are not capable of reaching the IGM. The recombination of HI atoms inside the galaxies then produces a Ly\(\alpha\) cascade [56] that adds to the bulk of soft-UV photons. Because of their lower energy, these can be absorbed only if the energy matches one of the lines in the Lyman series; the cross section of this process is smaller and results in a longer mean free path, that allows them to reach the IGM. Photons with energy \(>10.2\,\)eV are later on redshifted into the Ly\(\alpha\) line, and interact with HI and diffuse in the IGM as a result of scattering due to their absorption and re-emission [56].
The main consequence of the different mean free paths of ionizing- and heating- photons is their dependence on the distribution of HI column density regions [57]. While the former is described by Eq. (9), which leads to a suppression of the ionization in the more massive halos, the latter depends on the formation efficiency of the sources that mainly produce the radiation field. In our analysis, we consider three main drivers: popII stars, formed in atomic cooling galaxies; FFBs; popIII stars formed in molecular cooling galaxies. We characterize popII and FFB efficiency in the next subsections, while popIII stars are investigated in Sec. V.
#### ii.1.1 Atomic Cooling Galaxies
In the ACG scenario where popII stars are formed, we use the prescription in MUN21 to characterize the SFE in the standard case \(\epsilon(z,M_{h})=\epsilon_{\rm MUN21}(M_{h})\), in which
\[\epsilon_{\rm MUN21}(M_{h})=\min\left[f_{*}\left(\frac{M_{h}}{10^{10 }M_{\odot}}\right)^{\alpha_{*}},1\right], \tag{10}\]
where \(f_{*}=10^{-1.25}\) sets the SFE in halos with pivot mass \(M_{h}=10^{10}M_{\odot}\).2 The power law index \(\alpha_{*}=0.5\) is modelled as in Ref. [49] to account for star-formation quenching in small DM halos.
Footnote 2: We label as \(f_{*}\) the factor that MUN21 calls \(f_{*,10}^{\rm HI}\), leaving the dependence on the mass scale \(10^{10}M_{\odot}\) implicit.
With respect to the SFRD in Eq. (8), we define \(M_{\rm turn}=\max(M_{\rm atom},M_{\rm crit})\), where \(M_{\rm atom}=3.3\times 10^{7}M_{\odot}[(1+z)/21]^{-3/2}\) is the minimum mass required to form stars via atomic cooling [50], while \(M_{\rm crit}\) characterizes the critical halo mass below which star formation is inefficient because of photo-heating [52; 51; 37]. Fig. 1 shows the ACG SFE and SFRD we adopt in our analysis.
#### ii.1.2 Feedback-Free Starburst Galaxies
When modelling star formation, the role played by stellar feedbacks, such as winds or supernova explosions, is crucial. The star formation efficiency at low \(z\) is believed to be small due to feedback [58; 59; 60; 61], while the FFB scenario introduced by DEK23 shows that the SFE is higher in massive galaxies at \(z\sim 10\), in agreement with the excess of bright galaxies in JWST observations.
For FFB to happen, the free-fall collapse time of the star-forming cloud (SFC) have to be shorter than the time required by the stellar feedbacks to become effective. The former is estimated as \(t_{\rm ff}\propto n_{\rm SFC}^{-1/2}\), where \(n_{\rm SFC}\) is the gas number density, while the latter is \(t_{\rm fhk}\simeq 1\,\)Myr. Moreover, the timescale \(t_{\rm ff}\) has to be larger than the time the gas requires to cool and form stars, \(t_{\rm cool}\propto n_{\rm SFC}^{-1}\)[62]. Finally, a large enough surface density \(\Sigma_{\rm SFC}=M_{\rm SFC}/\pi r_{\rm SFC}^{2}\) is required to prevent unbounded of the SFC gas through stellar radiative pressure and photo-ionization, \(M_{\rm SFC},r_{\rm SFC}\) being its mass and
radius [63; 64; 65]. In order for these processes to realize efficiently, not only do SFCs have to be free of their own feedbacks, but they also need to be shielded against UV radiation and winds from older-generation stars.
The former is guaranteed, since \(r_{\rm{SFC}}\) is larger than the ionizing length \(\delta r\) inside which the UV photon flux overcomes the recombination rate. As for shielding against stellar winds (see e.g., Ref. [66]), the time a shock wave takes to cross the SFCs, namely the cloud crushing time [67]\(t_{\rm{cc}}\propto n_{\rm{SFC}}^{1/2}\), has to be longer than the timescales \(t_{\rm{ff}}\), \(t_{\rm{cool}}\), previously introduced. Gas inside SFCs where FFB take place is almost completely consumed, so they reach near-zero column density [66].
All these conditions are satisfied only by short starbursts in DM halos continuously supplied by gas, all of which fragment into SFCs. As DEK23 shows, the criteria for the onset of FFB can be translated in terms of the properties of the host DM halo mass at given \(z\). Star formation in the halo is driven by FFB, thus its efficiency is enhanced with respect to the standard scenario when
\[M_{h}\geq M_{\rm{FFB}}(z)=10^{10.8}\left(\frac{1+z}{10}\right)^{-6.2}\,M_{ \odot}. \tag{11}\]
This threshold has been computed in DEK23 and it is shown in Fig. 1
Eq. (11) highlights the fact that halos of mass \(\sim 10^{10.8}\,M_{\odot}\) at \(z\sim 10\) can host FFBs; the threshold decreases at higher redshift, where their presence significantly affects star formation. At low \(z\), instead, the threshold mass gets larger, but at the same time the onset of AGN feedback and the presence of hot circumgalactic medium quench star formation in halos \(M_{h}\geq M_{\rm{q}}=10^{12}M_{\odot}\). Thus, in the local Universe, FFBs are unlikely.
The way FFBs contribute to the total star formation rate per halo \({\rm SFR_{tot}}(z,M_{h})\), can be modelled as
\[{\rm SFR_{tot}}=(1-f_{\rm{FFB}}){\rm SFR_{std}}+f_{\rm{FFB}}{\rm SFR_{FFB}}, \tag{12}\]
where \({\rm SFR_{FFB}}=\epsilon_{\rm{max}}\dot{M}_{\rm{acc}}\), and the parameter \(\epsilon_{\rm{max}}\leq 1\) describes the maximum SFE that FFB galaxies can reach. The \((z,M_{h})\) dependence, which we left implicit for brevity, is encoded in
\[\begin{split} f_{\rm{FFB}}(z,M_{h})&=\mathcal{F} \times\mathcal{S}\left[\frac{\log_{10}M_{\rm{q}}/M_{h}}{0.15{\rm dex}}\right] \times\\ &\quad\times\mathcal{S}\left[\frac{\log_{10}M_{h}/M_{\rm{FFB}}(z )}{0.15{\rm dex}}\right]\end{split} \tag{13}\]
where \(\mathcal{F}\leq 1\) is the fraction of galaxies that form in halos with \(M_{h}>M_{\rm{FFB}}(z)\) and host FFBs, while \(\mathcal{S}[x]=(1+{\rm e}^{-x})^{-1}\) is a sigmoid function varying smoothly from 0 to 1. The first sigmoid characterizes the quenching for \(M_{h}\geq M_{\rm{q}}\), while the second sets the star formation rate to its value in the standard model \({\rm SFR_{std}}\) for halos below the threshold \(M_{\rm{FFB}}(z)\), while it gains a \({\rm SFR_{FFB}}\) contribution in halos that host FFBs.
Figure 1: _Star formation models. – Left panels: SFE in the standard MUN21 case (top, Eq. (10)) and when FFBs from DEK23 are included (bottom, Eq. (14)) as a function of the halo mass and redshift. The white line indicates the threshold mass \(M_{\rm{FFB}}\) for FFBs from Eq. (11). Right panels: SFRD from Eq. (8) marginalized over \(M_{h}\) in the standard case using different \(f_{\star}\) values (top) and adding FFBs with different \(\epsilon_{\rm{max}}\) (bottom). FFBs affect star formation differently from just rescaling the efficiency._
The relation in Eq. (12) can be translated to a relation between the SFE in the standard case (\(\epsilon_{\rm MUN21}\), from Eq. (10)) and the SFE in galaxies where star formation is driven by FFBs. We consider3
Footnote 3: Eq. (6) is converted into Eq. (12) by using Eq. (6) and assuming that \(\dot{M}_{\rm tot}^{\rm tot}=M_{\rm std}^{\rm std}=M_{\rm FFB}^{\rm FP}\).
\[f_{*}\epsilon_{\rm tot}=f_{*}(1-f_{\rm FFB})\left(\frac{M_{h}}{10^{10}M_{\odot} }\right)^{\alpha_{*}}+f_{\rm FFB}\epsilon_{\rm max}, \tag{14}\]
where we kept the \(f_{*}\) normalization for consistency. We set \(\{\epsilon_{\rm max},\mathcal{F}\}=\{1,1\}\) as fiducial values, meaning that all the galaxies formed in halos with \(M_{h}\geq M_{\rm FFB}(z)\) have SFE up to \(\epsilon_{\rm max}=1\). In the analysis, we keep \(\mathcal{F}=1\) fixed, but we test the more conservative case \(\epsilon_{\rm max}=0.2\), where FFB galaxies reach smaller efficiency.
The left panels of Fig. 1 compare the standard SFE \(\epsilon_{\rm MUN21}(M_{h})\) from Eq. (10) with \(\epsilon_{\rm tot}(z,M_{h})\) from Eq. (14), in which FFBs with \(\epsilon_{\rm max}=1\) are included. In the right panels, the figure compares the standard SFRD with the cases of interest for our analysis.
## III Analysis setup
The enhanced star formation efficiency from Eq. (14) naturally has a non-negligible impact on the 21-cm signal.
To estimate the effect of FFBs on 21-cm obervables, we customized the public code 21cmFAST4[3, 4, 68]. The code simulates the reionization history by modelling the radiation fields we described in Sec. II.1 and their effects on the thermal evolution and neutral hydrogen fraction inside cells of the simulation box. The evolution of cosmological fluctuations can be consistently accounted for via the initial conditions in 21cmFirstCLASS[69, 70]. We modified the ACG SFE by introducing the redshift-dependent \(\epsilon_{\rm tot}\) from Eq. (14). Since the duration \(\sim 1\,\)Myr of the FFB phase in each galaxy is shorter than the redshift integration step in the code, we changed \(\epsilon_{\rm tot}\) at each \(z\) only for halos close to and above the mass threshold in Eq. (11), according to Eq. (13). Halos that exit the FFB condition behave as in the standard scenario, with no consequences due to their previous state. Throughout the analysis, we used a \((256\,\)Mpc\()^{3}\) simulation box, inside which 384 cells are defined on each axis for the high resolution computation, related with initial condition and displacement field, and 128 for the low resolution one, for temperature and ionization fluctuations [71]. We used initial adiabatic fluctuations described by the approximation in Ref. [72], the CLASS configuration for the matter power spectrum, the \(z\)-dependent mass function from Ref. [48], and we included the effects of redshift space distortions and relative velocities between dark matter and baryons. The code smooths the density perturbations over neighbouring cells; finally, Ly\(\alpha\) heating is also included [73] (CMB heating is not).
Footnote 4: [https://github.com/21cmfast](https://github.com/21cmfast), version 3.3.1, June 2023.
We estimate the global signal and neutral hydrogen fraction based on lightcones produced by 21cmFAST, and compute the 21-cm power spectrum with powerbox5[74].
Footnote 5: [https://github.com/steven-murray/powerbox](https://github.com/steven-murray/powerbox)
Footnote 6: [http://reionization.org/](http://reionization.org/)
### Noise model
We compute the 21-cm power spectrum noise when observed with HERA, the Hydrogen Epoch of Reionization Array.6 HERA is a next-generation radio interferometer, a low-frequency precursor of SKA in South Africa. Its final configuration will comprise of 350 parabolic dishes, 14m diameter size each, arranged in a hexagonal configuration. It will observe frequencies between 50 to 250 MHz to probe the redshifted 21-cm emission from \(z\!\sim\![5,27]\)[7]. Currently, Phase I HERA data have been released using \(\sim 50\) antennas and 18 hours of observations to probe the power spectrum at redshifts \(z=7.9\) and 10.4 and scales \(k\sim 0.19\,h/\)Mpc and \(0.34\,h/\)Mpc [75, 76, 77].
Footnote 6: [https://github.com/steven-murray/21cmSense.app](https://github.com/steven-murray/21cmSense.app)
Forecasted HERA sensitivity depends on the detector configuration and on the goodness of foreground removal. Following Refs. [78, 79, 80], the baseline length between each pair of detectors sets the transverse modes \(k_{\perp}\) that can be observed, while the bandwidth sets \(k_{\parallel}\) along the line-of-sight. These are contaminated by spectral-smooth foregrounds, which determine a wedge shape in the \((k_{\perp},k_{\parallel})\) plane. The values of \(k_{\parallel}\) associated with small \(k_{\perp}\) are mostly foreground-free. Ref. [80] defines different foreground removal models depending on the shape of the wedge edge: in this work, we will adopt the "moderate" and "optimistic" foreground removal scenarios. The former extends the edge to \(0.1\,h{\rm Mpc}^{-1}\) beyond the horizon
\[k_{\parallel}^{\rm hor}=2\pi|\vec{b}|/(Yc), \tag{15}\]
where \(|\vec{b}|\) is the baseline length, \(c\) the speed of light and \(Y=c(1+z)^{2}/\nu_{21}H(z)\) is the conversion factor between bandwidth and line-of-sight distance, while \(\nu_{21}\sim 1420.4\,\)MHz is the rest frame frequency associated with the 21-cm line. The optimistic model, instead, improves the constraining power on both the small and large scales by extending \(k_{\parallel}\) to the FWHM of the primary beam, computed as \({\rm FWHM}=1.06\lambda_{\rm obs}/14\,{\rm m}\sim 10^{\circ}\). Both models assume to add coherently different baselines, namely the integration times are summed when the same pixel is sampled more than once by redundant baselines. More detail on the noise computation is given in Refs. [79, 80].
To estimate the HERA sensitivity, we rely on the public code 21cmSense7[79, 80], which combines the contribution from the thermal noise power spectrum
Footnote 7: [https://github.com/steven-murray/21cmSense.app](https://github.com/steven-murray/21cmSense.app)
\[\Delta_{\rm th}^{2}\sim\frac{k^{3}}{2\pi^{2}}\frac{X^{2}Y\Omega}{t}T_{\rm sys} ^{2}, \tag{16}\]
and the sample variance. Here, \(X\) converts the observed angles into transverse measurements, \(T_{\rm sys}\) is the system temperature, \(t\) the duration of the observational run and \(\Omega=1.13\,{\rm FWHM}^{2}\) the solid angle associated with the primary beam. The sample variance instead is estimated as the 21-cm power spectrum in Eq. (5) and summed with the thermal noise to get the noise variance \(\sigma_{\rm HERA}^{2}\).
In our analysis, we consider a hexagonal configuration with 11 dishes per side (i.e., 331 antennas in total), each having 14m diameter. In Eq. (16), the observational time \(t\) is set to 6 hours per day over 540 days, while \(T_{\rm sys}=T_{\rm sky}+T_{\rm rev}\), where the sky temperature is \(T_{\rm sky}=60\,{\rm K}/(\nu/300\,{\rm MHz})^{2.55}\) and the receiver temperature is \(T_{\rm rev}=100\,{\rm K}\). We consider a minimum observed frequency of \(50\,{\rm MHz}\), a maximum frequency of \(225\,{\rm MHz}\) and \(8\,{\rm MHz}\) bandwidths probed by 82 channels each. This sets the observed redshift bins to
\[[z_{0},z_{1}]=\left[\frac{\nu_{21}}{50\,{\rm MHz}}-1,\frac{\nu_{21}}{(50+8)\, {\rm MHz}}-1\right],... \tag{17}\]
The 19 bins obtained are equally spaced in frequency but not in redshift, providing a finer sampling at low \(z\).
## IV FFB signatures on 21-cm observables
The analysis presented in this work has been realized using the public codes: 21cmFAST, version 3.3.1 updated in June 2023; powerbox; 21cmSense. We modified 21cmFAST to include FFB galaxies and to account for the SFR formalism defined in Eq. (6). As we discuss in detail in Appendix A, this SFR model differs from the approximation used in 21cmFAST public release: in the standard scenario, it provides a slightly larger SFR, thus anticipating reionization with respect to, e.g., results in MUN21.
We checked that, in the range of scales probed by HERA, using a single realization of the power spectrum or averaging over 5, 10 or 15 21cmFAST simulations provides variations smaller than the error bars. Therefore, to reduce the computational cost, plots and forecasts are realized using the same random seed.
### Global signal
First of all, we investigate how FFBs affect the 21-cm global signal. This observable, in fact, can help us understanding in a more straightforward way the peculiarities the FFB scenario has compared with other cases.
Fig. 2 shows how the presence of FFBs impacts the brightness temperature and reionization once different values of \(\epsilon_{\rm max}\) are considered. The value of \(T_{b}\) in Eq. (1) is estimated for our FFB prescription in Eq. (14) and compared with the standard 21cmFAST configuration from Eq. (10). For comparison, we also consider an artificial toy model in which \(\epsilon_{\rm MUN21}\) is increased over all the halo masses and the entire redshift range, by simply rescaling the value of \(f_{*}\) by 3; the SFRD for the same model is also shown in Fig. 1. This value was chosen to match the star formation efficiency required by JWST observations at \(z\sim 9\) without FFBs (see e.g., Refs. [12, 81]), but has no particular physical meaning.
When star formation efficiency is increased at high redshift, a larger amount of Ly\(\alpha\) and X-ray radiation is produced, speeding up the coupling of the spin temperature to the gas temperature and anticipating the moment in which this heats up. Therefore, in the top panel of Fig. 2, both the \(3f_{*}\) and FFB cases induce a larger \(T_{b}\) global signal and move its peak towards large \(z\), when the gas is colder (i.e., the global signal peak reaches lower values). However, as more time passes, only the more massive halos keep satisfying the conditions that allow the presence of FFB, while SFR in halos with masses between \(10^{8}\,M_{\odot}\) and \(10^{10}\,M_{\odot}\) becomes less and less efficient.
While one might expect the high efficiency of the FFBs to result in a more efficient reionization, we find that it is not necessarily the case. As described in Sec. II.1, the reionization rate is determined by the escape fraction \(\tilde{f}_{\rm esc}\) of the ionizing photons. In the model underlying our assumption, the negative value of \(\alpha_{\rm esc}\), motivated by Lyman-\(\alpha\) forest and CMB data [82], penalizes the contribution of large halos. Since these are indeed the one that host FFBs, the effect of FFBs on \(x_{\rm HI}\) is limited. In contrast, if we were using positive values of \(\alpha_{\rm esc}\) we would increase the ionizing power of massive halos and thereby improve the relevance of FFBs on reionization.
As a side comment, note that we adopted the same expression for the escape fraction \(\tilde{f}_{\rm esc}\) in Eq. (9) for both non-FFB and FFB galaxies. This quantity is determined by the neutral hydrogen column density inside a galaxy, which describes the abundance of atoms capable of absorbing the ionizing radiation, preventing it to reach the IGM. In principle, the presence of FFBs may increase \(\tilde{f}_{\rm esc}\), since they consume all the gas in the star-forming clouds, and they remove dust via steady wind [83]. As the escape fraction in the Epoch of Reionization is anyway very uncertain, we leave further investigation on how it gets affected by FFBs to future work.
To sum up, when FFBs are taken into account, the peak in the 21-cm global signal starts at higher \(z\), reaches a lower value and then ends slightly before the standard scenario; smaller values of \(\epsilon_{\rm max}\) or \(\mathcal{F}\) make the effect less significant in an almost-degenerate way. This is different from what we would expect for an overall increased SFR, as we model with \(3f_{*}\). Here, the peak shifts at higher \(z\) but, thanks to the large efficiency in small mass halos, reionization is faster and the signal reaches \(T_{b}=0\) earlier.
### Power spectrum
FFBs also affect the 21-cm power spectrum. Fig. 3 shows its redshift evolution: consistently with the global signal, the ionization bump rises earlier, at \(z\sim 8\), for the \(3f_{*}\) model, while for FFBs it matches the standard scenario at \(z\sim 6\). The presence of massive FFB-hosting halos increases \(\Delta^{2}_{\rm 21cm}\) at high \(z\); their lack of ionization power would keep the signal amplitude large even at low \(z\), but the contribution of small halos brings \(\Delta^{2}_{\rm 21cm}\) back to the standard case. The errorbars in the figure are estimated for HERA with moderate foreground, assuming the FFB scenario as fiducial, as in Sec. III.1; qualitatively, FFB signatures on the 21-cm power spectrum seems to be distinguishable in certain \((z,k)\) ranges.
Finally, in Fig. 4 we compare the redshift evolution of the scale-dependent power spectrum with and without FFBs. These plots were obtained using a \(700\,\mathrm{Mpc}\) simulation box to access larger scales. At very high redshift, the power spectrum in the FFB scenario has larger power; this can be understood comparing the amplitude of the global signal at the same epoch. At \(8<z<13\), FFBs experience a boost initially on the large, then on the small scales: only the rare, most massive halos can still host FFBs at this redshift, thus increasing correlation on large scales; since these halos are the most densely clustered, they lead to an increase in the small scale power as well. Lower redshifts, instead, are dominated by the contribution of small halos; as already discussed, this brings the shape of the power spectrum back to the standard case.
Figure 3: _FFB effect on \(\Delta^{2}_{\rm 21cm}\). – Power spectrum as function of \(z\); the scales we show are the ones HERA Phase I constrained [75, 76], once we set \(h=0.6736\) from Planck 18 [28]. Same legend as Fig. 2. The shaded area shows \(\pm\sigma_{\rm HERA}\), where the noise is computed for HERA with moderate foreground. Below \(z\simeq 17\), FFBs signatures can be detected outside the errorbars._
Figure 2: _FFB effect on \(T_{b}\) and \(x_{\rm HI}\). – \(T_{b}\)_ global signal (left) and neutral hydrogen fraction \(x_{\rm HI}\) (right) in the standard scenario using either the nominal \(f_{*}\) (black) or \(3f_{*}\) (gray, dotted), compared with the case that includes FFBs with \(\epsilon_{\rm max}=1\) (orange) or \(0.2\) (magenta, dashed). _FFBs anticipate the \(T_{b}\) peak, while they have negligible effect on \(x_{\rm HI}\), due to low \(\tilde{f}_{\rm enc}\) in massive halos._
## V Including molecular cooling galaxies
A further contribution to the SFR and 21-cm signal could come from population III (popIII) stars [84, 85]. Usually, popIII stars are associated with a pristine, metal-poor environment, and their formation is driven by H2 molecular cooling (see e.g., [86, 87, 88]). As in MUN21, we consider their formation as associated with molecular cooling galaxies (MCG) inside mini-halos, whose typical mass is \(\sim 10^{7}M_{\odot}\). Their contribution to the reionization process is still uncertain, see e.g. Refs. [89, 90, 91], and not yet completely accepted. For example, recent results from HERA Phase I do not account for MCGs in their modelling; including them, depending on their efficiency, can lead to variations in the parameter constraints [77].
In this section, we model popIII contribution to SFE and we study how this affects the 21-cm observables, accounting for uncertainties. We assume MCGs cannot host FFBs, since the modelling in DEK23 refers to atomic cooling SFCs and the threshold mass in Eq. (11) largely penalizes mini-halos, its value being \(M_{\rm FFB}>10^{7}\,M_{\odot}\) up to \(z\sim 40\). Thus, in our formalism, FFBs only enhance SFR for popII stars. Further analysis on the presence of popIII stars in massive halos and how they could be affected by FFBs are beyond the scope of this work.
### Model
Following MUN21, we approximate SFR in MCGs as8
Footnote 8: See further discussion on this SFR approximation in Appendix A.
Footnote 9: The parameter we indicate as \(f_{*}^{\rm III}\) is called \(f_{*,7}\) in MUN21.
\[{\rm SFR}^{\rm III}(z,M_{h})=\frac{M_{*}^{\rm III}f_{\rm duty}^{\rm III}}{t_{* }H(z)^{-1}}=\frac{\epsilon^{\rm III}(z,M_{h})f_{b}M_{h}f_{\rm duty}^{\rm III}}{ t_{*}t_{H}(z)}, \tag{18}\]
where \(t_{*}=0.5\) is a fudge parameter and \(t_{H}(z)=H(z)^{-1}\) is the Hubble time. The SFE is estimated as
\[\epsilon^{\rm III}(M_{h})=f_{*}^{\rm III}\left(\frac{M_{h}}{10^{7}M_{\odot}} \right)^{\alpha_{*}^{\rm III}}\,, \tag{19}\]
where we set \(\alpha_{*}^{\rm III}=0\) and we normalize with respect to the SFE in halos with \(M_{h}=10^{7}M_{\odot}\), namely \(f_{*}^{\rm III}\).9
Footnote 9: The parameter we indicate as \(f_{*}^{\rm III}\) is called \(f_{*,7}\) in MUN21.
Figure 4: _FFB effect on \(\Delta_{\rm 21cm}^{2}\) redshift evolution. – Power spectrum in the standard scenario (black continuous and dotted lines, respectively \(f_{*}\) and \(3f_{*}\) ) and including FFBs (orange, \(\epsilon_{\rm max}=1\)). The orange shaded area shows \(\sigma_{\rm HERA}\), while the gray area indicates the \(k-\)range not probed by HERA. Here we run 21cmFAST in a larger, 700 Mpc side box, to understand how FFBs contributes to larger scales. At \(z=6.5\), there is no 21-cm power spectrum for \(3f_{*}\) since reionization is complete. FFBs boost large and then small scales at \(8<z<13\), since they form in the most massive and more clustered halos._
We choose as nominal value \(10^{-2.5}\); to account for uncertainties in the MCG efficiency, in our analysis we also test \(f_{*}^{\rm III}\in[10^{-1.5},10^{-3.5}]\). We use
\[f_{\rm duty}^{\rm III}=\exp\left(-\frac{M_{\rm turn}^{\rm III}}{M_{ h}}-\frac{M_{h}}{M_{\rm atom}}\right)\,, \tag{20}\] \[M_{\rm turn}^{\rm III}=\max(M_{\rm mol},M_{\rm crit}),\]
to suppress star formation on the large halo mass end, where MCG star formation transits into the ACG scenario. The value of \(M_{\rm mol}\propto f_{v_{\rm cb}}f_{\rm LW}\) accounts for quenching of star formation on the small mass end. In minihalos, this is caused by the relative velocity between DM and baryons \(v_{\rm CB}\)[92; 93; 94; 95] and by Lyman-Werner feedbacks [96; 97; 98] due to photons with energy between 11.2 and 13.6 eV, that photo-dissociate molecular hydrogen and prevent the cooling of the gas clouds. We fix the relative velocity contribution to
\[f_{v_{\rm cb}}=\left(1+A_{v_{\rm cb}}\frac{v_{\rm cb}}{v_{\rm rms}}\right)^{B _{v_{\rm cb}}}, \tag{21}\]
where \(A_{v_{\rm cb}}=1\), \(B_{v_{\rm cb}}=1.8\), the rms velocity is \(v_{\rm rms}=v_{\rm avg}\sqrt{3\pi/8}\) and we set the average velocity to \(v_{\rm avg}=25.86\,{\rm km/s}\). As for LW feedbacks, we use [99]
\[f_{\rm LW}=1+A_{\rm LW}J_{21}^{B_{\rm LW}}, \tag{22}\]
where \(A_{\rm LW}=2\), \(B_{\rm LW}=0.6\) and \(J_{21}\) is the LW intensity in units of \(10^{-21}\,{\rm erg\,s^{-1}cm^{-2}Hz^{-1}sr^{-1}}\).
The escape fraction from mini-halos is modelled analogously to Eq. (9), with \(f_{\rm esc}^{\rm III}=f_{\rm esc}\), \(\alpha_{\rm esc}^{\rm III}=\alpha_{\rm esc}\) and using \(10^{7}M_{\odot}\) as the normalizing mass scale instead of \(10^{10}M_{\odot}\).
### Effect on the 21-cm observables
The presence of MCGs inside mini-halos changes the 21-cm observables, mainly because of the larger radiation produced at high redshift. As Fig. 5 shows in the left panel, in the standard 21cmFAST case with MCGs, the global signal peak broadens and is preponed, leading also to an earlier reionization (although ACGs remain the main driver). Since popIII stars contribute also to the X-ray emission, their presence heats the gas faster, thus the \(T_{b}\) peak becomes not as deep as in the only-ACG case. Analogously, the power spectrum shown in the right panel of Fig. 5 has larger power at high \(z\), while it dies faster because of the earlier reionization.
The relevance of all these effects depends on the MCG star formation efficiency, encapsulated in the parameters \(f_{*}^{\rm III}\) and \(\alpha_{*}^{\rm III}\) in Eq. (19). In particular, following MUN21, we adopt \(\alpha_{*}^{\rm III}=0\): this choice renders the popIII star formation efficiency independent from the mass of the mini-halos up to the turnover mass. If, instead, we had chosen \(\alpha_{*}^{\rm III}<0\), star formation in smaller halos would have been accelerated. On the other hand, effects due to large values of \(f_{*}^{\rm III}\) partially cover the high-\(z\) contribution of FFB galaxies in both the observables. It is clear then that accounting for MCGs makes more challenging to detect the signatures of the FFB scenario.
## VI Fisher forecasts
In the previous sections, we estimated the effect of the existence of FFB galaxies on the 21-cm global signal and power spectrum. We now want to understand if HERA [7] will be able to detect the signatures of this scenario, provided the uncertainties on the MCGs contribution described in Sec. V. For simplicity, we begin this analysis by ignoring the contribution of popIII stars; later, we relax this assumption in Sec. VI.2. Uncertainties related with the SFR model are discussed in Appendix A.
To derive forecasts on the FFB detectability, we compute the Fisher matrix:
\[F_{\alpha\beta}=\sum_{z,k}\frac{1}{\sigma_{\rm HERA}^{2}}\frac{\partial \Delta_{21{\rm cm}}^{2}}{\partial\theta_{\alpha}}\frac{\partial\Delta_{21{ \rm cm}}^{2}}{\partial\theta_{\beta}}, \tag{23}\]
where derivatives are performed with respect to
\[\theta=\{\epsilon_{\rm max},\log_{10}f_{*},\alpha_{*},\log_{10}(L_{X}/{\rm SFR }),\log_{10}f_{\rm esc},\alpha_{\rm esc}\}. \tag{24}\]
In the parameter set, \(\epsilon_{\rm max}\) describes the properties of the FFB scenario, \(\{\log_{10}f_{*},\alpha_{*}\}\) characterize the ACG star-formation efficiency, \(\{\log_{10}f_{\rm esc},\alpha_{\rm esc}\}\) the escape fraction and \(\log_{10}(L_{X}/{\rm SFR})\) the X-ray luminosity. Degeneracies between the parameters are accounted via the process of marginalization; more details on their role in determining the 21-cm signal can be found in Sec. II.1 and MUN21. Fiducial values are summarized in Tab. 1; we use uninformative priors on all the parameters. Variances \(\sigma_{\rm HERA}^{2}\) are computed through 21cmSense for the FFB scenario and including thermal noise and sample variance. The sum is performed over the \(k\) bins computed by 21cmSense and the 19 \(z\)-bins defined by the HERA 8 MHz bandwidth.
### FFB detectability
First of all, we consider only the contribution of ACGs and FFBs, as described in Sec. II.2. We estimate that, in the case of moderate foreground, the relative marginalized error on \(\epsilon_{\rm max}=1\) is \(\sigma_{\epsilon_{\rm max}}/\epsilon_{\rm max}\simeq 13\%\); optimistic foreground improves the result to \(\sigma_{\epsilon_{\rm max}}/\epsilon_{\rm max}\simeq 1\%\). Provided that the relative difference between \(z=10\) SFRD
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline \(\epsilon_{\rm max}\) & \(\log_{10}f_{*}\) & \(\alpha_{*}\) & \(\log_{10}(L_{X}/{\rm SFR})\) & \(\log_{10}f_{\rm esc}\) & \(\alpha_{\rm esc}\) \\ \hline
1; 0.2 & \(-1.25\) & 0.5 & 40.5 & \(-1.35\) & \(-0.3\) \\ \hline \end{tabular}
\end{table}
Table 1: Fiducial values in our Fisher forecast; for the FFB-related parameter \(\epsilon_{\rm max}\) we consider two cases, as discussed in Sec. II.2.2. Other cosmological and astrophysical parameters that enter 21cmFAST are fixed throughout this work.
in the standard-ACG and FFB scenarios is \(\sim\mathcal{O}(40\%)\), our analysis shows that the existence of FFBs can be detected with high significance both considering moderate and optimistic foreground removal. This was expected from the qualitative description in Sec. III: FFBs have a relevant impact on the 21-cm power spectrum and unique features with respect to a simple enhancement of SFR. Thus, degeneracies between parameters entering the Fisher computation are tiny; we check that degeneracies are small between FFBs and other ACG-related astrophysical parameters through the contour plot in Fig. 6.
Smaller values of \(\epsilon_{\rm max}\) lead to closer SFRD in the two scenarios and to a weaker constraining power in the 21-cm analysis. For example, \(\epsilon_{\rm max}=0.2\) yields \(\sigma_{\epsilon_{\rm max}}/\epsilon_{\rm max}\simeq 22\%\) with moderate foreground and \(\simeq 3\%\) with optimistic foreground, against a relative difference \(\sim\mathcal{O}(5\%)\). We note that decreasing the fraction of FFB galaxies in the massive halos via the \(\mathcal{F}\) parameter in Eq. (12) would lead to similar considerations, since its value is degenerate with \(\epsilon_{\rm max}\).
This case represents our benchmark, providing the best results we can get assuming the SFR model is known and described by Eq. (6). Further discussion on SFR model uncertainties can be found in Appendix A.
### Effect due to MCG contributions
We now account for contributions from popIII stars hosted by MCGs. To do so, we compute the Fisher matrix including in the parameter set also \(\{f_{*}^{\rm III},\alpha_{*}^{\rm III}\}\); other popIII-related parameters in 21cmFAST, namely \(\{\log_{10}f_{\rm esc}^{\rm III},\log_{10}(L_{X}/{\rm SFR})^{\rm III}\}\), are fixed to their fiducial values \(\{-1.35,40.5\}\) throughout the analysis. While for the slope we consider as fiducial \(\alpha_{*}^{\rm III}=0\), for the efficiency we test \(\log_{10}f_{*}^{\rm III}\in[-1.5,-3.5]\), to account for the large uncertainties on this parameter. The "nominal" case assumes \(\log_{10}f_{*}^{\rm III}=-2.5\); "high efficiency MCGs" adopt \(\log_{10}f_{*}^{\rm III}>-2.5\); and finally "low efficiency MCGs" consider \(\log_{10}f_{*}^{\rm III}<-2.5\). We discuss \(\epsilon_{\rm max}=1\) for conciseness; smaller values lead to less stringent constraints, consistently with results in Sec. VI.1.
Fig. 7 collects our results on \(\sigma_{\epsilon_{\rm max}}^{\rm III}\), i.e., the marginalized error on \(\epsilon_{\rm max}\) once MCGs are included in the analysis. In the case of moderate foreground, the presence of MCGs lowers the significance of the FFB detection: while with "nominal" and "low efficiency MCGs" values,
Figure 6: _Confidence ellipses. – Marginalized \(1\sigma\) confidence ellipses in the ACG+FFB case from Sec. VI.1 when \(\epsilon_{\rm max}=1\). The FFB parameter \(\epsilon_{\rm max}\) has small degeneracies with other ACG-related parameters that affect the power spectrum._
FFB signatures can be still partially detectable, for "high efficiency MCGs" the power spectrum becomes almost indistinguishable from the scenario without FFBs. The situation changes when optimistic foreground is considered: here, FFBs can be detected in both the "nominal" and "low efficiency MCGs" cases, while for "high efficiency MCGs" the FFB detection is still plausible, even if with smaller significance. Even if the conditions for optimistic foreground are quite hard to reach, this result sets a benchmark for HERA's constraining power on FFBs: results of a future data-analysis will fall in the interval bracketed by the two lines in Fig. 7.
## VII Conclusions
Upcoming years will provide improved measurements of the 21-cm global signal and power spectrum from the Epoch of Reionization. Combined with other probes, 21-cm experiments will shed light on the processes that regulate star formation in the first galaxies. Uncertainties still exist on the star formation modelling, particularly regarding the role of popIII stars and stellar feedbacks, for which observations in the local Universe suggest an important role in quenching star formation efficiency. An extrapolation of feedback models to high redshifts should take into account other phenomena.
The authors of Ref. [26] introduced the process of feedback-free starbursts, namely star formation events with short timescales that should arise in high redshift galaxies. For these to be efficient, gas clouds in which star formation takes place have to be dense enough and with low metallicity; these conditions guarantee that star formation has enough time to be realized before stellar feedbacks become effective. Moreover, under similar conditions, star-forming clouds would be shielded against radiation and winds from older stars. Overall, it is possible to show that these processes boost star formation efficiency inside halos above a certain mass threshold, whose value increases with cosmological time. Therefore, in the late Universe, feedback-free starbursts are rare since they can only be hosted by very massive halos; moreover, once AGN feedbacks set up, star formation always gets quenched in halos \(>10^{12}M_{\odot}\). On the contrary, at high redshift the evolution of the threshold mass indicates that feedback-free starbursts can be found even in smaller halos; their presence could explain the existence of high redshift, massive galaxies observed by JWST.
In this work, we investigated the observational signatures such feedback-free starbursts would have on the 21-cm signal. We modelled their contribution to star formation efficiency in atomic cooling galaxies and implemented it into 21cmFAST to estimate their effect on the 21-cm global signal and 21-cm power spectrum.
Our main results can be summarized as follows.
* The redshift and mass dependence of the SFE in the FFB scenario speed up the evolution of the brightness temperature and of the 21-cm power spectrum before \(z\sim 15\). At lower redshift, instead, their evolution gets closer to the non-FFB scenario. These result respectively from the coupling between the spin and gas temperatures, and from the X-ray heating: the coupling is stronger at high \(z\) when FFBs are accounted for, due to the low-mass halos that host FFB galaxies at those times; the heating, instead, gets effective at lower \(z\), where only massive halos can still host FFBs.
* On the other hand, the evolution of the neutral hydrogen fraction is only weakly affected by the presence of feedback-free starbursts. This is because the low-mass halos with high escape fraction of ionized photons host FFBs only prior to \(z\sim 15\), practically before the onset of reionization. At lower redshift, such halos tend to be without FFBs, and they therefore contribute to reionization similarly to the standard scenario. On the other hand, the high-mass FFB galaxies at these later times have a negligible contribution to reionization because of their lower escape fraction.
* We forecasted the detectability of the FFB scenario in the different regimes. We showed that future interferometers, such as HERA, will be able to detect signatures of their existence in the 21-cm power spectrum, compared with the standard scenario that only includes popII stars formed in atomic cooling galaxies. We also checked how our results change when the FFB efficiency is lower.
Figure 7: _Summary of our constraints on FFBs. – Marginalized \(1\sigma\) error on \(\epsilon_{\rm max}\) including MCG as a function of \(\log_{10}f_{*}^{\rm Ill}\), with moderate (orange) and optimistic (magenta) foreground. Horizontal dashed lines mark the case with only ACGs described in Sec. VI.1. The thin, black line shows \(\sigma_{\epsilon_{\rm max}}/\epsilon_{\rm max}=1/3\) as a reference, for which \(\epsilon_{\rm max}\) can be detected \(\sim\)\(O(3\sigma)\). FFB signatures can be detected by HERA when popIII star formation efficiency is not too high._
* We accounted for the possible contribution at high redshift of popIII stars in molecular cooling galaxies and showed that this may hide the effect of FFBs. We drew forecasts as a function of popIII efficiency: our results show that, except for cases with high efficient popIII star formation, signatures of the FFB scenario can still be detected. The significance level will depend on the foreground level.
To conclude, our work highlights the crucial role 21-cm experiments can have in testing astrophysical scenarios. Their synergy with other probes, such as JWST data, in the upcoming years will foster our research of the high redshift Universe, helping us to shed light on the puzzles related to reionization and the birth of the first galaxies.
###### Acknowledgements.
SL acknowledges the Azrieli Foundation for support. JF is supported by an ongoing Negev Scholarship by the Kreitman School at Ben-Gurion University. EDK acknowledges support by Grant No. 2022743 from the US-Israel Bi-national Science Foundation (BSF) and Grant No. 2307354 from the U.S. National Science Foundation (NSF). AD and ZL are supported by the Israel Science Foundation Grant ISF 861/20. ZL has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101109759 ("CuspCore"). The authors thank Debanjan Sarkar, Lukas J. Furtak and Brant Robertson for useful and stimulating discussions.
## Appendix A Changing the star formation model
Throughout the main text, all the ACG- and FFB-related results were obtained under the assumption that the SFR is computed based on Eq. (6), which from now on we label _nominal_. This is different from the prescription defined in MUN21 and defined in the 21cmFAST public release: as a first order approximation, these works assume that the SFR in the ACG scenario is
\[\text{SFR}_{\text{approx}}(z,M_{h})=\frac{M_{*}}{t_{*}H(z)^{-1}}=\frac{ \epsilon(z,M_{h})f_{b}M_{h}}{t_{*}H(z)^{-1}}, \tag{10}\]
with \(M_{*}\) the stellar mass, \(M_{h}\) the virial mass of the host halo, \(\epsilon(z,M_{h})\) defined in Eq. (10) and \(t_{*}=0.5\). With respect to Eq. (6), this expression encodes a different redshift evolution, as it can be seen in Fig. 8. In this figure, we show the star formation rate obtained for the standard model10 using Eqs. (6) and (10), with \(\epsilon(z,M_{h})\) from Eq. (10). We show as well the SFR obtained extrapolating at high-\(z\) the results of Ref. [58]; this, however, becomes less reliable for \(z\gtrsim 15\). Since the high redshift range is crucial to model the 21-cm signal, we decided to avoid this approximation in our analysis in the main text. The plot shows that, above \(z\gtrsim 9\), the _nominal_ SFR is larger than the _approximated_ one; thus, we expect the _nominal_ SFR to foresee an earlier reionization.
Footnote 10: Including FFBs we would get similar results. We verified that the relative difference between the standard and FFB scenarios remains consistent when moving from the _nominal_ SFR in Eq. (6) to the _approximated_ SFR in Eq. (10).
### Effect on the 21-cm observables
Fig. 9 shows the global signal and the neutral hydrogen fraction using the _nominal_ and _approximated_ SFR formalism. The larger SFR obtained in the _nominal_ case anticipates the \(T_{b}\) peak with respect to the _approximated_ one; consistently, reionization is anticipated.
We also show how the _approximated_ SFR behaves once \(f_{*}\) is rescaled to \(1.7f_{*}\), in the same fashion as the enhanced model in Fig. 2: \(T_{b}\) in the _nominal_ case can be well recovered by simply change the normalization of the pivot value. Therefore, we can assume that at first order the uncertainties on the SFR model are encapsulated in the uncertainties on the parameter \(f_{*}\).
Analogous considerations can be made for the power spectrum, shown in Fig. 10. In this case, changing \(f_{*}\) in the _approximated_ model we can mimic the _nominal_ SFR power spectrum only above \(z\gtrsim 10\). The difference between the _nominal_ and _approximated_ cases are outside the forecasted errorbars; this implies HERA will be able to put valuable constraints on the SFR, reducing the uncertainties currently existing in the literature.
Figure 8: _Comparison between SFR formalisms. – We consider the standard scenario for the typical halo mass \(M_{h}=10^{10}M_{\odot}\). Solid lines indicate the nominal SFR, while dashed lines show the approximated SFR based on Eq. (10). The dotted line refers to simulation results from Ref. [58]. Different assumptions lead to different SFR redshift evolution._
### FFB contraints
We apply the _approximated_ SFR formalism to the study of FFB detectability. This allows a more straightforward comparison with other works in the 21-cm literature, which adopt the same prescription, e.g., MUN21. Analogously to Sec. VI.1, we run the Fisher analysis using the _approximated_ SFR model, with the same parameter set \(\theta\). For conciseness, we only discuss \(\epsilon_{\rm max}=1\), but a similar analysis can be performed for other values.
Since the relative difference between the standard and FFB scenarios is not changed by the change in the SFR model, results using the _approximated_ SFR depart from Sec. VI.1\(\sim\mathcal{O}(1\%)\) both under moderate and optimistic foreground assumptions. Constraints in the _approximated_ model slightly improve since the power spectrum moves to lower \(z\), where HERA is more sensitive.
As Fig. 8 highlights, the difference between the _nominal_ and _approximated_ formalism can be reproduced in the 21-cm observables by varying the value of \(f_{*}\). Thus, the degeneracy between the SFR model choice and the FFB existence can be at first order understood in terms of the degeneracy between \(f_{*}\) and \(\epsilon_{\rm max}\). This is represented by the ellipses in Fig. 6 and marginalized in the Fisher: given our results, HERA should be able to disentangle features in the 21-cm power spectrum due to the presence of FFBs from uncertainties in the SFR model.
Figure 10: \(\Delta^{2}_{\rm 21cm}\) using different SFRs. – Power spectrum at large (left) and small (right) scales as a function of \(z\). Same legend as in Fig. 9. The shaded area shows \(\pm\sigma_{\rm HERA}\) with moderate foreground with respect to the _nominal_ SFR model in the standard scenario (no FFBs). _The difference between the nominal and approximation models is captured by \(f_{*}\)_.
Figure 9: \(T_{\rm b}\) and \(x_{\rm H}\) using different SFRs. – Global signal (left) and neutral hydrogen fraction (right). Solid lines use the _nominal_ SFR, while dashed the _approximated_ SFR. The magenta line shows how the _nominal_ model can be mimicked by changing \(f_{*}\) in the _approximated_ case. _The difference between the nominal and approximation models is captured by \(f_{*}\)_. |
2303.03119 | Multiverse Predictions for Habitability: Stellar and Atmospheric
Habitability | Stellar activity and planetary atmospheric properties have the potential to
strongly influence habitability. To date, neither have been adequately studied
in the multiverse context, so there has been no assessment of how these effects
impact the probabilities of observing our fundamental constants. Here, we
consider the effects of solar wind, mass loss, and extreme ultra-violet (XUV)
flux on planetary atmospheres, how these effects scale with fundamental
constants, and how this affects the likelihood of our observations. We
determine the minimum atmospheric mass that can withstand erosion, maintain
liquid surface water, and buffer diurnal temperature changes. We consider two
plausible sources of Earth's atmosphere, as well as the notion that only
initially slowly rotating stars are habitable, and find that all are equally
compatible with the multiverse. We consider whether planetary magnetic fields
are necessary for habitability, and find five boundaries in parameter space
where magnetic fields are precluded. We find that if an Earth-like
carbon-to-oxygen ratio is required for life, atmospheric effects do not have
much of an impact on multiverse calculations. If significantly different
carbon-to-oxygen ratios are compatible with life, magnetic fields must not be
essential for life, and planet atmosphere must not scale with stellar nitrogen
abundance, or else the multiverse would be ruled out to a high degree of
confidence. | McCullen Sandora, Vladimir Airapetian, Luke Barnes, Geraint F. Lewis | 2023-03-02T23:20:16Z | http://arxiv.org/abs/2303.03119v1 | # Multiverse Predictions for Habitability: Stellar and Atmospheric Habitability
###### Abstract
Stellar activity and planetary atmospheric properties have the potential to strongly influence habitability. To date, neither have been adequately studied in the multiverse context, so there has been no assessment of how these effects impact the probabilities of observing our fundamental constants. Here, we consider the effects of solar wind, mass loss, and extreme ultra-violet (XUV) flux on planetary atmospheres, how these effects scale with fundamental constants, and how this affects the likelihood of our observations. We determine the minimum atmospheric mass that can withstand erosion, maintain liquid surface water, and buffer diurnal temperature changes. We consider two plausible sources of Earth's atmosphere, as well as the notion that only initially slowly rotating stars are habitable, and find that all are equally compatible with the multiverse. We consider whether planetary magnetic fields are necessary for habitability, and find five boundaries in parameter space where magnetic fields are precluded. We find that if an Earth-like carbon-to-oxygen ratio is required for life, atmospheric effects do not have much of an impact on multiverse calculations. If significantly different carbon-to-oxygen ratios are compatible with life, magnetic fields must not be essential for life, and planet atmosphere must not scale with stellar nitrogen abundance, or else the multiverse would be ruled out to a high degree of confidence.
multiverse; habitability; stellar activity, planetary atmospheres 2023
## 1 Introduction
The multiverse hypothesis, which posits that other universes with different laws of physics exist, is an intriguing idea in theoretical cosmology that has so far proven challenging to test [1]. This paper is part of a broader series aiming to rectify this, by generating a plethora of predictions within the multiverse framework regarding the nature of habitability [2; 3; 4; 5; 6; 7]. The core of this process is the requirement that the multiverse can only be a consistent theory of cosmology if it predicts that our presence in this particular universe is not too improbable; one way of falsifying the multiverse is to find that it predicts that the vast majority of complex (multicellular) life exists in universes with features different from our own. Our contribution to this procedure lies in the recognition that the distribution of complex life, and so observers, throughout the multiverse, depends heavily on the assumptions we make about the nature of habitability. Thus, certain habitability conditions, that are otherwise quite widely discussed, are incompatible with the multiverse. If we ultimately find that the requirements for complex life are incompatible with the multiverse, we will be able to falsify the theory, to a calculable level of statistical significance. Conversely, if we ultimately determine that all currently unknown habitability conditions turn out to be in line with multiverse expectations, we will accrue a long list of supporting evidence for the theory.
It remains to check the compatibility of each habitability condition with the multiverse framework by systematically incorporating them into our calculation of the distribution of observers throughout the multiverse, and the subsequent calculation of the probability of our observations. To this end, we have organized this endeavor into several papers on the topic, each dealing with a loosely overarching theme. The current paper explores several aspects relating to properties of planetary atmospheres, and stellar activity. The two are tightly related, and considered by many to be essential for the maintenance of planetary habitability.
The compatibility of a habitability condition \(\mathbb{H}\) with the multiverse is determined by the probability of observing our values of the fundamental constants. This is communicated through the Bayes factor, which is defined relative to the baseline case where atmospheric effects are not important \(\mathbb{H}_{0}\) by \(\mathcal{B}(\mathbb{H})=B(\mathbb{H})/B(\mathbb{H}_{0})\), where
\[B(\mathbb{H})=\mathbb{P}(\alpha|\mathbb{H})\,\mathbb{P}(\beta|\mathbb{H})\, \mathbb{P}(\gamma|\mathbb{H})\,\mathbb{P}(\delta_{u}|\mathbb{H})\,\mathbb{P}( \delta_{d}|\mathbb{H}) \tag{1}\]
and \(\mathbb{P}(x|\mathbb{H})=\min(P(x<x_{\text{obs}}|\mathbb{H}),P(x>x_{\text{obs} }|\mathbb{H}))\), for the fine structure constant \(\alpha\), the electron to proton mass \(m_{e}/m_{p}=\beta\), the proton to Planck mass \(m_{p}/M_{pl}=\gamma\), the up quark to proton mass \(m_{u}/m_{p}=\delta_{u}\), and the down quark to proton mass \(m_{d}/m_{p}=\delta_{d}\). The probability of observing particular values of the constants is defined through the probability density function \(p(x|\mathbb{H})\propto p_{\text{prior}}(x)\,\mathbb{H}(x)\), as described in more detail in [2].
For the baseline habitability condition \(\mathbb{H}_{0}\), we take the most successful account of our observations we have considered, which is that complex life requires light from a relatively narrow spectral band for photosynthesis, and that the habitability of a planet is directly proportional to the amount of entropy it receives from incident starlight [2; 4].
In Section 2, we discuss generalities of stellar properties, and how these vary with physical constants, deriving expressions that will be crucial for the rest of the paper. In Section 3, we discuss atmospheric loss processes, focusing in particular on extreme ultra-violet (XUV)-driven energy limited escape. Determining the importance of this process as constants vary necessitates determination of a great many factors, including stellar spin-down history, initial atmospheric mass, and mass required for surface water retention, which we detail within. Section 4 is dedicated to stellar wind stripping present on planets without an intrinsic magnetic field, and the conditions for planetary magnetic fields to arise.
We find that the significance of atmospheric properties depends on which additional habitability assumptions are made. If we take that an Earth-like carbon-to-oxygen ratio is required for life, as is commonly assumed, then the atmospheric conditions we consider do not strongly affect the probabilities we compute, and so they are neither favored nor disfavored by the multiverse. However, if we adopt the stance that life does not depend on the carbon-to-oxygen ratio, several atmospheric conditions do strongly affect the multiverse probabilities. Both the idea that atmospheric mass scales linearly with stellar nitrogen abundance and the idea that planetary magnetic fields are required for habitability cause the probability of our observations to significantly drop, and so both these conditions are incompatible with the multiverse hypothesis. The strategy to test the multiverse is then to check whether this prediction is correct; if life indeed does not depend on planetary carbon-to-oxygen ratio, but either of these other two conditions is found true, the multiverse will be ruled out to high significance.
## 2 How Do Stellar Properties Change in Other Universes?
Changes in stellar properties were among the first aspects to be investigated within a multiverse framework. Refs. [8; 9; 10] worked out how the properties of stars such as mass, lifetime, and luminosity change when constants vary. Ref. [11] discuss photosynthetic potential of starlight. Much work has been done on how different nuclear stability thresholds affect stellar fusion: Refs. [12; 13; 14] investigated the effects of diproton stability. Refs. [15; 16; 17; 18] discuss
the effects of alpha burning. Refs. [19; 20; 21] investigate deuteron stability. Ref. [22] investigated the consequences of tritium stability. Refs. [23; 24] discuss non-nuclear energy production pathways. Ref. [25] discuss the sizes of white dwarfs and neutron stars. Refs. [2; 26] discuss the entropy production as a key to habitability.
However, all of these previous studies have so far neglected some of the finer-grained stellar properties, which may nevertheless be just as important for determining the habitability of a planetary system. Among these are properties of stellar coronae, magnetic fields, Sunspot fraction, stellar wind, rotation, and X-ray luminosity. In part, this neglect may be due to prudence on the previous authors' parts, as many of these aspects remain imperfectly understood theoretically, making extrapolation of their behaviors to different universes fraught with potentially misplaced certainty. However, much progress has been made in the understanding of many of these aspects in recent years, and we take advantage of these recent advances to establish a first attempt at determining how these properties may differ in other universes.
### Stellar Properties
Expressions for stellar mass, radius, temperature, luminosity, and lifetime in terms of fundamental constants are all already well known (see, e.g., [27]), so we merely reproduce them here:
\[M_{\star} = 122.4\,\frac{\lambda\,M_{pl}^{3}}{m_{p}^{2}}\] \[R_{\star} = 108.6\,\frac{\lambda^{4/5}\,M_{pl}}{\alpha^{2}\,m_{p}^{2}}\] \[T_{\star} = 0.014\,\frac{\lambda^{19/40}\,\alpha^{1/2}\,m_{e}^{1/2}\,m_{p}^ {3/4}}{M_{pl}^{1/4}}\] \[L_{\star} = 9.7\times 10^{-4}\,\frac{\lambda^{7/2}\,m_{e}^{2}\,M_{pl}}{ \alpha^{2}\,m_{p}}\] \[t_{\star} = 110.0\,\frac{\alpha^{2}\,M_{pl}^{2}}{\lambda^{5/2}\,m_{e}^{2}\, m_{p}} \tag{2}\]
The symbol \(\lambda=M_{\star}/M_{\rm Ch}\) is a dimensionless parameterization of stellar mass in terms of the Chandrasekhar mass \(M_{\rm Ch}=122.4M_{pl}^{3}/m_{P}^{2}=1.4M_{\odot}\). In these and all following expressions, the functional dependence on constants is derived using physical arguments, and the coefficients are set to accurately reproduce the correct values for our Sun, for the observed values of the physical constants.
In addition, we will need the following expressions for the mass, density, orbital location, total incident power, and day length of an Earth-like planet, which is defined as both temperate (can maintain liquid surface water) and terrestrial (can retain heavy but not light atmospheric gases):
\[M_{\rm terr} = 92\,\frac{\alpha^{3/2}\,m_{e}^{3/4}\,M_{pl}^{3}}{m_{p}^{11/4}}\] \[\rho_{\rm rock} = 0.13\,\alpha^{3}\,m_{e}^{3}\,m_{p}\] \[a_{\rm temp} = 7.6\,\frac{\lambda^{7/4}\,m_{p}^{1/2}\,M_{pl}^{1/2}}{\alpha^{5}\, m_{e}^{2}}\] \[Q_{\rm solar} = 5.3\times 10^{-5}\,\frac{\alpha^{7}\,m_{e}^{9/2}\,M_{pl}^{2}}{m_{p }^{9/2}}\] \[t_{\rm day} = 376\,\frac{M_{pl}}{\alpha^{3/2}\,m_{e}^{3/2}\,m_{p}^{1/2}} \tag{3}\]
Though there will be a certain tolerable range for each of these parameters, we specify to the Earth's values for our calculations. Additionally, note that the temperate requirement has dictated that the incident stellar power is evaluated at \(a_{\rm temp}\) (=AU for our values), making this quantity independent of stellar mass.
#### 2.1.1 Speed of Stellar Wind
The escape velocity of a star is
\[v_{\rm esc}=\sqrt{\frac{2\,G\,M_{\star}}{R_{\star}}}=0.30\,\lambda^{1/10}\,\alpha \tag{4}\]
For the Sun, this is 618 km/s. The speed of solar wind is around 400-1000 km/s, roughly the same order of magnitude. This results from the fact that the escaping wind is nonthermal, as particles that have enough energy to make it off the Sun usually have a surplus of the same order.
This is larger than the thermal sound speed, which invariably depends on height. For the photosphere,
\[c_{s}\sim\sqrt{\frac{T_{\star}}{m_{p}}}=0.12\,\lambda^{19/80}\,\alpha^{1/4}\, \beta^{1/4}\,\gamma^{1/8} \tag{5}\]
The sound speed of the corona is higher, as discussed below.
#### 2.1.2 Scale Height
The scale height of a star is given by a competition between thermal and gravitational processes as
\[H_{\star}\sim\frac{c_{s}^{2}}{g}=19.7\,\frac{\lambda^{43/40}m_{e}^{1/2}\,M_{pl }^{3/4}}{\alpha^{7/2}\,m_{p}^{9/4}} \tag{6}\]
This is 100-1000 km for the Sun, and sets the scale for many processes, including the granule size and typical magnetic flux tube length.
#### 2.1.3 Stellar Magnetic Field
The magnetic field at the stellar surface is created by a highly complex and incompletely understood dynamo mechanism [28; 29]. However, the details of the precise mechanism are
unimportant for determining the overall field strength, which is set by equipartition of energy as [30]
\[B_{\rm surf}\sim\sqrt{4\pi\,P_{\rm photosphere}}=3.1\times 10^{-5}\,\frac{ \lambda^{19/20}\,\alpha\,m_{e}\,m_{p}^{3/2}}{M_{pl}^{1/2}} \tag{7}\]
For the photosphere pressure, we use \(P_{\rm photosphere}\sim T_{\star}^{\star}\), as appropriate for a \(n=3\) polytrope, which describes stellar structure well [31]. The numerical value matches the observational quantity \(B_{\rm surf}\sim 2G\). This yields an estimate for the total field strength at the surface, which consists of both open field lines that contribute to the star's long range magnetic field, as well as highly complex field configurations that do not. The long range field is related to the total strength by \(B_{\star}=f_{\rm open}B_{\rm surf}\), where \(f_{\rm open}\) is the fraction of field lines which are open. It is this factor that introduces rotational dependence to the stellar magnetic field.
#### 2.1.4 Fraction of Open Field lines
The fraction of stellar magnetic field lines which are "open" (i.e., extend to infinity, rather than form a closed loop) depends both on stellar rotation and temperature. This was postulated to depend on Rossby number in [32] as
\[f_{\rm open}=0.55\,\exp(-2.03\,Ro) \tag{8}\]
where Rossby number is the ratio of rotation period to convective turnover time, \(Ro=P_{\rm rot}/\tau_{\rm conv}\). For the convective turnover time, we use the expression from [33]:
\[\tau_{\rm conv}=\tau_{0}\,\exp\biggl{(}-\frac{T}{T_{\rm conv}}\biggr{)} \tag{9}\]
where we have neglected terms that cause shutoff for large temperatures. The turnover temperature is set by molecular absorption processes, \(T_{\rm conv}=0.27\alpha^{2}m_{e}^{3/2}/m_{p}^{1/2}\). This is normalized to yield a Rossby number of 1.96 and an open field line fraction of 0.01 for the Sun. The coefficient \(\tau_{0}\) is set dimensionally to be \(\tau_{0}\sim R_{\star}\sqrt{m_{p}/T_{\star}}=1.9\times 10^{5}\lambda^{9/ 16}M_{pl}^{9/8}/(\alpha^{9/4}m_{e}^{1/4}m_{p}^{15/8})\), and is normalized to be 246.4 days for our Sun. Expressing this in terms of fundamental parameters depends on the distribution of stellar rotation periods, which is discussed below.
### Corona
The corona is the hotter, much less dense outer layer of a star. Its properties are continuous with the star's extended stellar wind region of influence, and is the source region of most of the variable activity leading to space weather.
#### 2.2.1 Density of Corona
In the formalism of [34], the density of the corona (at the transition region) is determined by the equilibration of heating and cooling processes. The heating rate is given by \(Q_{\rm heat}\sim\rho_{\rm corona}v^{3}/\lambda_{c}\), where \(\lambda_{c}\) is the granular scale, roughly set by the scale height \(H=c_{\rm s}^{2}/g\). The cooling rate for bremsstrahlung is \(Q_{\rm cool}=n_{e}\,n_{p}\,\sigma_{T}\,v\epsilon\), where \(\sigma_{T}=8\pi/3\alpha^{2}/m_{e}^{2}\) is the Thomson cross section and \(\epsilon\sim\alpha m_{e}\) is the typical energy exchange [10]. These are equal when
\[\rho_{\rm corona}\sim\frac{m_{e}\,m_{p}\,g}{\sigma_{T}\,\epsilon}=4.7\times 10 ^{-7}\,\frac{\alpha\,m_{e}^{2}\,m_{p}^{3}}{\lambda^{3/5}\,M_{pl}} \tag{10}\]
This is equal to \(10^{-16}\) g/cm\({}^{3}\) for the Sun.
#### 2.2.2 Temperature of Corona
The corona is about two orders of magnitude hotter than the photosphere, which has proven puzzling to explain theoretically for many years. Consequently, various competing theories have been developed to explain the anomalously high temperature [35]. Perhaps the most popular account is that of Alfven wave heating, which posits that energy is transferred to the corona from the stellar interior by turbulent plasma oscillations. In the following, we only consider this theory, which gives the heat flux as [36]:
\[S=\frac{1}{2}\,\rho_{\rm corona}\,\delta v^{2}\,v_{A} \tag{11}\]
Here \(\delta v^{2}\sim T/m_{p}\) and \(v_{A}=B/\sqrt{\rho_{\rm corona}}\). This determines temperature through the diffusion equation \(S=-\kappa_{\rm th}\nabla T\sim\kappa_{\rm th}T/H_{\star}\). From [28], the thermal conductivity of a stellar plasma is
\[\kappa_{\rm th}\sim\frac{1.31}{\pi\log\Lambda}\frac{T^{5/2}}{e^{4}\,m_{e}^{1/2}} \tag{12}\]
where \(\log\Lambda\sim\)5-20 is the Coulomb logarithm, which has mild parameter dependence, but can be ignored. This can be solved for \(T\) to yield
\[T_{\rm corona}\sim\left(\frac{e^{4}\,m_{e}^{1/2}}{m_{p}^{2}}\,\frac{\rho_{\rm corona }^{1/2}\,B_{\rm surf}}{g}\right)^{2/3}=4.6\times 10^{-3}\,\frac{\lambda^{5/6} \,m_{e}^{5/3}}{\alpha^{1/3}\,m_{p}^{2/3}} \tag{13}\]
### Stellar Wind
#### 2.3.1 Mass Loss Rate
According to [31], many analytic mass loss formulas have no strong theoretical justification. Whatever the underlying mechanism for solar wind, it is constrained by the continuity equation to obey
\[\dot{M}\sim\rho_{\rm corona}\,v\,4\pi\,R_{\star}^{2}=7.0\times 10^{-5}\,\frac{ \lambda^{11/10}\,m_{e}^{2}\,M_{pl}}{\alpha^{2}\,m_{p}} \tag{14}\]
This is normalized to yield \(2\times 10^{-14}M_{\odot}/{\rm yr}\) for the Sun. With this, we may ponder whether in some universes the stellar wind is strong enough to deplete stellar material before the available nuclear energy is exhausted; in such universes, type II supernovae would not occur, with stars instead ending their lives having blown off material to the point where fusion ceases. We find \(t_{\star}\dot{M}/M_{\star}=.16\alpha^{5}\beta^{1/4}/(\lambda^{3/2}\gamma)=3 \times 10^{6}\), so that if \(\alpha\) were a factor of 20 lower, this would indeed be the case. However, this may not preclude the distribution of heavy elements into the interstellar medium, if enough reach the wind-launch site. More work is needed to determine whether this mechanism can be at play, whether heavy elements collect in the stellar core, or whether the strong wind effectively extinguishes the star before any heavy elements are created. In any case, including this boundary in parameter space does not appreciably affect the probabilities we compute.
#### 2.3.2 Alfven Radius
The Alfven radius is the point at which an appreciable azimuthal velocity component develops. This is set by
\[\frac{B_{\star}^{2}}{4\pi}\sim\rho(r)v_{r}^{2} \tag{15}\]
Throughout we take the Sun's Alfven radius to be \(R_{A}\sim 24R_{\odot}\), though it can vary by a factor of 2 throughout the solar cycle [37]. By the continuity equation, the quantity \(\rho v_{r}\propto 1/r^{2}\). The radial dependence of \(v_{r}\) can be found using Parker's model of solar wind, which gives
\[\frac{1}{v_{r}}\Big{(}v_{r}^{2}-c_{s}^{2}\Big{)}\frac{dv_{r}}{dr}=\frac{2c_{s} ^{2}}{r}-\frac{G\,M_{\star}}{r^{2}} \tag{16}\]
If we define the sonic radius \(R_{s}=GM_{\star}/(2c_{s}^{2})\), then for \(r\gg R_{s}\), this gives \(v_{r}\to 2c_{s}\log(r/R_{s})^{1/2}\)[38], though to first approximation the logarithmic dependence can be neglected.
If \(B\) is primarily dipolar, \(B(r)=B_{\star}(R_{\star}/r)^{3}\), and we find
\[R_{A}\sim\left(\frac{f_{\rm open}^{2}\,T_{\star}^{4}}{\rho_{\rm corona}\,c_{s} ^{2}}\right)^{1/4}R_{\star}=1.1\times 10^{4}\,\frac{\lambda^{11/8}\,f_{\rm open }^{1/2}\,M_{pl}}{a^{9/4}\,m_{p}^{2}} \tag{17}\]
For more generic magnetic field profiles \(B\sim r^{-q}\), the fourth root is replaced by \(1/(2q-2)\).
#### 2.3.3 X ray Luminosity
A star's X-ray luminosity, which is an important driver of planetary atmospheric loss, is greatly enhanced with respect to the thermal contribution by dynamo processes. As such, X-ray luminosity is found to correlate well with both magnetic activity and rotation speed for slowly rotating stars [39]. For stars with rotation periods less than a few days, however, the X-ray luminosity is found to saturate to about \(10^{-3}\) of the bolometric luminosity. The origin of this is not well understood, but could be due either to the saturation of surface magnetic flux, or internal dynamo [40], representing a qualitatively different regime of energy transport. These two regimes can be encapsulated with the following expression
\[L_{X}=\frac{1}{8}\,B_{\star}^{2}\,R_{\star}^{2}\,\min(v_{\rm conv},v_{\rm rot}) \tag{18}\]
which reproduces the linear rotation-activity relation between X-ray luminosity and magnetic flux found in [41]. Here, we have defined a convective speed in terms of the convective turnover time in Equation (9) as \(v_{\rm conv}=R_{\star}/\tau_{\rm conv}\).
### Rotation
Since stellar activity depends on rotation rate, and stellar rotation decreases over time, the majority of a planet's atmospheric loss may occur during the initial phase of stellar evolution. Here, we derive expressions for initial stellar rotation as well as spindown rate.
#### 2.4.1 Initial Stellar Rotation
Stars are observed to have a spread of rotation periods within the span of several days, with periods that increase as they age [42]. At formation time, one may expect that stars inherit their rotation from their collapsed dust cloud, but an order of magnitude estimate reveals that the angular momentum of the dust cloud vastly exceeds stellar angular momentum [43]. Indeed, a star possessing that much angular momentum would exceed the critical breakup velocity, and would quickly jettison its material. Instead, the star radiates angular momentum through its surrounding disk until it drops below the breakup speed, and can coalesce [42]. This process results in initial stellar rotation frequencies being close to their breakup velocity, as observed in [44]:
\[\Omega_{0}=\sqrt{\frac{2}{3}\frac{G\,M_{\star}}{R_{\star}^{3}}}=1.6\times 10^ {-3}\,\frac{\alpha^{3}\,m_{p}^{2}}{\lambda^{7/10}\,M_{pl}} \tag{19}\]
#### 2.4.2 Stellar Spindown Time
Stars lose angular momentum throughout their evolution via stellar wind. While a star's angular momentum is given by \(\dot{J}\sim MR_{\star}^{2}\Omega\), to estimate angular momentum loss we must keep in mind that the stellar wind travels radially outward until the Alfven radius, and so angular momentum loss is given by \(\dot{J}\sim\dot{M}R_{\star}^{2}\Omega\)[45]. This increased lever arm greatly enhances spindown, and also introduces extra rotation dependence, as the Alfven radius depends on spin. A linear dependence \(R_{A}\propto\Omega\) leads to a cubic evolution equation for \(\Omega\), as first discussed in [46]. Additionally, a qualitative shift in spindown behavior empirically occurs when the rotation frequency exceeds a critical value, akin to the convective turnover time given in Equation (18). This leads to the following equation governing the evolution of rotation [42]:
\[\dot{\Omega}\sim-\frac{B_{\star}^{2}\,R_{\star}^{2}}{M_{\star}\,v}\,\Omega\, \min(\Omega^{2}\,\tau_{\rm conv}^{2},1) \tag{20}\]
This also sets the spindown time as
\[t_{\rm brake}\sim\frac{M_{\star}}{\dot{M}}\frac{R_{\star}^{2}}{R_{A}^{2}}=.21 \,\frac{\alpha^{5/2}\,M_{pl}^{2}}{\lambda^{5/4}\,f_{\rm open}\,m_{e}^{2}\,m_{ p}} \tag{21}\]
For stars rotating more rapidly than the convective turnover time, spindown is set by the star's convective churn, rather than rotation. Below this, the evolution \(\dot{\Omega}\sim\Omega^{3}\) leads to the well established Skumanich law, \(P_{\rm rot}\sim\sqrt{\dot{t}}\)[47]. For fast rotators, the decay is instead exponential.
These are all the properties of stars we will need to model our habitability effects in the sections below.
## 3 How Do Atmospheric Properties Differ in Other Universes?
We now turn our attention to planetary atmospheres, and whether their character is substantially different in other universes. In particular, we ask what physics determines that the Earth's atmospheric mass is six orders of magnitude less than the planet's mass, how this compares to the minimum needed for several habitability considerations, and whether the expected atmospheric mass is lower than these thresholds for different values of the fundamental constants.
At first glance, it may seem strange to attempt to explain atmospheric mass fraction in terms of fundamental constants. After all, the solar system alone exhibits an enormous diversity of atmospheric mass fractions amongst its planets, from almost zero around small inner rocky bodies to nearly unity for the gas giants. Indeed, atmospheric mass seems to depend on a great number of variables: planetary mass, interior and surface chemistry, orbit, evolution, flux, and the presence or absence of life [48]. Even Venus, though remarkably similar to Earth in orbit and mass, has an atmosphere 90 times Earth's. However, closer inspection reveals hidden regularity; Venus's atmospheric nitrogen content is only 3-4 times that of Earth's, placing it at the same order of magnitude [49]. Its carbon dioxide content, which comprises the bulk of the atmosphere, is the same order of magnitude as that found dissolved in Earth's oceans and compressed into sedimentary rock [50]. Even Venus's initial water content is estimated to have been similar to Earth's [50] (though recent work indicates that even if its initial water content were similar, it may not have ever been able to condense from a steam atmosphere to form an ocean [51]). Evidently, this diversity stems from the different phases each species can undergo, rather than the primordial abundance of each element, giving hope that the overall mass fraction may be understood by processes operating in the early solar system, as well as galactic element abundances. Furthermore, if this is the case, we have hope of extrapolating these values to other universes.
In the following, we focus on nitrogen, as the only gas which is noncondensible under temperate conditions, and present in appreciable quantities. Its presence is essential for stability of liquid water on Earth's surface [52]. It was estimated in [53] that the nitrogen contained in the Earth's mantle is between 3-10 times that of Earth's atmospheric nitrogen, a ratio that is certainly affected by the presence of other species, but is likely to hold as a rough order of magnitude estimate under a range of conditions [49]. Earth's atmospheric nitrogen has remained constant to within a factor of two over the past 3 Gyr, as evidenced by analyzing raindrop imprint size [54] and the isotopic composition of quartz [55].
In the following, we consider two explanations for the magnitude of Earth's nitrogen abundance, corresponding to two different plausible sources: late accretion by chondrites, and initially, as dissolved material in Earth's original building blocks. Each of these hypotheses has different implications for the amounts of nitrogen on planets elsewhere in our universe, as well as throughout the multiverse. Additionally, it is an open question how planetary nitrogen abundance scales with initial stellar system nitrogen abundance, which has important implications for the multiverse, as we happen to be very close to a boundary beyond which nitrogen abundance is reduced by a factor of 270. At the two extremes, the dependence may be linear, if the nitrogen content of solar system bodies was not close to their carrying capacity, or independent, if the bodies were saturated. The dependence probably lies somewhere between these two extremes, but we report how adopting each assumption alters our multiverse probabilities, which serves to bracket the upper and lower limits for our calculations.
We then consider three atmospheric mass thresholds that are plausibly related to habitability. The first is the amount of atmosphere that can be stripped away by stellar flux. The second is related to the pressure necessary to maintain liquid surface water. Third, the mass needed to buffer diurnal temperature changes. Finally, we consider the possibility that only initially slowly rotating stars in our universe are capable of retaining their atmospheres, and assess the compatibility of this hypothesis with the multiverse.
### Possible Sources of Atmosphere
The fact that Earth possesses an atmosphere containing volatile constituents is somewhat of a mystery, given that the conditions during Earth's formation were much hotter than their condensation temperatures. Naively, this would result in inner planets that are almost completely comprised of refractory elements, which is manifestly not the case. In the following, we consider two leading theories for the origin of Earth's nitrogen atmosphere: delivery during late accretion from outer system bodies, and as a result of initial accretion from nitrogen dissolved in Earth's original building blocks.
#### 3.1.1 Initial Atmosphere Delivered during Accretion
The classic account for Earth's volatile budget is from planetesimals initially situated outside the solar system's ice line, where temperatures were below the condensation point of volatile species. It has been estimated that up to 7.5 atmospheric masses could have been delivered by carbonaceous chondrites after the main phase of planet formation was completed [56]. This account has the simplicity of explaining the origin of Earth's atmosphere and ocean by a single common source. Additionally, it can readily explain the hierarchy of why Earth's ocean is \(\sim\)100 times more massive than the atmosphere, as [57] demonstrated that the H\({}_{2}\)O/N\({}_{2}\) impact degassing ratio is \(\sim\)100 for a range of different chondrites. Finally, we would like to stress that in this scenario, final atmospheric mass will be highly stochastic, as the material delivered through late accretion is dominated by few large bodies [58]. Thus, while we compute the expected value, it should be kept in mind that this scenario yields a distribution of atmospheric mass ratios.
In [7], we derive the planetary ocean mass fraction delivered via planetesimal accretion during planet formation in terms of the amount of material delivered during late accretion. In this scenario, the atmospheric volatiles are delivered in the same manner. Therefore, we may posit the atmospheric mass fraction to simply be
\[f_{N}=0.011\,\frac{\kappa\,\lambda^{21/10}\,\gamma^{1/3}}{\alpha^{11/2}\,\beta^{ 25/12}} \tag{22}\]
For details on how this expression was obtained, we refer the reader to [7].
#### 3.1.2 Atmospheric Mass as a Result of Accretion by N-Rich Bodies
Here, we follow [59] by considering that Earth's nitrogen was delivered during accretion in the form of dissolved N inside rock and metal. We may then derive the total amount of resulting nitrogen as a function of body mass, with the presumption that only nitrogen in the interior of these planetesimals will be incorporated into the planet's final budget.
The initial nitrogen fraction of a planetesimal is \(f_{N}=m_{N}/m_{\rm pp}\), where \(m_{pp}\) is the mass of the planetesimal. The resultant nitrogen budget is obtained through the magma ocean and core as
\[\hat{f}_{N}=\frac{m_{N}^{\rm MO}+M_{N}^{\rm core}}{m^{\rm MO}+M^{\rm core}}= \frac{1+Z\,D_{N}}{1+Z}C_{N}^{\rm MO} \tag{23}\]
where \(Z=M^{\rm core}/M^{\rm mo}\), \(C_{N}^{\rm MO}=M_{N}^{\rm MO}/M^{\rm MO}\), and \(D_{N}=C_{N}^{\rm core}/C_{N}^{\rm MO}\).
For the fraction of nitrogen dissolved in the magma ocean, we use [60]:
\[C_{N}^{\rm MO}=\frac{p_{N}}{p_{1}}+fO_{2}^{-3/4}\left(\frac{p_{N}}{p_{2}} \right)^{1/2} \tag{24}\]
where \(p_{1}\) and \(p_{2}\) are coefficients, taken here to scale as \(p_{i}\)\(\propto\)\(Ry^{4}\), with \(Ry\) the Rydberg constant that dictates the electronic energy scale. The quantity \(fO_{2}\) is oxygen fugacity, and will depend of the primordial abundances of the two elements.
The partial pressure can be rewritten in terms of atmospheric nitrogen mass as
\[p_{N}=M_{N}^{\rm atm}\,\frac{g}{A}=(M_{N}^{\rm tot}-M_{N}^{\rm MO}-M_{N}^{\rm core })\frac{g}{A}=\frac{M_{pp}\,g}{A}\Big{(}f_{N}-(1-f_{N})\hat{f}_{N}\Big{)} \tag{25}\]
This can then be used to find an equation determining \(\hat{f}_{N}\):
\[\hat{f}_{N}=k_{1}(f_{N}-\hat{f}_{N})+\sqrt{k_{2}(f_{N}-\hat{f}_{N})} \tag{26}\]
where for cleanliness we have defined \(k_{1}=\zeta\tau/p_{1}\), \(k_{2}=\zeta^{2}fO_{2}^{-3/2}\tau/p_{2}\), \(\zeta=(1+ZD_{N})/(1+Z)\), and \(\tau=gM_{pp}(1-f_{N})/A\). This can be solved for \(\hat{f}_{N}\) to find
\[\hat{f}_{N}=\frac{2f_{N}k_{1}(1+k_{1})-k_{2}+\sqrt{4f_{N}(1+k_{1})k_{2}+k_{2}^ {2}}}{2(1+k_{1})^{2}} \tag{27}\]
We find that for large mass bodies, \(k_{1}\),\(k_{2}\rightarrow\infty\), \(\hat{f}_{N}\to f_{N}\), so that planetary nitrogen abundance matches the primordial value. In the limit \(k_{2}\to 0\), this expression simplifies significantly to \(\hat{f}_{N}\to f_{N}k_{1}/(1+k_{1})\). This expression allows us to derive the final nitrogen abundance as a function of planetesimal mass, by noting that \(g/A\sim G\rho_{\rm rock}^{4/3}/M^{1/3}\), \(D_{N}\sim\exp((b+cP)/T)\), \(T\sim GM/R\sim GM^{2/3}\rho_{\rm rock}^{1/3}\).
To determine the planetary nitrogen abundance fraction that results from original accretion, we need the typical planetesimal size. For this, we use the isolation mass \(M_{\rm Iso}=1.3\times 10^{8}\,\kappa^{3/2}\,\lambda^{25/8}\,m_{p}^{7/4}\,M_{pl}^{9/4 }/(\alpha^{15/2}\,m_{\epsilon}^{3})\)[3]. In the limit that \(k_{1}\ll 1\) and neglecting the dependence on planetary mass of \(D_{N}\), this gives
\[\hat{f}_{N}=7.6\times 10^{-7}\,\frac{\kappa\,\lambda^{25/12}\gamma^{1/2}}{ \alpha^{9}\,\beta^{2}} \tag{28}\]
Interestingly, the dependence on stellar mass of this quantity is practically indistinguishable from that of the alternate nitrogen source, Eqn. (22).
### Which Atmospheric Thresholds Are Important for Habitability?
Earth's atmosphere is quite comfortably above any catastrophic thresholds, being about two orders of magnitude larger than needed to prevent total atmospheric escape, maintain liquid surface water, and buffer diurnal temperature changes. However, given the exponential dependence on constants of some of these conditions, we investigate the influence each exerts on our multiverse calculations.
#### 3.2.1 How Much Atmospheric Loss Occurs in Other Universes?
In this paper, we restrict our attention to terrestrial planets, which are defined such that light gases such as hydrogen and helium, but not heavy gases such as water, oxygen and nitrogen, undergo Jeans escape. For these planets, the dominant form of atmospheric escape is driven by stellar XUV light, and is in the energy limited regime (for recent reviews, see [61; 62]). The mass loss rate for this type of escape is given by equating the energy of UV light absorbed by the atmosphere with the energy of atmospheric particles ejected at the escape speed [63],
\[\dot{M}_{\rm XUV}=\epsilon\,\frac{R_{\oplus}^{3}\,L_{\rm X}}{a_{\rm temp}^{2 }\,G\,M_{\oplus}} \tag{29}\]
Here, \(\epsilon\) is an unimportant efficiency factor. This is independent of atmospheric mass, being limited by the amount of energy imparted in the upper atmosphere rather than the amount of material present. In [64] it was estimated that an XUV flux greater than 60 times Earth's value would be needed to induce a catastrophic mass loss rate of \(1.8\times 10^{9}\) g/s, capable of eroding the entire atmosphere. For reference, M dwarfs and young K dwarfs are subjected to 100-400 times Earth's XUV flux [85].
The total atmospheric mass loss through X-ray flux may be found through Equation (18), taking rotation evolution into account:
\[\Delta M_{\rm XUV}\sim\frac{R_{\oplus}^{3}}{a_{\rm temp}^{2}\,G\,M_{\oplus} }\,\frac{B_{\star}^{2}\,R_{\star}^{3}}{8}\int_{0}^{t_{\star}}dt\,\min(\Omega_{ \rm conv},\Omega(t)) \tag{30}\]
Using the evolution dictated by Equation (20) and in the limit \(t_{\star}\gg t_{\rm brake}\), this integral can be performed to find
\[\Delta M_{\rm XUV}\sim\frac{B_{\star}^{2}\,R_{\star}^{3}}{a_{\rm temp}^{2}\,G \,\rho_{\rm rock}}\,\min(\Omega_{\rm conv},\Omega_{0})\,\sqrt{t_{\star}\,t_{ \rm brake}} \tag{31}\]
The condition \(t_{\star}\gg t_{\rm brake}\), which holds by three orders of magnitude in our universe, is not necessarily generic; we compute \(t_{\rm brake}/t_{\star}=0.0019\lambda^{5/4}\alpha^{1/2}/f_{\rm open}\), which can be much larger than 1 if no stellar magnetic field lines are open for certain parameters. However, including a more complete expression does not affect the calculated probabilities appreciably, while
considerably complicating the formulae. In Figure 1, we display the atmospheric mass loss for temperate, terrestrial planets as a function of stellar mass, for three different values of the fine structure constant. The difference resulting from adopting the two alternate origin scenarios is also displayed, but is seen to be minimal. This defines some stellar mass below which more than the initial atmosphere is lost through XUV irradiation, which depends on fundamental constants, and can be larger than the solar mass (\(\lambda=1/1.8\)) in some regions of parameter space.
#### 3.2.2 How Much Atmosphere Is Needed to Maintain Liquid Surface Water?
Liquid surface water can exist only when atmospheric pressure exceeds that at the triple point, where the three low energy phases of water coexist in equilibrium. The location of the triple point can be determined by noting that the solid-liquid transition is almost independent of pressure, and occurs at temperature set by the vibrational molecular energy \(T_{\rm freeze}\sim\alpha^{1/2}/(m^{1/2}r_{H_{2}O}^{3/2})\). The liquid-gas transition is given by the Clausius-Clapeyron equation as \(P(T)=P_{0}e^{-L/T}\), where the latent heat of evaporation is \(L\sim\alpha r_{H_{2}O}\). The coefficient \(P_{0}\) can be found by enforcing that the phase curve terminates at the observed critical point of water of 647 K and 22.1 MPa. Though an imperfect description, the van der Waals equation of state may be used to provide a theoretical expectation for the location of the critical point in terms of the molecular radius and energy \(\epsilon\), yielding \(T_{\rm crit}=8/27\,\epsilon\) and \(P_{\rm crit}=\epsilon/(18\pi r^{3})\)[65]. Normalizing to fit our observed values, this yields the pressure at the triple point to be
\[P_{\rm triple}=\frac{\epsilon}{18\pi\,r^{3}}\,e^{\frac{27L}{8\epsilon}}=1.6 \times 10^{-3}\,\alpha^{5}\,m_{e}^{4}\,e^{-424/\sqrt{\beta}} \tag{32}\]
This can then be related to minimal atmospheric mass capable of supporting liquid water through \(M_{\rm min}=4\pi R_{\oplus}^{2}P_{\rm triple}/g\), giving
\[M_{\rm min}=0.87\,\frac{\alpha^{3/2}\,m_{e}^{1/4}\,M_{pl}^{3}}{m_{p}^{9/4}}\,e ^{-424/\sqrt{\beta}} \tag{33}\]
This is about \(0.006M_{\rm atm}\) for our values.
#### 3.2.3 How Much Atmosphere Is Needed to Buffer Diurnal Temperature Changes?
Earth's atmosphere retains substantial heat, which buffets the day-night temperature difference from the otherwise extreme variations that would occur, such as the day-night
Figure 1: Fraction of atmosphere lost as a function of stellar mass. The dependence of this quantity on the fine structure constant \(\alpha\) can be observed. The solid lines assume a late atmospheric delivery scenario, and the dashed lines assume atmosphere originates in original accretion. The dotted lines correspond to the catastrophic value 1, and the Sun’s value.
temperature differences on the Moon and Mars which can reach hundreds of degrees Kelvin. This occurs because the relaxation time of Earth's atmosphere, estimated as the ratio of thermal energy over the power supplied, \(t_{\rm relax}\sim E_{\rm therm}/Q_{\rm solar}\), is about 100 days. This gives
\[t_{\rm relax}\sim\frac{T\,M_{\rm atm}\,a_{\rm temp}^{2}}{m_{p}\,L_{\star}\,R_ {\oplus}^{2}} \tag{34}\]
For small enough atmospheric mass, this is less than half a day, and the atmosphere does not play a significant role in averaging out daily variations of stellar flux. This occurs at the threshold
\[M_{\rm min}=1.9\,\frac{\pi^{7/2}\,m_{c}^{3/2}\,M_{pl}^{3}}{m_{p}^{7/2}} \tag{35}\]
The exact mass depends strongly on water content, as evidenced by the extreme temperature differences present in Earth deserts, but we do not consider this here.
#### 3.2.4 Are Only Slowly Rotating Stars Habitable?
There is evidence from noble gas isotopes [66], the Moon [67], and Venus [68] that the Sun began as an anomalously slow rotator. However, it is not currently possible to determine precisely how slow, and many studies only differentiate between stars in the lower 25 percentile. If true, this suggests a selection effect: ordinarily rotating stars may be incapable of hosting life, presumably due to high early atmospheric loss.
To determine the compatibility of this habitability hypothesis with the multiverse, we follow the fraction of slowly rotating stars
\[f_{\rm slow}=\min\biggl{(}\frac{M_{\rm atm}}{\Delta M_{\rm XUV}},1\biggr{)} \tag{36}\]
This treats the initial rotation distribution as uniform up to the natural value \(\Omega_{0}\), which is loosely consistent with observations of stellar populations [69]. To account for the observation that the Sun appears to be in the lower 25 percentile, we rescale the fraction of slow rotators (of Sun-like stars) in our universe to be \(1/4\).
### Is Atmospheric Stability a Factor Determining Our Presence in This Universe?
We can now test the various atmospheric habitability thresholds, on the basis of their compatibility with our observations within the multiverse. To this end, we test the following four thresholds: loss due to XUV radiation, the minimal mass for stable liquid surface water, the minimal mass to buffer diurnal temperature changes, and the notion that only stars which are slowly rotating are habitable. In addition, we check both the early and late origin scenarios for our atmosphere, both an independent and linearly dependent abundance as a function of stellar nitrogen content, and either restricting to a narrow range of Earth-like carbon-to-oxygen values, or not. In Table 1, we display the various Bayes factors for each of these combinations.
We find that when restricting consideration to a narrow range of carbon-to-oxygen ratios, the Bayes factors for the various habitability criteria do not vary significantly. When considering the carbon-to-oxygen ratio to not play a factor in habitability, however, several of the habitability criteria are severely disfavored in the multiverse context. The disfavored criteria all have to do with the assumption that planetary nitrogen content scales linearly with stellar system nitrogen abundance, and does not depend on the atmospheric source or threshold mass. This is a consequence of our universe being situated very close to a precipitous threshold where nitrogen-14 is unstable [6], which affects the probabilities if the carbon-to-oxygen ratio is unimportant but does not if restricted to the subspace where the carbon-to-oxygen ratio is close
to our observed value. We note that in [6] we found additional reasons to favor a restricted range of carbon-to-oxygen ratio based on the observed Hoyle energy value and organic to rock ratio in our universe. Apart from this insight, no strong preference can be given to the different atmospheric origin scenarios, threshold masses, or expectation on whether only slow rotators are habitable. Our conclusion is that it is certainly consistent that atmospheric mass may play a large role in the habitability of our universe, but it does not appear to be a driving factor in determining our particular observations.
## 4 Are Planetary Magnetic Fields Generic?
A planet's magnetic field is purported to be essential for habitability, as it shields against charged particles, preventing stellar wind stripping (see, e.g., [70]). However, it must be pointed out that magnetic fields also provide several avenues for ion escape [71], which may well represent the dominant form of atmospheric loss on Earth today [72]. Indeed, Venus has managed to retain its atmosphere without an intrinsic (as opposed to induced by the Sun's) magnetic field, despite being closer to the Sun than Earth.
Given the uncertain importance of planetary magnetic fields for habitability, we ask whether their properties change significantly in other universes, and thus whether demanding their presence influences the probabilities of any of our observables. We focus on five relevant aspects required for a magnetic field to be both present and protective: (i) The core's magnetic Reynolds number is large enough to support a dynamo. (ii) The magnetosphere must extend beyond the atmosphere, as otherwise it will have little effect on loss properties. (iii) The star's temperate zone must be outside its Alfven zone, as otherwise the planetary and stellar magnetic field lines connect, forming a direct line of transport which dumps stellar wind onto the planet's poles, rather than act as a shield. (iv) The development of a magnetic field requires a metal core, placing limits on the oxygen content of the planet. (v) The magnetic field is generated through a dynamo, and so requires the core to remain at least partly liquid for an appreciable duration. If planetary magnetic fields are essential for habitability, all of these conditions must be met.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\mathbf{H}\) & **Late Delivery** & **Late (N Dep)** & **Initial** & **Initial (N Dep)** \\ \hline Earth-like C/O & & & & \\ \hline \(M_{\mathrm{atm}}>\Delta M_{\mathrm{XUV}}\) & 0.87 & 0.66 & 0.94 & 0.71 \\ \(M_{\mathrm{atm}}>M_{\mathrm{triple}}\) & 0.82 & 0.63 & 0.89 & 0.67 \\ \(M_{\mathrm{atm}}>M_{\mathrm{d}imal}\) & 1.0 & 0.75 & 1.0 & 0.75 \\ slow rotator & 0.32 & 0.25 & 0.40 & 0.32 \\ \hline Unrestricted C/O & & & & \\ \hline \(M_{\mathrm{atm}}>\Delta M_{\mathrm{XUV}}\) & 2.05 & 0.0041 & 2.34 & 0.0038 \\ \(M_{\mathrm{atm}}>M_{\mathrm{triple}}\) & 1.09 & 0.0057 & 1.84 & 0.0032 \\ \(M_{\mathrm{atm}}>M_{\mathrm{d}imal}\) & 1.98 & 0.0074 & 3.08 & 0.068 \\ slow rotator & 1.14 & 0.00067 & 2.62 & 0.00082 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Bayes factors for various atmospheric habitability criteria relative to the baseline case where atmosphere mass is unimportant for habitability. Small values indicate that a set of assumptions is disfavored to a corresponding degree in the multiverse framework. The cases considered are that atmosphere must be large enough to withstand XUV loss, be above the triple point of water, can buffer diurnal temperature changes, and that only slowly rotating stars are habitable. The late delivery vs. initial columns consider both potential origins of the atmosphere, and the N dep columns consider that planetary nitrogen abundance scales linearly with stellar system abundance. The top rows restrict to Earth-like values of C/O ratio, and the bottom do not.
### When Is the Magnetic Reynolds Number Large Enough to Induce a Dynamo?
Both theory and simulations of the Earth's core indicate that a dynamo will only exist when advection of the magnetic field dominates over diffusion [73]. This can be summarized as a condition on the magnetic Reynolds number \(R_{a}=v_{\rm core}\ L/\eta>\) 10-100 (Earth's magnetic Reynolds number is about \(10^{3}\)) [74]. This condition can be used to place constraints on the fundamental constants, using the length scale \(L\sim R_{\rm core}\), and magnetic diffusivity \(\eta=1/(4\pi\sigma_{\rm electric})\) with \(\sigma_{\rm electric}\) the electrical conductivity, which is related to thermal conductivity through the Wiedemann-Franz law [75]:
\[\frac{\delta_{\rm heat}}{\sigma_{\rm electric}}=\frac{\pi^{2}}{3}\frac{T}{e^{ 2}} \tag{37}\]
In [4], we found an expression for the thermal diffusivity in terms of fundamental constants as \(\kappa_{\rm heat}=2/(m_{e}^{1/4}m_{p}^{3/4})\), which is related to thermal conductivity through \(\kappa_{\rm heat}=\delta_{\rm heat}/(c_{p}\rho_{\rm rock})\). The core convective speed can be obtained from mixing length theory, \(v_{\rm core}\sim(Lq/(\rho_{\rm rock}H_{T}))^{1/3}\)[76]. Using our expression for heat flux \(q=0.58\pi^{11/2}m_{e}^{5}/M_{pl}\) from [4] and the generic expression for scale height \(H_{T}\sim c_{s}^{2}/g\), we find
\[R_{a}=0.33\,\frac{\alpha^{7/3}\,\beta^{7/6}}{\gamma^{2/3}} \tag{38}\]
These scalings are not significantly altered if we instead use the magnetostrophic estimate for the core convection velocity, also from [76].
### Is the Magnetosphere Always Larger Than the Atmosphere?
In order for a planetary magnetic field to be an effective shield against stellar wind, it must extend beyond the atmosphere. The size of the magnetosphere can be estimated as the point at which the magnetic pressure is equal to the stellar wind pressure, yielding for a dipole field [77] the standoff distance:
\[r_{\rm magnetosphere}=\left(\frac{2\,B_{0}^{2}}{\rho_{\rm sw}\,v_{\rm sw}^{2} }\right)^{1/6} \tag{39}\]
To evaluate this, we use the expressions for density and speed of solar wind from Section 2. It remains to estimate \(B_{0}\), the strength of the magnetic field at the planet's surface.
There are an inordinate number of proposals for how planetary magnetic field strength depends on planetary characteristics, as reviewed in [78]. We use the Elasser number rule, which posits that the Lorentz force and Coriolis force are roughly equal, and results in
\[B_{\rm core}=\sqrt{\frac{2\,\rho_{\rm rock}\,\Omega}{\sigma_{\rm electric}}} \tag{40}\]
Here \(\Omega\) is the planet's angular rotation speed.
For terrestrial planets, the atmospheric scale height is much smaller than planetary radius, and so it suffices to compare the magnetosphere size to the latter. Using our expressions above, and defining \(\mathcal{Y}=(R_{\rm core}/R_{\rm planet})^{3}\), we find this to be
\[\frac{r_{\rm magnetosphere}}{R_{\rm planet}}=3.1\,\frac{\lambda^{23/0}\,\gamma^{1 /6}}{\epsilon^{13/12}\,\beta^{11/24}}\,\mathcal{Y}^{1/6} \tag{41}\]
This ratio evaluates to 10 for our values of the constants and Earth's core radius. The dependence on fundamental constants is rather weak, and so it takes a drastic change to alter the conclusion that the magnetosphere extends beyond the atmosphere.
### When Is the Temperate Zone Outside the Alfven Zone?
If a planet is orbiting inside its host star's Alfven zone, its intrinsic magnetic field lines will connect to the star's, which will result in a highly increased level of bombardment by charged particles. This is expected to be the case for the inner planets in the Trappist-1 system, for instance [79] from simulations. Given our expressions for both the temperate zone and Alfven radius from Section 2, it is straightforward to derive their ratio:
\[\frac{a_{\rm temp}}{R_{A}}=7.1\times 10^{-4}\,\frac{\lambda^{3/8}\,\gamma^{1/2} }{f_{\rm open}\,\alpha^{11/4}\,\beta^{2}} \tag{42}\]
For fixed physical constants, this defines a smallest stellar mass for which this condition holds. In our universe this is about \(.1\,M_{\odot}\), in accordance with the expectation that Proxima Centauri b, which orbits a \(0.12\,M_{\odot}\) star at \(0.05\) AU, is outside the Alfven zone for the most likely values inferred for its orbital parameters [80]. Our treatment ignores the nonsphericity and nonstationarity of the Alfven zone and potential planetary eccentricity, which may cause the orbit to periodically dip into the Alfven zone throughout the year.
Note that the dependence on starspot fraction is of crucial importance in this expression, as otherwise this threshold stellar mass would be smaller than the smallest stellar mass. As such, this condition is loosely coincident with the onset of a full stellar convection zone.
### When Does a Core Stratify Geochemically?
In [81], it was pointed out that if a planet's mantle oxygen content is too high, the iron will all be in the form of iron oxide (FeO), and will not differentiate to form a core. They find that the quantity \(R_{1}=(\rm Mg+2Si+Fe)/O\) must exceed 1 in order for a core to develop, Earth's value of this ratio being 1.23. Interestingly, about 4% of the Earth's oxygen is left over after binding with magnesium and silicon, so that only 86% of Earth's iron makes it into the core. This raises the additional possibility that if a planet's oxygen is depleted before its magnesium and silicon are consumed, no iron will be left in the mantle or crust. This could have an additional adverse effect on habitability, which would restrict the allowable oxygen content required for habitability to a narrow range, but we leave exploration of this for future work. It was argued in [82] that planets with core mass fraction below \(\sim\)0.24 would have much higher rates of volatile subduction, due to more extensive volcanism, thicker crust, and stabilized amphibole group. This places a potential lower limit to the allowable core mass for habitability.
Though the core development condition depends on the ratio \(R_{1}\) above, this depends on the abundances of both the alpha elements and iron, which are set by two different supernova processes, and so will scale differently with fundamental constants. In [6], we found the dependence of the alpha element abundances (C, O, Mg, and Si) on the Hoyle resonance energy \(E_{R}=.626(m_{u}+m_{d})+(0.58\alpha-0.0042)m_{p}\), with \(m_{u}\), \(m_{d}\) the masses of the up and down quarks, as found in [18]. We also found an expression for the metal to rock ratio, from which we may determine the quantity
\[R_{2}=\frac{\rm Fe}{\rm Mg+Si+O}=5.0\times 10^{-3}\,\frac{\beta^{82}\,\gamma^{54} }{\kappa^{81}\,\alpha^{56}} \tag{43}\]
For Earth, this value is \(R_{2}=0.163\). This assumes a linear relationship between stellar and planetary metal to rock ratios, which indeed is found [83].
In Figure 2, we plot the oxygenation ratio for various values of the metal ratio \(R_{2}\). It can be seen that with \(R_{2}\) held fixed at the observed value, the oxygenation ratio is less than
1 for \(\Delta E_{R}>3.6\) keV. While we are rather close to a potential anthropic boundary with metal fraction held fixed, allowing it to vary relaxes this closeness. In fact, there is a silicon and magnesium rich region of parameter space for larger values of \(\Delta E_{R}\) which also satisfy the \(R_{1}>1\) requirement. Above a metal fraction of 0.62, these two branches merge, and planets will always contain enough iron to form a core. As discussed in [6], such metal rich planets may be unsuitable for life for reasons other than the possession of a magnetic field, but we found that placing an upper bound on the metal content does not appreciably affect the probabilities we compute, and we do not concern ourselves with such a boundary here.
### What Sets the Core Solidification Timescale?
The presence of a planetary dynamo requires a liquid convective core, which cannot be sustained indefinitely. As heat leaks from the planet, an initially liquid core will cool and solidify. If the solidification timescale is too rapid, any magnetic field will cease before life can take hold on a planet, and so one important consideration is the longevity of a liquid core.
First, we must establish that terrestrial planets possess enough heat for their cores to initially be liquid. This follows almost from our definition of a terrestrial planet, which demands that the gravitational binding energy is of the same order of magnitude as molecular binding energies, so that chemical reactions may take place on the planet's surface. Given the increased temperature and pressure of the planetary interior, the melting point will naturally be exceeded in the core.
The solidification timescale can be simply estimated as \(t_{\rm solid}\sim E_{\rm core}/Q_{\rm heat}\), where \(E_{\rm core}\) is the energy required to be leached from the core for solidification to take place, and \(Q_{\rm heat}\) is the total core power. A proper estimate of \(E_{\rm core}\) would take into account the difference between the gravitational binding energy and the energy that would result in solidification; thankfully, however, these two energies are similar in magnitude, another consequence of restricting our attention to terrestrial planets. So, we may approximate the total energy in the core as \(E_{\rm core}\sim GM_{\rm core}^{2}/R_{\rm core}\). By the same token, \(Q_{\rm heat}\) has components due to formation and crystallization, which are roughly equal. In [4], we found that \(Q_{\rm heat}\sim GM_{\rm planet}\rho_{\rm rock}\kappa_{\rm heat}\) based on dimensional analysis. There, we also consider radiogenic heat and time dependence in more detail, which we neglect here. This may indeed be important; as discussed in [84], too much radioactive heating can prevent core convection. However, we do not consider this in detail here.
Figure 2: Oxygenation ratio \(R_{1}\) for different metal ratios \(R_{2}\). The quantity \(\Delta E_{R}=0\) for our values of the constants. Planetary cores form only when this quantity exceeds 1 (0 on our log scale).
With this, the core solidification timescale is very simple:
\[t_{\rm solid}\sim\frac{A_{\rm planet}}{\kappa_{\rm heat}}=5.7\times 10^{-3}\, \frac{M_{pl}^{2}}{\alpha\,m_{e}^{5/4}\,m_{p}^{7/4}} \tag{44}\]
If a long-lived liquid outer core is necessary for habitability, this timescale must be larger than some timescale typical for the development of complex life, which we take here to be proportional to the stellar lifetime (see [2] for an exploration of different choices on this matter). We normalize this time to the expectation that the outer core will remain liquid for another 700 Myr from [85].
An alternative view is that a solid inner core is actually necessary for the sustenance of a magnetic field, in spite of geologic evidence to the contrary (see [86] for zircon evidence of a magnetic field at 3-4 Ga). The inner core may have developed as late as 565 Mya, based on magnetic evidence from Ediacaran rocks that record an anomalously low field strength, taken to signal a rearrangement in field configuration indicative of a recently established solid inner core [87]. This apparent incompatibility is reconciled if another mechanism generated the magnetic field before core solidification, as for example a long lived liquid mantle ocean [88]). In this case, the above timescale would need to be comparable to the evolutionary timescale, rather than simply longer than it.
### Is a Planetary Magnetic Field Necessary for Habitability?
To treat intrinsic planetary magnetic fields as essential for habitability, we include the product of all five factors into the habitability condition as
\[\mathbb{H}_{\rm B}=\theta(R_{a}-100)\,\theta\Big{(}r_{\rm B}-R_{\rm planet} \Big{)}\,\theta\big{(}a_{\rm temp}-R_{\rm Alfven}\big{)}\,\theta(R_{1}-1)\, \theta(t_{\rm solid}-t_{\star}) \tag{45}\]
If we incorporate this into our calculation, we find that the Bayes factor relative to the base case where magnetic fields are not taken to be important is \(\mathcal{B}=1.52\). We also probe the relative importance of each of these subconditions in Table 2 by first only incorporating each condition in isolation, and then incorporating the four others without each condition, into the calculation. Of the five factors considered, the magnetic Reynolds number, magnetosphere radius, and Alfven zone conditions do not perceptibly alter the probabilities. The core existence condition slightly decreases the probabilities, while the core timescale condition slightly increases them. So, the notion that a magnetic field is necessary for habitability is compatible with the multiverse, and although it is even slightly preferred to the base case, the difference is not statistically meaningful enough to draw the conclusion that the converse hypothesis is disfavored.
We also remark that the base case here took the carbon-to-oxygen ratio to be important for habitability. If instead we drop this assumption, we find the Bayes factor is \(\mathcal{B}=0.050\). The driving factor in making this so low is the core solidification timescale, as can be seen in Table 2. Therefore, we find that the assumption that planetary magnetic fields are important is only compatible with the multiverse if carbon-to-oxygen ratio is also important. This echoes the results of Section 3 and [6], where we found that restricting the carbon-to-oxygen ratio was important for compatibility with the multiverse on other accounts. This also suggests a test of the multiverse hypothesis, for if we find that complex life occurs only on magnetized planets but independently of carbon-to-oxygen ratio, our presence in this universe would be quite unlikely.
## 5 Discussion
Though few agree on exactly what conditions are required for habitability, it surely depends on the confluence of a great many factors. Likewise, our notions of habitability
strongly affect the expectation for the distribution of life, both throughout our universe, and in others. Because of this, very fine-grained effects have the potential to radically alter our estimations of the probability of our existence in this particular universe, and our observations in general. This places us in a scenario where the importance of all discussed habitability factors must be tested before we may make any statements about multiverse probabilities with a relatively high degree of certainty. The stellar activity and atmospheric aspect of this program was undertaken in this paper.
Uncertainties abound: the physics dictating the corona, stellar wind and flares, the relative importance of different atmospheric erosion rates, the ultimate source of Earth's atmosphere, the importance of planetary magnetic fields, and the distribution of all these quantities across different stellar systems are only now coming to light. While we have tried to hedge our ignorance in as many aspects as possible by contemplating competing accounts of these effects, we have necessarily restricted our attention in certain cases, and completely neglected other potentially important effects. Thus, while our work cannot claim to be a definitive exploration of stellar activity and atmospheric effects in other universes, it does represent an important first step.
Perhaps the biggest takeaway of our findings is that, if one believes that a relatively narrow carbon-to-oxygen ratio is required for complex life (as may be argued by the vastly different tectonic regimes that occur outside the interval (0.5,1)), any atmospheric habitability condition we considered had no significant bearing on multiverse probabilities. In this light, all that can be said is that atmospheric presence and stability does not appear to be a major determining factor for why we are in this universe. This is plausible, since the Earth's atmosphere is about two orders of magnitude larger than any threshold we are aware of, but many effects we consider depend exponentially on fundamental constants, so this conclusion is by no means automatic.
On the other hand, if we entertain the possibility that a carbon-to-oxygen ratio relatively close to ours is not required for habitability, altogether different conclusions are drawn. We are forced to conclude, under this assumption, that planetary magnetic fields cannot be important for life, because it renders many of the otherwise less likely regions of parameter space infertile, making us outliers. Additionally, when treating the carbon-to-oxygen ratio as unimportant, we find planetary atmospheric nitrogen must not scale with stellar system nitrogen abundance, or our presence in this universe is unlikely, independent of uncertainties about atmospheric source and lower atmospheric mass threshold.
Both of these findings also suggest potential methods for testing the multiverse hypothesis, if the true habitability conditions turn out to be incompatible with these expectations. So, if
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\mathbf{H}\) & \(R_{a}\) & \(r_{B}>R_{\text{planet}}\) & \(a_{\text{temp}}>R_{A}\) &
\begin{tabular}{c} **Mg + 2Si + Fe** \\ **> O** \\ \end{tabular} & \(t_{\text{solid}}>t_{\star}\) \\ \hline Earth-like C/O & & & & & \\ \hline with only & 1.0 & 1.0 & 1.0 & 0.658 & 1.19 \\ without only & 1.52 & 1.52 & 1.44 & 1.25 & 0.68 \\ \hline Unrestricted C/O & & & & & \\ \hline with only & 0.24 & 1.98 & 1.99 & 2.12 & 0.0046 \\ without only & 0.050 & 0.050 & 0.046 & 0.0043 & 1.73 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study for planetary magnetic field criteria. This table displays the Bayes factors relative to the baseline case where planetary magnetic fields are not important. The ‘with only’ rows only incorporate the condition in the given column, and the ‘without only’ rows incorporate every condition _except_ the condition in the given column into the probability calculation.
we find that an Earth-like C/O is not needed for complex life and either that atmosphere mass scales with stellar nitrogen or that planetary magnetic fields are required for life, the predictions the multiverse framework has made will be found to be incorrect. While these tests may be rather far off, the salient point is that they are possible in principle. Various biosignatures have already been proposed to help determine the distribution of life throughout the universe, for several places inside and out of our solar system, including searching for relic biomarker compounds on Mars [89], abundance ratios of organic compounds on icy moons such as Enceladus [90; 91], chemical disequilibrium and even microbial absorption in Venus's atmosphere [92], and atmospheric gases such as oxygen around exoplanets [93]. In fact, it is conceivable that the next few generations of experiments will be able to measure biosignatures on exoplanet populations large enough to distinguish trends with respect to system parameters such as composition [94], that the presence of planetary magnetic fields can be measured through auroral emissions [95], and that the relation between planetary atmospheric size and stellar composition can be determined [96].
Conceptualization, all authors; Methodology, M.S.; Formal Analysis, M.S.; Validation, V.A., L.B. and G.F.L.; Writing--Original Draft Preparation, M.S.; Writing--Review & Editing, V.A., L.B. and G.F.L. All authors have read and agreed to the published version of the manuscript. This research received no external funding.
All code to generate data and analysis is located at [https://github.com/mccsandora/Multiverse-Habitability-Handler](https://github.com/mccsandora/Multiverse-Habitability-Handler), accessed on Dec. 20, 2022. We would like to thank Daman Grewal for useful comments. The authors declare no conflict of interest.
|
2307.07850 | Scope of the action principle | Laws of motion given in terms of differential equations can not always be
derived from an action principle, at least not without introducing auxiliary
variables. By allowing auxiliary variables, e.g. in the form of Lagrange
multipliers, an action is immediately obtained. Here, we consider some ways how
this can be done, drawing examples from the literature, and apply this to
Bohmian mechanics. We also discuss the possible metaphysical status of these
auxiliary variables. A particularly interesting approach brings the theory in
the form of a gauge theory, with the auxiliary variables as gauge degrees of
freedom. | Ward Struyve | 2023-07-15T17:01:26Z | http://arxiv.org/abs/2307.07850v2 | # Scope of the action principle
###### Abstract
Laws of motion given in terms of differential equations can not always be derived from an action principle, at least not without introducing auxiliary variables. By allowing auxiliary variables, e.g. in the form of Lagrange multipliers, an action is immediately obtained. Here, we consider some ways how this can be done, drawing examples from the literature, and apply this to Bohmian mechanics. We also discuss the possible metaphysical status of these auxiliary variables. A particularly interesting approach brings the theory in the form of a gauge theory, with the auxiliary variables as gauge degrees of freedom.
## 1 Formulating an action
Laws of physics are often presented in the form of an action principle where the physically allowed histories correspond to the extrema of an action \(S=\int dtL\), given by a Lagrangian \(L\) integrated over time Lanczos (1970), Yourgrau and Mandelstam (1960). The advantages of such an action principle are manifold. First, it allows for a compact formulation of those laws. Second, Lagrangians that are local in time admit a Hamiltonian formulation Dirac (1964), Gitman and Tyutin (1990), Hanson, Regge, and Teitelboim (1976), Henneaux and Teitelboim (1992), Sundermeyer (1982). Third, Noether's (first) theorem establishes a connection between continuous symmetries of the action and conserved currents (Brown and Holland 2004a).
Here, we consider the question to what extent it is possible to formulate such an action principle. More specifically, given laws of physics in the form of a set of differential equations, local in time, can these be derived as the Euler-Lagrange equations of some Lagrangian? Consider for example the discrete case, where the dynamics concerns a configuration \(q=(q_{1},\ldots,q_{N})\in\mathbb{R}^{N}\) given by differential equations1
Footnote 1: Higher time derivatives could be considered, but we will not do so for notational simplicity.
\[f_{i}(q,\dot{q},\ddot{q},t)=0,\qquad i=1\ldots M. \tag{1}\]
Can these be derived from a Lagrangian \(L(q,\dot{q},t)\)?2 This is known as the inverse problem of Lagrangian mechanics, see e.g. Santilli (1978), and the answer is that it depends on the
particular form of the equations (1). Some can be derived from a Lagrangian, others not, and there is extensive literature on this. However, a crucial assumption in the inverse problem is that the Lagrangian depends only on \(t,q\) and \(\dot{q}\). Admitting auxiliary variables trivializes the problem. Namely, introducing Lagrange multipliers \(\lambda_{i}\), \(i=1,\ldots,M\), the equations of motion (1) can immediately be derived from the Lagrangian
\[L_{1}=\sum_{i}\lambda_{i}f_{i} \tag{2}\]
as constraint equations, by varying with respect to the \(\lambda_{i}\). But in addition to these equations, variation with respect to \(q\) yields
\[\sum_{i}\left[\frac{d^{2}}{dt^{2}}\left(\lambda_{i}\frac{\partial f_{i}}{ \partial\ddot{q}_{k}}\right)-\frac{d}{dt}\left(\lambda_{i}\frac{\partial f_{i} }{\partial\dot{q}_{k}}\right)+\lambda_{i}\frac{\partial f_{i}}{\partial q_{k} }\right]=0,\qquad k=1,\ldots,N. \tag{3}\]
The equations (1) do not depend on the \(\lambda_{i}\) and can be solved independently. Given a solution \(q(t)\), the equations (3) then form differential equations for the \(\lambda_{i}\). An early reference proposing this general method is Bateman's (1931).
Other actions exist which significantly simplify the equations of motion for the auxiliary variables, even to the point where their dynamics is completely arbitrary, i.e., unconstrained by the laws of motion. Consider for example the Lagrangian
\[L_{2}=\frac{1}{2}\sum_{i}\lambda_{i}f_{i}^{2}. \tag{4}\]
The corresponding Euler-Lagrange equations are
\[f_{i}^{2}=0, \tag{5}\]
\[\sum_{i}\left[\frac{d^{2}}{dt^{2}}\left(\lambda_{i}f_{i}\frac{\partial f_{i}} {\partial\ddot{q}_{k}}\right)-\frac{d}{dt}\left(\lambda_{i}f_{i}\frac{\partial f _{i}}{\partial\dot{q}_{k}}\right)+\lambda_{i}f_{i}\frac{\partial f_{i}}{ \partial q_{k}}\right]=0. \tag{6}\]
Since the first equation implies \(f_{i}=0\), the second equation is automatically satisfied. So in this case, the desired equations of motion are obtained and the auxiliary variables \(\lambda_{i}\) are unconstrained.
Still different Lagrangians could be considered. For example, in the case of the Lagrangian
\[L_{3}=\frac{1}{2}\lambda\sum_{i}f_{i}^{2}, \tag{7}\]
there is only one auxiliary variable, instead of \(M\) as in the case of \(L_{2}\), which is again unconstrained by the equations of motion. The same is true for
\[L_{4}=\frac{1}{2}\mathrm{e}^{\lambda}\sum_{i}f_{i}^{2}, \tag{8}\]
with the difference that now the extrema of the action are necessarily minima. Namely for any history \((q(t),\lambda(t))\), the action is non-negative, but for extrema the action is zero.
Another example is
\[L_{5}=\sum_{i}\left(\lambda_{1i}f_{i}+\frac{\lambda_{1i}^{2}}{2}\lambda_{2i} \right), \tag{9}\]
which involves auxiliary variables \(\lambda_{1i}\) and \(\lambda_{2i}\), \(i=1,\ldots,M\). The Euler-Lagrange equations are
\[\sum_{i}\left[\frac{d^{2}}{dt^{2}}\left(\lambda_{1i}\frac{\partial f_{i}}{ \partial\ddot{q}_{k}}\right)-\frac{d}{dt}\left(\lambda_{1i}\frac{\partial f_{i }}{\partial\dot{q}_{k}}\right)+\lambda_{1i}\frac{\partial f_{i}}{\partial q_{k }}\right]=0, \tag{10}\]
\[f_{i}+\lambda_{1i}\lambda_{2i}=0, \tag{11}\]
\[\lambda_{1i}^{2}=0. \tag{12}\]
Because of the last equation, these equations reduce to
\[f_{i}=0,\qquad\lambda_{1i}=0. \tag{13}\]
So the variables \(\lambda_{1i}\) vanish, while the variables \(\lambda_{2i}\) are completely free. Yet another possibility is
\[L_{6}=\sum_{i}\left(\lambda_{1i}f_{i}+\frac{\lambda_{1i}\lambda_{1j}}{2} \lambda_{2ij}\right), \tag{14}\]
with \(\lambda_{2ij}\) symmetric. These alternatives might be useful in the case one wants the action to be symmetric under certain transformations. For example, in the case of a field theory where the equations of motion have a tensorial character such actions could maintain manifest Lorentz invariance.
So by introducing auxiliary variables, actions can immediately be formulated. While such actions allow for a compact formulation of the laws of motion, it is unclear whether they lead to any technical advantage. Perhaps, using Noether's theorem it is easier to find conservation laws, by identifying continuous symmetries of the action. But this should probably be investigated on a case by case basis.
What is now the metaphysical status of these auxiliary variables? It might be desired that an action should only include dynamical variables which are regarded as physically real. (Penrose expresses such a sentiment, see below.) For example, the variables \(q\) could correspond to the positions of particles. Demanding that the action depends only on those variables (and their time-derivatives) excludes the actions proposed above. However, one could assume the variables \(\lambda\) to be physically real as well. Since the dynamics for the variables \(q\) is unaffected, the empirical content of the theory--insofar as it is derived from the \(q\)'s--is the same. But it also implies that part of the world remains forever hidden. This part can evolve in a quite non-trivial way, as in the case of the Lagrangian \(L_{1}\), or in a trivial way (with vanishing or unconstrained variables) as in the other cases \(L_{2}\)-\(L_{6}\). Interesting examples of the former case, to be discussed in the next section, are the damped harmonic oscillator and the heat equation, where the auxiliary variables correspond to a doubling of the variables, which evolve according to time-reversed laws of motion. Examples of the latter case are gauge theories. Gauge theories contain unphysical variables--the gauge variables--which happen to evolve completely freely and are regarded as mere mathematical artifacts corresponding to different representations
of the same physical reality.3 However, actions for gauge theories like Yang-Mills theories or general relativity (where the gauge freedom stems from the freedom of coordinate choice) are most naturally and simply formulated in terms of such variables. This may seem puzzling. Penrose (2004, 491) writes: "Moreover, the 'Maxwell Lagrangian' does not work as a Lagrangian unless it is expressed in terms of a potential, although the value of the potential \(A_{a}\) is not a directly observable quantity. [...] Lagrangians for fields are undoubtedly extremely useful as mathematical devices, and they enable us to write down large numbers of suggestions for physical theories. But I remain uneasy about relying upon them too strongly in our searches for improved fundamental theories.". Given a dynamics that is derived from a Lagrangian, it has become standard to identify those variables that evolve freely as the gauge variables (Dirac 1964, Gitman and Tyutin 1990, Hanson, Regge, and Teitelboim 1976, Henneaux and Teitelboim 1992, Sundermeyer 1982). As such, the theories described by the Lagrangians \(L_{2}\)-\(L_{6}\) count as gauge theories. So in this sense, given a theory expressed in terms of differential equations, it can always be derived from an action principle by turning it into a gauge theory. The action then contains unphysical degrees of freedom (the gauge variables), just as the actions of Yang-Mills theories and general relativity.
Footnote 3: While the gauge variables are usually considered as mere mathematical artefacts, there have also been arguments to consider them as physically real, for example in relation to the Aharonov-Bohm effect, see Healey (2007) for a detailed discussion.
In the next section, we present examples of actions that use auxiliary variables, often in the form of multipliers. In section 3, we present the Hamiltonian formulation for the Lagrangian \(L_{3}\). In sections 4 and 5, we respectively present an action for Bohmian mechanics, which necessarily involves auxiliary variables, and discuss its Hamiltonian formulation. We conclude in section 6.
## 2 Examples from the literature
The use of multipliers as in (2) is a familiar practice in physics. An early example is that of the damped harmonic oscillator, for which the equation of motion reads
\[\ddot{x}+2k\dot{x}+n^{2}x=0, \tag{15}\]
with \(k\) and \(n\) constants. Bateman (1931) considers the Lagrangian
\[L_{7}=y(\ddot{x}+2k\dot{x}+n^{2}x), \tag{16}\]
where \(y\) is a new variable acting as a Lagrange multiplier. The resulting Euler-Lagrange equations are (15) together with the time-reversed equation for \(y\)
\[\ddot{y}-2k\dot{y}+n^{2}y=0. \tag{17}\]
So the dynamics for \(x\) and \(y\) are decoupled and despite the special role of \(y\) as multiplier in the action, their dynamics is dual under time-reversal. This duality could also be
introduced at the level of the Lagrangian by adding the total time derivative
\[-\frac{d}{dt}(y\dot{x}+kyx) \tag{18}\]
to the Lagrangian \(L_{7}\) (Bateman 1931, Morse and Feshbach 1958, 298), resulting in
\[L_{8}=-\dot{y}\dot{x}+k(y\dot{x}-\dot{y}x)+n^{2}yx. \tag{19}\]
Such an operation does not affect the action and hence neither the equations of motion. The variable \(y\) no longer appears as a multiplier, but on par with the variable \(x\). The role of \(x\) and \(y\) in the Lagrangian can of course be interchanged, so that \(x\) acts as a multiplier:
\[L_{9}=x(\ddot{y}-2k\dot{y}+n^{2}y). \tag{20}\]
This example illustrates that there is a great variety in the role the auxiliary variable can play in the Lagrangian: from Lagrange multiplier to ordinary dynamical variable. (Still other Lagrangians exist which give the equation of motion (15), also ones that are inequivalent, in the sense that they do not differ merely by a total time derivative. Bateman even gives an example of a Lagrangian which does not employ auxiliary variables, but which is explicitly time dependent.) The Lagrangian for the heat equation is of a form similar to (19), with an auxiliary field which satisfies the time-reversed dynamics compared to that of the heat field (Morse and Feshbach 1958, 313). More examples of this type can be found in Ibragimov and Kolsrud (2004). The Lagrangian for the Schrodinger equation is also of a similar form, but with the complex conjugate \(\psi^{*}\) (rather than a new field) in the role of the dual field (Morse and Feshbach 1958, 314).
Another example is the generally covariant action proposed independently by Rosen (1966) and Sorkin (2002):
\[L_{10}=\int d^{3}x\sqrt{-g}\lambda^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}, \tag{21}\]
where \(g_{\mu\nu}\) is the Lorentzian space-time metric, \(R_{\mu\nu\rho\sigma}\) is the Riemann curvature tensor and \(\lambda^{\mu\nu\rho\sigma}\) are multipliers (which satisfy the same symmetries as the curvature tensor). The corresponding equations of motion are
\[R_{\mu\nu\rho\sigma}=0, \tag{22}\]
together with equations of motion for \(\lambda^{\mu\nu\rho\sigma}\). (The equation (22) implies that the metric equals the Minkowski metric, up to space-time diffeomorphisms.) This Lagrangian \(L_{10}\) was used in the debate on the meaning of general covariance in general relativity, see Pitts (2006) for a detailed discussion.
Our main examples, however, are given by gauge theories. As mentioned in the introduction, gauge theories involve variables that evolve completely freely. Consider for example Maxwell's theory for electromagnetism in the absence of charges. The Lagrangian is usually taken to be4
Footnote 4: Throughout the paper units are used so that \(c=\hbar=1\).
\[L_{11}=-\frac{1}{4}\int d^{3}xF^{\mu\nu}F_{\mu\nu}, \tag{23}\]
where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the electromagnetic field tensor and \(A^{\mu}=(\varphi,{\bf A})\) is the vector potential, leading to the Maxwell equations \(\partial_{\mu}F^{\mu\nu}=0\). The theory has a gauge symmetry, given by
\[A^{\mu}\to A^{\mu}+\partial^{\mu}\theta, \tag{24}\]
with \(\theta\) an arbitrary function of space and time. This symmetry maps solutions of the Maxwell equations to solutions. It is often regarded as an unphysical symmetry which merely connects different mathematical representations of the same physical history (see Healey (2007) for dissenting views). Assuming that the field vanishes sufficiently fast at spatial infinity, the Helmholtz decomposition \({\bf A}={\bf A}^{T}+{\bf A}^{L}\) can be applied (Griffiths 1999), where
\[{\bf A}^{T}={\bf A}-\mathbf{\nabla}\frac{1}{\nabla^{2}}\mathbf{\nabla}\cdot{\bf A},\qquad{\bf A}^{L}=\mathbf{\nabla}\frac {1}{\nabla^{2}}\mathbf{\nabla}\cdot{\bf A} \tag{25}\]
are the transverse and longitudinal part of the vector potential (\(\mathbf{\nabla}\cdot{\bf A}^{T}={\bf 0}\) and \(\mathbf{\nabla}\times{\bf A}^{L}={\bf 0}\)) and \(\nabla^{-2}f({\bf x})=-\int d^{3}yf({\bf y})/4\pi|{\bf x}-{\bf y}|\). The action can be written as
\[L_{12}=\frac{1}{2}\int d^{3}x\left[\dot{{\bf A}}^{T}\cdot\dot{{\bf A}}^{T}+{\bf A }^{T}\cdot\nabla^{2}{\bf A}^{T}+\dot{{\bf A}}^{L}\cdot\dot{{\bf A}}^{L}-2 \varphi\mathbf{\nabla}\cdot\dot{{\bf A}}^{L}-\varphi\nabla^{2} \varphi\right]. \tag{26}\]
In terms of these variables, the Maxwell equations are5
Footnote 5: For varying the action, one can express the potential in terms of Fourier modes.
\[\ddot{{\bf A}}^{T}-\nabla^{2}{\bf A}^{T}={\bf 0}, \tag{27}\]
\[\ddot{{\bf A}}^{L}+\mathbf{\nabla}\dot{\varphi}={\bf 0}, \tag{28}\]
\[\mathbf{\nabla}\cdot\dot{{\bf A}}^{L}+\nabla^{2}\varphi=0. \tag{29}\]
The second equation (28) follows from (29) (by applying \(\frac{\partial}{\partial t}\mathbf{\nabla}\nabla^{-2}\)) and is hence redundant. The dynamics of \({\bf A}^{T}\) is decoupled from that of \(\varphi\) and \({\bf A}^{L}\). The dynamics of the latter is such that any function \(\varphi\) or any \({\bf A}^{L}\) is allowed with the only constraint that they are mutually correlated by (29). Put differently, we can have any field \({\bf A}^{L}\) as a solution, provided \(\varphi\) is determined by (29) (which itself does not pose restrictions on \({\bf A}^{L}\)). Conversely, any function \(\varphi\) is allowed with \({\bf A}^{L}\) determined by (29). This freedom in time evolution is also clear from the gauge symmetry (24), since a gauge transformation does not affect \({\bf A}^{T}\) but only \(\varphi\) and \({\bf A}^{L}\). Because the evolution of \(\varphi\) and \({\bf A}^{L}\) is arbitrary and does not affect the evolution of \({\bf A}^{T}\), they are traditionally considered unphysical variables. This arbitrary time evolution was also encountered for auxiliary variables in the Lagrangians \(L_{2}\)-\(L_{6}\) of the previous section, making the corresponding theories gauge theories in the light of the present discussion.
In the present case, the gauge degrees of freedom can easily be dismissed, by considering just the transverse part of the potential and keeping only the first two terms in the Lagrangian \(L_{12}\). However, this comes at the price of losing manifest Lorentz invariance. Moreover, for more complicated theories like non-Abelian Yang-Mills theories or general relativity, it becomes very difficult (if at all possible globally) to express the action or equations of motion in terms of gauge invariant quantities.
There are also actions in terms of the electric and magnetic field (or the field strength), rather than the potentials, but these also involve auxiliary fields acting as Lagrange multipliers (Vollick 2017). Consider for example the Lagrangian:
\[L_{13}=-\int d^{3}x\left(\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+A_{\nu}\partial_{\mu}F^ {\mu\nu}\right), \tag{30}\]
where \(A_{\mu}\) and \(F_{\mu\nu}\) are treated as independent fields (Infeld and Plebanski 1954, Schwinger 1951). Clearly, the field \(A_{\mu}\) acts as a Lagrange multiplier, implying
\[\partial_{\mu}F^{\mu\nu}=0. \tag{31}\]
Variation with respect to \(F_{\mu\nu}\) leads to the familiar relation6
Footnote 6: The equation of motion (32) yields \(F_{\mu\nu}\) in terms of \(A_{\mu}\). This is an example of what is sometimes called an _auxiliary variable_(Pons 2010). (This technical notion of “auxiliary variable” should be contrasted with the colloquial notion we have been using in the rest of the paper.) That is a variable whose variation in the action leads to an equation of motion that allows to solve that variable in terms of the other fields. Such variables can be introduced for simplification. They can also be eliminated without changing the dynamics of the other fields. On the level of the action, this can be done by simply substituting the expression for that variable (in terms of the other fields) into the action. In the present case, such an elimination results in the Lagrangian \(L_{11}\). A further reduction is possible since also \(\phi\) is an auxiliary variable (in the technical sense), cf. (29). Its elimination yields the Lagrangian for just the transverse potential.
\[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}. \tag{32}\]
So the latter is not assumed, as in eq. (23), but is instead derived as one of the equations of motion. A similar situation arises in general relativity. The Einstein equations can be derived from the Einstein-Hilbert action by varying with respect to the metric \(g_{\mu\nu}\) (whose components also contain gauge degrees of freedom in the sense explained above). An alternative way is via the Palatini method which treats the connection and metric as independent fields (Ferraris, Francaviglia, and Reina 1982, Misner, Thorne, and Wheeler 2017, Wald 1984).
## 3 Hamiltonian formulation
The forms of the Lagrangians \(L_{1}\)-\(L_{6}\) allow for the possibility of a Hamiltonian formulation with the usual methods. Consider for example \(L_{3}\) and take \(f_{i}=f_{i}(q,\dot{q},t)\), so that the Lagrangian contains time derivatives of \(q\) only up to order one. Then the conjugate momenta are
\[p_{k}=\frac{\partial L}{\partial\dot{q}_{k}}=\lambda\sum_{i}f_{i}\frac{ \partial f_{i}}{\partial\dot{q}_{k}}, \tag{33}\]
\[\pi=\frac{\partial L}{\partial\dot{\lambda}}=0. \tag{34}\]
Because of the latter equation, these relations are not invertible to yield the velocities in terms of the phase-space variables. This means that we have to resort to the theory
of constrained dynamics (Dirac 1964, Gitman and Tyutin 1990, Hanson, Regge, and Teitelboim 1976, Henneaux and Teitelboim 1992, Sundermeyer 1982). An immediate primary constraint is
\[\pi=0. \tag{35}\]
For simplicity, we assume that there are no further primary constraints, so that the \(\dot{q}_{k}\) can be expressed in terms of the phase-space variables, i.e., there are functions \(v_{k}(q,p,\lambda,t)\) such that the relations (33) can be inverted to yield
\[\dot{q}_{k}=v_{k}(q,p,\lambda,t). \tag{36}\]
The canonical Hamiltonian is
\[H_{c} =\sum_{k}p_{k}\dot{q}_{k}+\pi\dot{\lambda}-L_{3}\] \[=\sum_{k}p_{k}v_{k}-\frac{1}{2}\lambda\sum_{i}f_{i}^{2}(q,v,t). \tag{37}\]
The total Hamiltonian is
\[H_{T}=H_{c}+u\pi, \tag{38}\]
where \(u\) is an arbitrary function of the phase-space variables. The corresponding Hamilton's equations, together with the constraint (35), give the equations of motion in phase-space. With the constraint taken into account, Hamilton's equations are
\[\dot{q}_{k} =v_{k}, \tag{39}\] \[\dot{p}_{k} =\lambda\sum_{i}f_{i}\frac{\partial f_{i}}{\partial q_{k}},\] (40) \[\dot{\lambda} =u,\] (41) \[\dot{\pi} =\frac{1}{2}\sum_{i}f_{i}^{2} \tag{42}\]
(where the definitions of the momenta were used to simplify the expressions). Since \(u\) was an arbitrary function, we have again (as of course we should) that the evolution of \(\lambda\) is arbitrary. By the last equation, the constraint \(\pi=0\) further implies that \(f_{i}(q,v(q,p,\lambda,t),t)=0\). These are the secondary constraints. Considering the definitions (33) of the momenta \(p_{k}\), it must be the case that for a solution of the equations of motion, they are zero, i.e., \(p_{k}=0\). So these constraints \(f_{i}=0\) must amount to \(p_{k}=0\). Using these relations, the equations of motion can be simplified to
\[\dot{q}_{k} =v_{k}(q,0,\lambda,t), \tag{43}\] \[\dot{p}_{k} =0,\] (44) \[\dot{\lambda} =u,\] (45) \[\dot{\pi} =0. \tag{46}\]
In the first relation (43), there is a dependence on \(\lambda\). However, because of (33), the \(v_{k}\) depend on \(\lambda\) only through \(p_{k}/\lambda\) and hence with \(p_{k}=0\), this implies that there is no \(\lambda\)-dependency of \(v_{k}\). So equation (43) amounts to the equations \(f_{i}(q,\dot{q},t)\) expressed in the form \(\dot{q}_{k}=g_{k}(q,t)\) for certain functions \(g_{k}\).
Actually, given equations of motion of the form \(\dot{q}_{k}=g_{k}(q,t)\), the Hamiltonian formulation can be done with the Hamiltonian
\[H=\sum_{k}p_{k}g_{k}(q,t), \tag{47}\]
together with the constraints \(p_{k}=0\). The equations of motion then immediately reduce to \(\dot{q}_{k}=g_{k}(q,t)\) and \(p_{k}=0\).
The Hamiltonian is a constant of the motion, but in this case it is a trivial one as it vanishes along a solution.
In the next section, we will provide an example of this Hamiltonian formulation for Bohmian mechanics.
## 4 Application to Bohmian mechanics
Bohmian mechanics concerns the motion of point-particles whose velocity depends on the wave function (Bohm and Hiley 1993, Durr, Goldstein, and Zanghi 2012, Durr and Teufel 2009, Holland 1993). The wave function satisfies the usual Schrodinger equation, whereas the particles satisfy the so-called guidance equations. In the case of a single particle (which we consider here for mere notational simplicity), denoting the position of the particle at time \(t\) by \({\bf X}(t)\) and the wave function by \(\psi({\bf x},t)\), the dynamics is
\[\dot{\bf X}(t)={\bf v}^{\psi}({\bf X}(t),t), \tag{48}\]
where
\[{\bf v}^{\psi}({\bf x},t)=\frac{1}{m}{\rm Im}\left(\frac{\mathbf{ \nabla}\psi({\bf x},t)}{\psi({\bf x},t)}\right), \tag{49}\]
\[{\rm i}\frac{\partial\psi({\bf x},t)}{\partial t}=-\frac{1}{2m}\nabla^{2}\psi ({\bf x},t)+V({\bf x})\psi({\bf x},t). \tag{50}\]
The Schrodinger equation can be derived from the Lagrangian \(L_{S}=\int d^{3}x{\cal L}_{S}\), with \({\cal L}_{S}\) the Lagrangian density given by
\[{\cal L}_{S}=\frac{1}{2}\psi^{*}\left({\rm i}\frac{\partial\psi}{\partial t} +\frac{1}{2m}\nabla^{2}\psi-V\psi\right)+{\rm c.c.} \tag{51}\]
While the Euler-Lagrange equations can be found by formally treating \(\psi\) and \(\psi^{*}\) as independent fields, this Lagrangian is better viewed as a function of the real and imaginary part of \(\psi\), which _are_ independent (Brown and Holland 2004b).
The Bohmian dynamics cannot be derived from a Lagrangian that depends only on \({\bf X}\) and \(\psi\). So a Lagrangian requires the introduction of auxiliary variables. There have
been attempts by Squires (1994) and Holland (2001, 2006, 2020) to write down such a Lagrangian. However, these proposals do not exactly recover the Bohmian dynamics, but rather some generalized dynamics, for which the Bohmian trajectories are only a subset of the possible allowed trajectories.
Squires (1994) considers the Lagrangian density7
Footnote 7: Squires considers a further addition to this Lagrangian given by a constant \(k\) times the standard non-relativistic particle Lagrangian. Only the special case \(k=0\) is presented here.
\[{\cal L}_{\rm Sq}={\cal L}_{S}+\mathbf{\lambda}\cdot\left(\dot{\bf X}- {\bf v}^{\psi}\right)\delta({\bf x}-{\bf X}). \tag{52}\]
Variation of the action with respect to \(\lambda\) yields the guidance equation. Variation with respect to \(\psi\) and \({\bf X}\) respectively gives
\[{\rm i}\frac{\partial\psi}{\partial t}=-\frac{1}{2m}\nabla^{2}\psi+V\psi- \frac{{\rm i}}{2m\psi^{*}}\mathbf{\lambda}\cdot\mathbf{\nabla }\delta({\bf x}-{\bf X}), \tag{53}\]
\[\frac{d\mathbf{\lambda}}{dt}+\mathbf{\nabla}\left[\mathbf{\lambda}\cdot{\bf v}^{\psi}({\bf x})\right]\big{|}_{{\bf x}={\bf X }}={\bf 0}. \tag{54}\]
So while the guidance equation is obtained, the Schrodinger equation gets an extraneous \(\lambda\)-dependent contribution. Only in the case \(\mathbf{\lambda}={\bf 0}\) the Bohmian dynamics is recovered. For \(\mathbf{\lambda}\neq{\bf 0}\) the Schrodinger equation is not satisfied.
Holland (2020) considers8
Footnote 8: Holland considers a similar Lagrangian in (Holland 2001, Holland 2006) using a different parameterization of the wave function.
\[{\cal L}_{\rm Ho}=\frac{1}{2}u^{*}\left({\rm i}\frac{\partial\psi}{\partial t }+\frac{1}{2m}\nabla^{2}\psi-V\psi\right)+{\rm c.c.}+\left(\frac{1}{2}m\dot{ \bf X}\cdot\dot{\bf X}-V-Q^{\psi}\right)\delta({\bf x}-{\bf X}), \tag{55}\]
where
\[Q^{\psi}({\bf x},t)=-\frac{1}{2m}\frac{\nabla^{2}|\psi({\bf x},t)|}{|\psi({ \bf x},t)|} \tag{56}\]
is the quantum potential. In this case the complex field \(u\) is introduced as the Lagrange multiplier. As a result, variation with respect to \(u\) gives the Schrodinger equation, while variation with respect to respectively \({\bf X}\) and \(\psi\) yields
\[m\ddot{\bf X}=-\mathbf{\nabla}(V({\bf x})+Q^{\psi}({\bf x},t))\big{|} _{{\bf x}={\bf X}}, \tag{57}\]
\[{\rm i}\frac{\partial u}{\partial t}=-\frac{1}{2m}\nabla^{2}u+Vu+2\frac{ \partial Q^{\psi}}{\partial\psi^{*}}\bigg{|}_{{\bf x}={\bf X}}. \tag{58}\]
While this action yields the desired Schrodinger equation, it does not yield the guidance equation. The Newtonian-like equation (57) follows from the Bohmian dynamics by taking the time derivative of the guidance equation (using also the Schrodinger equation). But (57) also allows for non-Bohmian solutions, where the guidance equation does not hold, i.e., where the velocity is different from \({\bf v}^{\psi}\). Holland considers the guidance equation as an extra constraint on the dynamics (which can be imposed at an initial
time). This is necessary for empirical adequacy (Colin and Valentini 2014, Goldstein and Struyve 2015).
As explained in the introduction, Lagrange multipliers can be introduced to enforce both the guidance equation and the Schrodinger equation. But as can readily be checked, the following Lagrangian density already yields the Bohmian dynamics
\[\mathcal{L}_{\mathrm{B}}=\mathcal{L}_{S}+\frac{\lambda}{2}\left(\dot{\mathbf{X }}-\mathbf{v}^{\psi}\right)\cdot\left(\dot{\mathbf{X}}-\mathbf{v}^{\psi} \right)\delta(\mathbf{x}-\mathbf{X}). \tag{59}\]
So there is no need to introduce Lagrange multipliers to enforce the Schrodinger equation. The variable \(\lambda\) evolves freely and is considered a gauge variable in the context of the theory of constrained dynamics.
Before turning to the Hamiltonian formulation, let us have a look at possible Noether currents. The action \(S_{\mathrm{B}}=\int dtd^{3}x\mathcal{L}_{\mathrm{B}}\) is invariant under Galilean transformations (with the usual transformations for \(\mathbf{X}\) and \(\psi\) (Durr and Teufel 2009). However, the Noether currents corresponding to space and time translations, rotations and boosts, are just the usual currents associated to the Schrodinger equation, because the \(\lambda\)-dependent part of these currents is proportional to \(\dot{\mathbf{X}}-\mathbf{v}^{\psi}\) and hence vanishes for a solution.
## 5 Hamiltonian formulation of Bohmian mechanics
The Hamiltonian formulation outlined in section 3 can now be applied to the Bohmian Lagrangian (59), with only an extra complication arising from the treatment of the Schrodinger part, which can however be found elsewhere, e.g. (Gergely 2002).
Writing \(\psi=\psi_{1}+\mathrm{i}\psi_{2}\), with \(\psi_{1}\) and \(\psi_{2}\) real, the conjugate momenta are
\[\pi_{1}=\frac{\delta L_{\mathrm{B}}}{\delta\dot{\psi}_{1}}=\psi_{2},\quad\pi_ {2}=\frac{\delta L_{\mathrm{B}}}{\delta\dot{\psi}_{2}}=-\psi_{1}, \tag{60}\]
\[P_{i}=\frac{\partial L_{\mathrm{B}}}{\partial\dot{X}_{i}}=\lambda\left[\dot{X }_{i}-v_{i}^{\psi}(\mathbf{X})\right],\quad\pi=\frac{\partial L_{\mathrm{B}} }{\partial\dot{\lambda}}=0. \tag{61}\]
As in section 3, we are dealing with a constrained dynamics because these relations are not invertible to yield the velocities in terms of the phase-space variables. The non-invertible relations yield the primary constraints
\[\pi_{1}-\psi_{2}=0,\qquad\pi_{2}+\psi_{1}=0,\qquad\pi=0. \tag{62}\]
The canonical Hamiltonian is
\[H_{c} =\int d^{3}x\left(\pi_{1}\dot{\psi}_{1}+\pi_{2}\dot{\psi}_{2} \right)+\mathbf{P}\cdot\dot{\mathbf{X}}+\pi\dot{\lambda}-L_{\mathrm{B}}\] \[=\int d^{3}x\psi^{*}\widehat{H}\psi+\mathbf{P}\cdot\mathbf{v}^{ \psi}(\mathbf{X})+\frac{1}{2\lambda}\mathbf{P}\cdot\mathbf{P}, \tag{63}\]
where
\[\widehat{H}=-\frac{1}{2m}\nabla^{2}+V. \tag{64}\]
The total Hamiltonian is
\[H_{T}=H_{c}+\int d^{3}x\left[u_{1}(\pi_{1}-\psi_{2})+u_{2}(\pi_{2}+\psi_{1}) \right]+w\pi, \tag{65}\]
where \(u_{1}({\bf x})\), \(u_{2}({\bf x})\) and \(w\) are arbitrary functions of the phase-space variables. This Hamiltonian determines the equations of motion through the usual Hamilton's equations, together with the constraints (62). It can be simplified by deriving the secondary constraints which follow from the fact that the primary constraints need to be preserved in time. Preservation of \(\pi=0\) leads to the constraint \({\bf P}={\bf 0}\). Using the latter, preservation of the other constraints leads to the constraints
\[u_{1}=\widehat{H}\psi_{2},\qquad u_{2}=-\widehat{H}\psi_{1}. \tag{66}\]
So the arbitrary functions \(u_{1}\) and \(u_{2}\) get determined by the equations of motion and can be substituted in \(H_{T}\) to yield9
Footnote 9: The first part of the Hamiltonian is that of the wave function and could also be written as \(\int d^{3}x\psi^{*}\widehat{H}\psi\) provided the Dirac bracket is used for the Hamilton’s equations rather than the Poisson bracket.
\[H_{1}=\int d^{3}x\left[\pi_{1}\widehat{H}\psi_{2}-\pi_{2}\widehat{H}\psi_{1} \right]+{\bf P}\cdot{\bf v}^{\psi}({\bf X})+\frac{1}{2\lambda}{\bf P}\cdot{ \bf P}+w\pi. \tag{67}\]
The corresponding Hamilton's equations are now (using the constraint \({\bf P}={\bf 0}\)),
\[\dot{\psi}=-{\rm i}\widehat{H}\psi,\qquad\dot{\pi}_{1}=\widehat{H}\pi_{2}, \qquad\dot{\pi}_{2}=-\widehat{H}\pi_{1}, \tag{68}\]
\[\dot{\bf X}={\bf v}^{\psi}({\bf X}),\qquad\dot{\bf P}={\bf 0}, \tag{69}\]
\[\dot{\lambda}=w,\qquad\dot{\pi}=0. \tag{70}\]
So these are the equations of motion of Bohmian mechanics together with equations for \(\lambda\) and the conjugate momenta. Since \(w\) is an arbitrary function, the evolution of \(\lambda\) is again arbitrary.
The variables \(\lambda\) and \(\pi\) do not enter the equations of motion for the other canonical variables. Their presence merely implies the constraint \({\bf P}={\bf 0}\). Keeping the latter constraint, the variables \(\lambda\) and \(\pi\) can be removed by considering the Hamiltonian
\[H_{2}=\int d^{3}x\left[\pi_{1}\widehat{H}\psi_{2}-\pi_{2}\widehat{H}\psi_{1} \right]+{\bf P}\cdot{\bf v}^{\psi}({\bf X}). \tag{71}\]
The corresponding Hamilton's equation are again (68) and (69), taking into account \({\bf P}={\bf 0}\). This is the Hamiltonian formulation of Bohmian mechanics proposed by Vollick (2019).
Note that, in line with what we said earlier about the Noether currents for this case, the conserved energy (i.e., Hamiltonian) is just that of the wave function, since the particle-dependent part in \(H_{2}\) vanishes along a solution.
Conclusion
Laws of motion given by differential equations can always be derived from an action, at least if auxiliary variables are allowed. Moreover, these extra variables can be introduced in such a way that they would be regarded as gauge variables according to the usual approach to gauge theories. So these actions are similar in that respect to theories like Yang-Mills theories or general relativity. However, while it is easy to introduce auxiliary variables as gauge variables, it tends to be very hard to eliminate the gauge for theories like Yang-Mills theories and general relativity on the level of the action as well as the dynamics.
## 7 Acknowledgments
This work is supported by the Research Foundation Flanders (Fonds Wetenschappelijk Onderzoek, FWO), Grant No. G0C3322N. It is a pleasure to thank Thibaut Demaerel, Christian Maes, Sylvia Wenmackers, and two anonymous referees, for useful comments and discussions.
|
2303.04676 | Considerations on the Theory of Training Models with Differential
Privacy | In federated learning collaborative learning takes place by a set of clients
who each want to remain in control of how their local training data is used, in
particular, how can each client's local training data remain private?
Differential privacy is one method to limit privacy leakage. We provide a
general overview of its framework and provable properties, adopt the more
recent hypothesis based definition called Gaussian DP or $f$-DP, and discuss
Differentially Private Stochastic Gradient Descent (DP-SGD). We stay at a meta
level and attempt intuitive explanations and insights \textit{in this book
chapter}. | Marten van Dijk, Phuong Ha Nguyen | 2023-03-08T15:56:27Z | http://arxiv.org/abs/2303.04676v2 | # Considerations on the Theory of Training Models with Differential Privacy
###### Abstract
In federated learning collaborative learning takes place by a set of clients who each want to remain in control of how their local training data is used, in particular, how can each client's local training data remain private? Differential privacy is one method to limit privacy leakage. We provide a general overview of its framework and provable properties, adopt the more recent hypothesis based definition called Gaussian DP or \(f\)-DP, and discuss Differentially Private Stochastic Gradient Descent (DP-SGD). We stay at a meta level and attempt intuitive explanations and insights _in this book chapter_.
Stochastic Gradient Descent (SGD) \(\cdot\) DP-SGD \(\cdot\) Differential Privacy (DP) \(\cdot\) Gaussian DP
## 1 Introduction
Privacy leakage is a big problem in the big-data era. Solving a learning task based on big data intrinsically means that only through a collaborative effort sufficient data is available for training a global model with sufficient clean accuracy (utility). Federated learning is a framework where a learning task is solved by a loose federation of participating devices/clients which are coordinated by a central server [42, 8, 3, 33, 40, 9, 58, 29, 60, 10, 34, 36, 37, 39, 12, 30]. Clients, who use own local data to participate in a learning task by training a global model, want to have privacy guarantees for their local proprietary data. For this reason DP-SGD [1] was introduced as it adapts distributed Stochastic Gradient Descent (SGD)[55] with Differential Privacy (DP)[19, 15, 21, 18].
The optimization problem for training many Machine Learning (ML) models using a training set \(\{\xi_{i}\}_{i=1}^{m}\) of \(m\) samples can be formulated as a finite-sum minimization problem as follows
\[\min_{w\in\mathbb{R}^{d}}\left\{F(w)=\frac{1}{m}\sum_{i=1}^{m}f(w;\xi_{i}) \right\}. \tag{1}\]
The objective is to minimize a loss function with respect to model parameters \(w\). This problem is known as empirical risk minimization and it covers a wide range of convex and non-convex problems from the ML domain, including, but not limited to, logistic regression, multi-kernel learning, conditional random fields and neural networks.
We want to solve (1) in a distributed setting where many clients have their own local data sets and the finite-sum minimization problem is over the collection of all local data sets. A widely accepted approach is to repeatedly use the SGD [50, 46, 47] recursion
\[w_{t+1}=w_{t}-\eta_{t}\nabla f(w_{t};\xi), \tag{2}\]
where \(w_{t}\) represents the model after the \(t\)-th iteration; \(w_{t}\) is used in computing the gradient of \(f(w_{t};\xi)\), where \(\xi\) is a data sample randomly selected from the data set \(\{\xi_{i}\}_{i=1}^{m}\) which comprises the union of all local data sets.
This approach allows each client to perform local SGD recursions for the data samples \(\xi\) that belong to the client's local training data set. The updates as a result of the SGD recursion (2) are sent to a centralized server who aggregates all received updates and maintains a global model. The server regularly broadcasts its most recent global model so that clients can use it in their local SGD computations. This allows each client to use what has been learned from the local data sets at the other clients. This leads to good accuracy of the final global model.
Each client is doing SGD recursions for a batch of local data. These recursions together represent a local round and at the end of the local round a local model update (in the form of an aggregate of computed gradients during the round) is transmitted to the server. The server in turn adds the received local update to its global model - and once the server receives new updates from (a significant portion of) all clients, the global model is broadcast to each of the clients. When considering privacy, we are concerned about how much information these local updates reveal about the used local data sets. Each client wants to keep its local data set as private as possible with respect to the outside world which observes round communication (the outside world includes all other clients as well).
Rather than reducing the amount of round communication such that less sensitive information is leaked, differential privacy [19, 15, 21, 18] offers a solution in which each client-to-server communication is obfuscated by noise. If the magnitude of the added noise is not too much, then a good accuracy of the global model can still be achieved albeit at the price of more overall SGD iterations needed for convergence. On the other hand, only if the magnitude of the added noise is large enough, then good differential privacy guarantees can be given. This leads to a friction between desired differential privacy and desired utility/accuracy.
Section 2 starts discussing DP-SGD [1], which implements differentially private mini-batch SGD. Section 3 explains differential privacy with various (divergence based) measures and properties. Section 4 continues detailing the state-of-the-art hypothesis testing based differential privacy, called \(f\)-DP [13], applied to DP-SGD. We conclude with open questions in Section 5.
## 2 Differential Private SGD (DP-SGD)
We analyse the Gaussian based differential privacy method, called DP-SGD, of [1] in a distributed setting with many clients and a central aggregating server. A slightly generalized description of DP-SGD is depicted in Algorithm 1. The main goal of DP-SGD is to hide whether the collection of transmitted round updates \(\bar{U}\) corresponds to data set \(d\) versus a neighboring data set \(d^{\prime}\); sets \(d\) and \(d^{\prime}\) are called neighbors if they differ in exactly one element. In order to accomplish this, DP-SGD introduces noise, which we will see comes in two flavors _clipping noise_ and _Gaussian noise_.
### Clipping
Rather than using the gradient \(a_{h}=\nabla f(w,\xi_{h})\) itself, DP-SGD uses its clipped version \([\nabla f(w,\xi_{h})]_{C}\) where
\[[x]_{C}=x/\max\{1,\|x\|/C\}.\]
We call this the _individual clipping_ approach since each computed gradient is individually clipped. Clipping is needed because in general we cannot assume a bound \(C\) on the gradients (for example, the bounded gradient assumption is in conflict with strong convexity [46]), yet the added gradients in update \(U\) need to be bounded by some constant \(C\) in order for the DP analysis of [1] to go through. The reason is that clipping introduces a bound on how much \(U=\sum_{h=1}^{m}[a_{h}]_{C}\) gets affected if the differentiating sample between \(d\) and \(d^{\prime}\) is used in its computation. Clipping forces a small distance between an update \(U\) that does not use the differentiating sample and an update \(U^{\prime}\) that computes the same gradients as \(U\) except for one of its gradient computations which uses the differentiating sample. This means that if Gaussian noise is added to \(U\) and \(U^{\prime}\), respectively, then the smaller the distance between \(U\) and \(U^{\prime}\), the harder it is to figure out whether the actually observed noised update originates from \(d\) or \(d^{\prime}\). This leads to a differential privacy guarantee.
Suppose that \(a_{h}\) influences another gradient computation, e.g., \(a_{h+1}\). Then, if the differentiating sample is used in the computation of \(a_{h}\), this affects not only \(a_{h}\) but also \(a_{h+1}\). Even though both \(a_{h}\) and \(a_{h+1}\) will be clipped, this increases the distance between \(U\) and \(U^{\prime}\), hence, this weakens the differential privacy. For this reason, the different gradient computations \(a_{h}\) in \(U\) should be independent of one another. In particular, we do not want to implement classical SGD where the computation of \(a_{h}\) updates the local model \(w\) which is used in the next gradient computation \(a_{h+1}\). This is the reason for implementing mini-batch SGD where each gradient \(a_{h}\) is computed for the same \(w\).
Clipping introduces clipping noise defined as the difference between the clipped gradient \([a_{h}]_{C}\) and the original gradient \(a_{h}\). This affects the rate of convergence. Notice that in general, once convergence sets in, individual gradients tend to get closer to zero. This means that their norms get smaller \(\leq C\). This shows that the clipping noise in updates \(U\) becomes very small close to zero.
```
1:procedure DP-SGD
2:\(N=\) size training data set \(d=\{\xi_{i}\}_{i=1}^{N}\)
3:\(E=\) total number of epochs
4: diminishing step size sequence \(\{\eta_{i}\}\)
5:
6: initialize \(w\) as the default initial model
7:Interrupt Service Routine (ISR): Whenever a new global model \(\hat{w}\) is received, computation is interrupted and an ISR is called that replaces \(w\leftarrow\hat{w}\) after which computation is resumed
8:for\(e\in\{1,\ldots,E\}\)do
9:\(\{S_{b}\}_{b=1}^{N/m}\leftarrow\texttt{Sample}_{m}\) with \(S_{b}\subseteq\{1,\ldots,N\}\), \(|S_{b}|=m\)
10:for\(b\in\{1,\ldots,\frac{N}{m}\}\)do
11: Start of round \((e-1)\frac{N}{m}+b\):
12:for\(h\in S_{b}\)do
13:\(a_{h}=\nabla_{w}f(w;\xi_{h})\)
14:endfor
15:\(U=\sum_{h=1}^{m}[a_{h}]_{C}\)
16:\(\bar{U}\gets U+\mathcal{N}(0,(2C\sigma)^{2}\mathbf{I})\)
17: Transmit \(\bar{U}/m\) to central server
18: Locally update \(w\gets w-\eta_{(e-1)\frac{N}{m}+b}\cdot\bar{U}/m\)
19:endfor
20:endfor
21:endprocedure
```
**Algorithm 1** Differential Private SGD
### Mini-Batch SGD
DP-SGD is constraint to a mini-batch SGD approach where before the start of the \(b\)-th local round in epoch \(e\) a random min-batch \(S_{b}\) of sample size \(|S_{b}|=m\) is selected out of a local data set \(d\) of size \(|d|=N\): In the description of Algorithm 1 the sampling is done by a sampling procedure \(\texttt{Sample}_{m}\) before the start of epoch \(e\) for all rounds together. DP-SGD implements _subsampling_ which chooses a uniformly random subset \(S_{b}\subseteq d\) of size \(m\).
The inner loop computes \(m\) gradients \(a_{h}=\nabla_{w}f(w;\xi_{h})\). Since there are \(N/m\) rounds within an epoch, each epoch has (indeed) a total gradient complexity of \(N=|d|\). We notice that each gradient is computed based on \(w\) which is the last received global model from the server through the interrupt service routine. In the original DP-SGD, a client waits at the start of a round till it receives the global model which includes the aggregated updates of all previous rounds from all clients. The formulation in Algorithm 1 allows for asynchronous behavior, including dropped (or reordering of) messages from the server which can lead to a client missing out on receiving global model versions. More importantly, the server may decide to broadcast global models at a lower rate than the rate(s) at which clients compute and communicate their noised round updates. This allows clients with different compute speeds/resources. Also, the rate at which round updates are computed is not restricted by the throughput of broadcast messages from the server to clients (of course, it remains restricted by the network throughput from the clients through aggregation nodes to the server). This implies that parameter \(m\) can potentially be chosen from the whole range \(\{1,\ldots,N\}\) including very small \(m\) leading to many round updates per epoch or large \(m\) leading to only a couple round updates per epoch. We will later discuss the effect of \(m\) on convergence and accuracy and DP guarantee.
We notice that too much asynchronous behavior will hurt convergence of the mini-batch SGD approach and may lead to worse accuracy of the final global model. For this reason, before starting a round, a client can check into what extent the recently received global model deviates from the locally kept model. If this gets too far apart or if the last received global model happened too many rounds ago, then the client will want to wait till a new global model is received and the interrupt service routine is triggered. This implements the necessary synchronous behavior with respect to convergence and accuracy.
### Gaussian Noise
The clipped gradients \([a_{h}]_{C}\) are summed together in round update \(U\). At the end of each local round the round update \(U\) is obfuscated by adding Gaussian noise
\[\mathcal{N}(0,(2C\sigma)^{2})\]
to each of \(U\)'s vector entries. The resulting noised round update \(\bar{U}\) divided by the mini-batch size \(m\) is transmitted to the server.
For neighboring data sets \(d\) and \(d^{\prime}\), we have that the _sensitivity_ measured as the Euclidean distance between \(U\) based on \(d\) and \(U^{\prime}\) based on \(d^{\prime}\) (see also Section 2.1) is at most \(2C\). An adversary trying to distinguish whether the observed update is from \(d\) or \(d^{\prime}\) needs to figure out whether the observation is from
\[U+\mathcal{N}(0,(2C\sigma)^{2}\mathbf{I})\ \ \text{or}\ \ \ U^{\prime}+\mathcal{N}(0,(2C\sigma)^{2} \mathbf{I}).\]
Since \(\|U-U^{\prime}\|\leq 2C\), this is at best (for the adversary) equivalent to hypothesis testing between \(\mathcal{N}(0,(2C\sigma)^{2})\) and \(\mathcal{N}(2C,(2C\sigma)^{2})\). After dividing by \(2C\), this is equivalent to hypothesis testing between \(\mathcal{N}(0,\sigma^{2})\) and \(\mathcal{N}(1,\sigma^{2})\). We see that any differential privacy guarantee for the round update is characterized by \(\sigma\).
The attentive reader may notice that the original DP-SGD adds \(\mathcal{N}(0,(C\sigma)^{2}\mathbf{I})\), a factor 2 less. This is because its DP analysis and proof assume a slightly different subsampling method. In the original DP-SGD we have that each round selects a random mini-batch of _exactly_\(m\) samples; this leads to the factor \(2\) since \(U\) and \(U^{\prime}\) will differ in one gradient, hence, \(U-U^{\prime}\) cancels all gradients except for one in \(U\) and one in \(U^{\prime}\), both contributing at most \(C\) to the norm \(\|U-U^{\prime}\|\), hence, the factor 2.
However, the software package Opacus [48] implements the sampling of DP-SGD differently: Mini-batches do not have a fixed size, they have a probabilistic size. For each sample \(\xi\in d\), we flip a coin and with probability \(m/N\) we add \(\xi\) to the mini-batch. This means that the _expected_ mini-batch size is equal to \(m\). As a result, the DP analysis of [1] holds true and the factor \(2\) can be eliminated. The reason is that now (in the DP analysis) \(U^{\prime}\) has all the gradients of \(U\) together with one extra gradient based on the single differentiating sample between \(d\) and \(d^{\prime}\). This implies that all gradients in \(U-U^{\prime}\) cancel except for the one based on the differentiating sample, hence, \(\|U-U^{\prime}\|\leq C\).
In the above argument, we _assume_ that the adversary does not learn the actually used mini-batch size otherwise we will again need the factor \(2\) (see also Section 4.5). The observed scaled noised update \(\bar{U}/m\) scales in expectation with the expected norm of a single computed gradient times the used mini-batch size divided by the expected mini-batch size \(m\). This shows how \(\bar{U}/m\) depends on the used mini-batch size where, for large \(m\) and \(N\), it seems reasonable to assume that the adversary cannot gain significant knowledge about the used mini-batch size from \(\bar{U}/m\). We conclude that a probabilistic mini-batch size is a DP technique that offers a factor \(2\) gain. This chapter summarizes the \(f\)-DP framework explained for sampling with fixed mini-batch size leading to the extra factor 2 (the probabilistic approach can be added as a complimentary technique).
### Aggregation at the Server
The server maintains a global model, which we denote by \(\hat{w}\). The server adds to \(\hat{w}\) the received scaled noised round update \(\bar{U}/m\) after multiplying with the round step size* for round \(b\) of epoch \(e\),
Footnote *: The client transmits \((b,e,\bar{U})\) to the server and the server knows an a-priori agreed (with the client) round step size sequence. In practice, the client will only transmit a sparsification or lossy compression of \(\bar{U}\) where small entries are discarded.
\[\eta_{(e-1)\frac{N}{m}+b}\]
(the same as the local model update of \(w\) by the client). This allows a diminishing step size sequence. Notice that dividing by the mini-batch size \(m\) corresponds to \(U\) representing a mini-batch computation in mini-batch SGD.
Each client will select its own DP posture with own selected parameters \(m\), \(C\), \(\sigma\), and own data set \(d\) with its own size \(N\). It makes sense for the server to collect the noised round updates from various clients during consecutive time windows and broadcast updated global models at the end of each window. Rather than adding all the received \(\bar{U}\) within a time window to the global model \(\hat{w}\) (after multiplying with the appropriate client-specific step sizes and dividing by the appropriate client-specific mini-batch sizes), the server will add a mix of the various local updates. The mix is according to some weighing vector giving more weight to those clients whom the server judges having 'better' training data sets for the learning task at hand. In federated learning the server will ask for each time window a random subset of clients to participate in the training. In the above context it makes sense to have the step sizes be diminishing from time window to time window rather than have these be client specific.
### Interrupt Service Routine
The interrupt service routine will replace the locally kept model \(w\) by a received global model \(\hat{w}\). This may happen in the middle of a round. We notice that \(\hat{w}\) depends on previously transmitted noised round updates by the client and other clients. We will discuss how each of these previous noised round updates have a DP guarantee. By the so-called
post-processing lemma, these previously transmitted noised round updates can participate in the current computation of a round update \(U\) through its dependency on the global model \(\hat{w}\) (through the gradients in \(U\)) without weakening the DP guarantee for \(\bar{U}\) (which includes Gaussian noise on top of \(U\)).
Similarly, the client locally updates model \(w\) with \(\bar{U}\) at the end of a round. In next rounds this implies that \(w\) still only depends on previously transmitted noised round updates by the client and other clients, and again by the post-processing lemma the DP guarantees of future noised round updates do not diminish. As soon as a new global model \(\hat{w}\) is received by the interrupt service routine it will overwrite \(w\), that is, the current local model is discarded. This is justified because the newly received global model includes the client's own previously communicated noised updates \(\bar{U}\) (if the corresponding messages were not dropped and did not suffer too much latency), hence, the information of its own local updates is incorporated in the newly received \(\hat{w}\).
### DP Principles and Utility
The strength of the resulting DP guarantee depends on how much utility we are okay with sacrificing. The differential privacy guarantee is discussed in Section 4 and turns out to be approximately equivalent to differentiating between samples from \(\mathcal{N}(0,\sigma^{2})\) and \(\mathcal{N}(\sqrt{E},\sigma^{2})\) (we will also discuss group privacy where \(d\) and \(d^{\prime}\) differ in a group of \(g\) samples, which will have another \(\sqrt{g}\) dependency). This shows that \(\sigma\) should be large enough in order to make hypothesis testing between the two normal distributions unreliable. The reason why we have this DP guarantee is because the principle of using Gaussian noise _bootstraps_ DP for each round, the principle of subsampling in the form of random mini-batches of size \(m\)_amplifies_ DP (because only with probability \(m/N\) a round uses the differentiating sample and can leak privacy in the first place), and the principle of _composition_ of DP guarantees for each round over multiple epochs.
Utility is measured in terms of the (test) accuracy of the final global model and secondary metrics are convergence rate, round complexity \((N/m)\cdot E\) calculated as the total number of rounds per client (communication is costly), total gradient complexity \(E\cdot N\) calculated as the total number of computed gradients per client, information dispersal characterized by the delay or latency of what is learned from local data sets which is calculated as the number \(m\) of gradient computations between consecutive round communications to the server, and client's memory usage+.
Footnote †: The mini-batch computation \(U=\sum_{h=1}^{m}[a_{h}]_{C}\) needs to keep track of all \(m\) gradients \(a_{h}=\nabla_{w}f(w;\xi_{h})\), while the unclipped original mini-batch SGD can keep track of \(\sum_{h=1}^{m}a_{h}=\nabla_{w}\sum_{h=1}^{m}f(w;\xi_{h})\), a single gradient computing thread.
The final accuracy depends on the amount of clipping noise and Gaussian noise, and also depends on the amount of delay (information dispersal) introduced by \(m\): Once convergence sets in, we argued that clipping noise will be small and close to zero. However, each round update \(U\) has noise sampled from \(\mathcal{N}(0,(2C\sigma)^{2}\mathbf{I})\) added to itself. If this noise is small relative to the norm of \(U\), then we expect accuracy not to suffer too much. When convergence progresses, the gradients in \(U\) get closer to zero and therefore the norm of \(U\) gets smaller, which means that the Gaussian noise relative to the norm of \(U\) becomes larger. Since the DP guarantee depends on \(\sigma\) but not on \(C\), we will want to implement a form of (differential private) adaptive clipping where \(C\) is reduced when convergence progresses (we notice that the DP analysis of DP-SGD holds for clipping constants \(C\) that vary from round to round). This will allow us to contain the Gaussian noise relative to the norm of \(U\) when convergence sets in. Experimentation is needed to fine-tune the parameters \(m\), (adaptive) \(C\), and \(\sigma\). Despite fine tuning, we remark that the added clipping and Gaussian noise for differential privacy results in convergence to a final global model with smaller (test) accuracy (than what otherwise, without DP, can be achieved).
The \(\approx G_{\sqrt{gE}/\sigma}\)-DP guarantee for group privacy of Section 4 does not reflect the role of the batch size \(|S_{b}|=m\). This is implicitly captured in \(\sigma\). The following thought experiment shows how: Suppose we increase \(m\) to \(am\), a factor \(a\) larger. Then the norm of updates \(U\) will become a factor \(a\) larger. As a result, with respect to convergence to the final global model, we should be able to cope with a factor \(a\) larger Gaussian noise. That is, by keeping the relative amount of noise with respect to the norm of \(U\) constant, the new updates corresponding to batch size \(am\) can be noised with
\[a\cdot\mathcal{N}(0,(2C\sigma)^{2}\mathbf{I})=\mathcal{N}(0,(2C\sigma\cdot a)^ {2}\mathbf{I}).\]
In fact the communicated averaged noised round update \(\bar{U}/(am)\) has noise
\[a\cdot\mathcal{N}(0,(2C\sigma)^{2}\mathbf{I})/(am)=\mathcal{N}(0,(2C\sigma/m)^ {2}\mathbf{I}),\]
the same as the original communicated averaged noised round update (before the thought experiment). This shows that we can use the factor \(a\) for increasing (1) the clipping constant \(C\) (which reduces the clipping noise which is prevalent at the start of DP-SGD so that convergence can more easily start) and/or increasing (2) the standard deviation \(\sigma\) (which improves the \(G_{\sqrt{gE}/\sigma}\)-DP guarantee as it gets closer to \(G_{0}\) for larger \(\sigma\)); the resulting new clipping constant \(C^{\prime}\) and standard deviation \(\sigma^{\prime}\) satisfy \(2C^{\prime}\sigma^{\prime}=2C\sigma\cdot a\). The disadvantage of increasing the batch size with
a factor \(a\) is a multiplicative factor \(a\) increased delay, i.e., the number of gradient computations between successive round communications to the server is multiplied by a factor \(a\), and this reduces information dispersal and may hurt convergence of the global model. Here, we note that mini-batch SGD is rather robust with respect to large delays, but experiments need to show into what extent \(m\) can be increased without affecting the accuracy of the final global model too much.
Hyperparameter search depends on the used data set. Either we adopt a hyperparameter setting from another similar learning task, or we search for hyperparameters based on the client data sets. In practice, in order to find good parameters \(m\), \(C\), and \(\sigma\), we basically do a grid search by (1) fixing some standard settings (from similar learning tasks) for sample size \(m\), e.g., 16, 32, 64, 128 and 256 etc., (2) fixing some standard settings (from similar learning tasks) for clipping constant \(C\), e.g., 0.001, 0.01, 0.1, etc., and then (3) trying some reasonable settings for \(\sigma\) (based on the client data sets). If the grid search indeed uses client data sets, then we need to make sure that the additional privacy leakage due to the search is small. This is discussed in Appendix D of [1], see also [27].
### Normalization
In practice we will also want to use data normalization [53] as a pre-processing step. This requires computing the mean and variance over all data samples from \(d\). This makes normalized data samples depend on all samples in \(d\). For this reason we need differential private data normalization. That is, a differential private noisy mean and noisy variance is revealed a-priori. This leads to some privacy leakage. The advantage is that we can now re-write \(\mathcal{A}\) as an algorithm that takes as input \(w\), the original data samples \(\{\xi_{h}\}_{h\in S_{b}}\) together with the revealed noisy mean and noisy variance. \(\mathcal{A}\) first normalizes each data sample after which it starts to compute gradients etc. In the \(f\)-DP framework, privacy leakage is now characterized as a trade-off function of the differential private data normalization pre-processing composed with the trade-off function corresponding to the DP analysis of DP-SGD (which does not consider data normalization).
We notice that batch normalization is not compatible with the DP analysis of DP-SGD (since this introduces dependencies among the clipped gradients in \(U\) and the upper bound of \(2C\) on the sensitivity does not hold). On the other hand layer normalization as well as group and instance normalization are compatible (because these only concern single gradient computations).
As a final remark, our discussion assumes that we already know how to represent data samples by extracting features. We can use Principal Component Analysis (PCA) for dimensionality reduction, that is, learning a set of features which we want to use to represent data samples. PCA can be made differentially private [7] in that the resulting feature extraction method (feature transform) has a DP guarantee with respect to the data samples that were used for computing the transform. DP-SGD can be seen as a post-processing after PCA, which is used to represent the local training data samples for which DP-SGD achieves a DP guarantee. In practice, we often already know how to represent the data for our learning task and we already know which function \(f(w;\xi)\) to use, i.e., which neural network topology and loss function to use (due to the success of transfer learning we can adopt data representations and \(f\) from other learning tasks).
## 3 Differential Privacy
In order to prevent data leakage from inference attacks in machine learning [38] such as the deep leakage from gradients attack [63; 62; 23] or the membership inference attack [51; 45; 52] a range of privacy-preserving methods have been proposed. Privacy-preserving solutions for federated learning are Local Differential Privacy (LDP) solutions [1; 2; 44; 54; 28; 14] and Central Differential Privacy (CDP) solutions [44; 25; 41; 49; 61]. In LDP, the noise for achieving differential privacy is computed locally at each client and is added to the updates before sending to the server - in this chapter we only consider LDP. In CDP, a _trusted server_ (aka trusted third party) aggregates received client updates into a global model; in order to achieve differential privacy the server adds noise to the global model before communicating it to the clients.
Differential privacy [19; 15; 21; 18], see [16] for an excellent textbook, defines privacy guarantees for algorithms on databases, in our case a client's sequence of mini-batch gradient computations on his/her training data set. The guarantee quantifies into what extent the output of a client (the collection of updates communicated to the server) can be used to differentiate among two adjacent training data sets \(d\) and \(d^{\prime}\) (i.e., where one set has one extra element compared to the other set).
### Characteristics of a Differential Privacy Measure
In DP-SGD, the client wants to keep its local training data set as private as possible. Each noised round update \(\bar{U}\) leaks privacy. Let us define round mechanism \(\mathcal{M}_{b}\) as the round computation that outputs \(\bar{U}\) for round \(b\). The input of \(\mathcal{M}_{b}\) is data set \(d\) together with an updated local model \(w\). We have the following recursion
\[\bar{U}_{b}\leftarrow\mathcal{M}_{b}(w_{b};d),\]
where \(w_{b}\) is a function of received global model updates which themselves depend on other client's round updates in combination with own previously transmitted round updates \(\bar{U}_{1},\ldots,\bar{U}_{b-1}\). To express this dependency, we use the notation
\[w_{b}\leftarrow\mathsf{W}(\bar{U}_{1},\ldots,\bar{U}_{b-1}),\]
where \(\mathsf{W}\) receives the global models of the server (and in essence reflects the interrupt service routine). We define the overall mechanism \(\mathcal{M}\) as the composition of all round mechanisms \(\mathcal{M}_{b}\), i.e.,
\[\{\bar{U}_{b}\}\leftarrow\mathcal{M}(d)\text{ with }\bar{U}_{b}\leftarrow \mathcal{M}_{b}(\mathsf{W}(\bar{U}_{1},\ldots,\bar{U}_{b-1});d).\]
When defining a DP measure, we will want to be able to _compose_ the DP guarantees for the different round mechanisms \(\mathcal{M}_{b}\): If we can prove that \(\mathcal{M}_{b}(\mathsf{aux};\cdot)\) has a certain DP guarantee, denoted by \(\mathtt{DP}_{b}\), for _all_ aux, then the composition \(\mathcal{M}\) of all round mechanisms \(\mathcal{M}_{b}\) should have a composed DP guarantee
\[\mathtt{DP}_{1}\otimes\mathtt{DP}_{2}\otimes\ldots\otimes\mathtt{DP}_{(N/m) \cdot E}\]
for some composition tensor \(\otimes\) over DP measures.
Once a DP guarantee for mechanism \(\mathcal{M}\) is proven, we do not want it to weaken due to _post-processing_ of the output of \(\mathcal{M}\). In particular, the central server uses the output of \(\mathcal{M}\) for keeping track of and computing a final global model for the learning task at hand. This final model should still have the same (or stronger) differential privacy posture. Let us denote the post-processing by a procedure \(\mathtt{P}\). If \(\mathcal{M}\) has DP guarantee \(\mathtt{DP}\), then we want \(\mathtt{P}\circ\mathcal{M}\) to also have DP guarantee \(\mathtt{DP}\) (this is called the post-processing lemma),
\[[\mathtt{DP}\text{ for }\mathcal{M}]\;\Rightarrow\;[\mathtt{DP}\text{ for } \mathtt{P}\circ\mathcal{M}].\]
We want our DP measure to be compatible with _subsampling_: We want to be able to show that if a round mechanism \(\mathcal{M}_{b}\) has guarantee \(\mathtt{DP}\) without subsampling, then \(\mathcal{M}_{b}\circ\mathtt{Sample}_{m}\) has an 'easy' to characterize amplified guarantee \(\mathtt{DP}^{\prime}\), \(\mathtt{DP}^{\prime}\geq\mathtt{DP}\).'
Finally, we want a differential privacy measure which fits our intuition, in particular, how privacy should be characterized and in what circumstances an attacker can learn private information from observed mechanism outputs. Differential privacy measures are about the difficulty of distinguishing whether the observed output \(o\) is from the distribution \(\mathcal{M}(d)\) or from the distribution \(\mathcal{M}(d^{\prime})\), where \(d\) and \(\bar{d}^{\prime}\) are neighboring data sets in that they have all but one differentiating sample in common. The DP guarantee measures in to what extent
\[\mathtt{Pr}[o\sim\mathcal{M}(d)]\;\;\text{ and }\;\;\mathtt{Pr}[o\sim \mathcal{M}(d^{\prime})]\]
are alike for _all_ neighboring \(d\) and \(d^{\prime}\). Here, we want to reflect the intuition that for more likely observations \(o\) the two probabilities should be close together while for unlikely observations \(o\) we care less whether the two probabilities are close. This reflects how we think about the adversary: Only in rare unlikely cases, a lot or all privacy may leak, while in the common case there is very little privacy leakage. In cryptology we would want to interpret 'rare' as a negligible probability in some security parameter and in the common case we want the two probabilities/distributions to be'statistically close' with their distance negligible in some security parameter. Such strong guarantees cannot be extracted from DP analysis where we control privacy leakage in exchange for utility/accuracy; we cannot make privacy leakage negligible.
The DP measure is characterized in terms of probabilities and statistics. This is referred to as static security or information theoretical security and allows an adversary with unbounded computational resources in order to differentiate between the hypothesis \(o\sim\mathcal{M}(d)\) and hypothesis \(o\sim\mathcal{M}(d^{\prime})\). For completeness, in cryptology we also have the notion of computational security meaning that the difficulty of differentiating the two hypotheses can be reduced to solving a computational hard problem (and, since the brightest mathematicians and computer scientists have not been able to find an algorithm which solves this problem efficiently with practical computational resources, we believe that the attacker cannot solve this problem in feasible time). Computational security allows one to obtain security guarantees where the attackers advantage or success is negligible in some security parameter.
The above expresses individual privacy. We can generalize towards group privacy by considering data sets \(d\) and \(d^{\prime}\) that differ in at most \(g\) samples. In this case we say that a mechanism has a DP guarantee with respect to a group of \(g\) samples.
### \((\epsilon,\delta)\)-Differential Privacy
A randomized mechanism \(\mathcal{M}:D\to R\) is \((\epsilon,\delta)\)-DP (Differentially Private) [18] if for any adjacent \(d\) and \(d^{\prime}\) in \(D\) and for any subset \(S\subseteq R\) of outputs,
\[\mathtt{Pr}[\mathcal{M}(d)\in S]\leq e^{\epsilon}\cdot\mathtt{Pr}[\mathcal{M} (d^{\prime})\in S]+\delta, \tag{3}\]
where the probabilities are taken over the coin flips of mechanism \(\mathcal{M}\).
Historically, differential privacy was introduced [18] and first defined as \(\epsilon\)-DP [19] which is \((\epsilon,\delta)\)-DP with \(\delta=0\). In order to achieve \(\epsilon\)-DP even an unlikely set \(S\) of outputs needs to satisfy (3) for \(\delta=0\). This means that the tail distributions of \(\mathtt{Pr}[\mathcal{M}(d)\in S]\) and \(\mathtt{Pr}[\mathcal{M}(d^{\prime})\in S]\) cannot differ more than a factor \(e^{\epsilon}\). This is a much too strong DP requirement, since the probability to observe an output that corresponds to unlikely tail events is already very small to begin with. Therefore, \(\delta\) was introduced so that tail distributions with probability \(\leq\delta\) do not need to be close together within a factor \(e^{\epsilon}\). This allows one to achieve the more relaxed \((\epsilon,\delta)\)-DP guarantee where an \(\epsilon\)-DP guarantee cannot be proven.
The privacy loss incurred by observing an output \(o\) is given by
\[L^{o}_{\mathcal{M}(d)\|\mathcal{M}(d^{\prime})}=\ln\left(\frac{\mathtt{Pr}[ \mathcal{M}(d)=o]}{\mathtt{Pr}[\mathcal{M}(d^{\prime})=o]}\right). \tag{4}\]
As explained in [21] (\(\epsilon\), \(\delta\))-DP ensures that for all adjacent \(d\) and \(d^{\prime}\) the absolute value of privacy loss will be bounded by \(\epsilon\) with probability at least \(1-\delta\) (with probability at most \(\delta\), observation \(o\) is part of the tail); \((\epsilon,\delta)\)-DP allows a \(\delta\) probability of 'catastrophic privacy failure' and from a cryptographic perspective we want this negligibly small. However, when using differential privacy in machine learning we typically use \(\delta=1/N\) (or \(1/(10N)\)) inversely proportional with the data set size \(N\) (this seems to correspond well with the intuition when a local update should cause an unlikely/tail observation due to the nature of the specific batch of local data samples that was used in the computation of the local update). Concerning parameter \(\epsilon\), the larger \(\epsilon\) the more certain the adversary is about which of \(d\) or \(d^{\prime}\) caused observation \(o\).
Compared to \((\epsilon,0)\)-DP, the relaxation by \(\delta\) allows an improved and asymptotically tight analysis of the cumulative privacy loss incurred by composition of multiple differentially private mechanisms; [17] states an advanced composition theorem (a factor half improvement over [20]): For all \(\epsilon,\delta,\delta^{\prime}\geq 0\), the class of \((\epsilon,\delta^{\prime})\)-DP mechanisms satisfies
\[(\sqrt{2k\ln(1/\delta)}\cdot\epsilon+k\epsilon(e^{\epsilon}-1)/2,k\delta^{ \prime}+\delta)\text{-DP}\]
under \(k\)-fold adaptive composition. This means that only for \(k\leq(1-\delta)/\delta^{\prime}\) the privacy failure probability remains bounded to something smaller than 1.
For group privacy, the literature shows \((g\epsilon,ge^{g-1}\delta)\)-DP for groups of size \(g\). Here, we see an exponential dependency in \(g\) due to the \(ge^{g-1}\) term in the privacy failure probability. This means that only for very small \(\delta\), the failure probability remains bounded to something smaller than \(1\).
We conclude that \(k\)-fold composition and group privacy for group size \(g\) only lead to useful bounds for relatively small \(k\) and \(g\). If we restrict ourselves to a subclass of mechanisms, then we may be able to prove practical DP bounds for composition and group privacy for much larger and practical \(k\) and \(g\). We will define such subclasses by imposing properties on the privacy loss.
### Divergence Based DP Measures
In order to get better trade-offs for composition and group privacy we want to weigh the tail distribution of unlikely observations in such a way that more unlikely observations are allowed to leak even more privacy. So, rather than weighing all unlikely observations equally likely, which results in the privacy failure probability \(\delta\), we want to be more careful. This will allow improved DP bounds for composition and group privacy.
The first idea is to treat the loss function (4) as a random variable \(Z\) and note that in a \(k\)-fold composition we observe \(k\) drawings of random variable \(Z\). Due to the law of large numbers, the average of these drawings will be concentrated around the mean of the loss function. This leads to the notion of Concentrated Differential Privacy (CDP) first introduced in [17] by framing the loss function as a subgaussian random variable after subtracting its mean. This was re-interpreted and relaxed by using Renyi entropy in [4] and its authors followed up with the notion zero-CDP (zCDP) in [5]: A mechanism \(\mathcal{M}\) is \(\rho\)-zCDP if, for all \(\alpha>1\), the Renyi divergence
\[\mathtt{D}_{\alpha}(\mathcal{M}(d)\|\mathcal{M}(d^{\prime}))=\frac{\ln(\mathbb{ E}_{\alpha\sim\mathcal{M}(d)}[e^{(1-\alpha)Z}])}{1-\alpha}\text{ with }Z=L^{o}_{\mathcal{M}(d)\|\mathcal{M}(d^{\prime})}\]
satisfies
\[\mathtt{D}_{\alpha}(\mathcal{M}(d)\|\mathcal{M}(d^{\prime}))\leq\rho\alpha. \tag{5}\]
This DP guarantee requires the tail of \(Z\) to be subgaussian, i.e., \(\mathtt{Pr}[Z>t+\rho]<e^{-t^{2}/(4\rho)}\) for all \(t\geq 0\) (the tail behaves like \(Z\sim\mathcal{N}(\rho,2\rho)\)). If the loss function satisfies this property for a collection of \(k\) mechanisms (each of the mechanisms is \(\rho\)-zCDP), then their \(k\)-fold adaptive composition is \(k\rho\)-zCDP. If a mechanism is \(\rho\)-zCDP for individual privacy, then it is \(g^{2}\rho\)-zCDP for groups of size \(g\). This shows that if we can prove that our DP principles lead to a subgaussian tail of the loss function \(Z\), then we obtain interpretable DP guarantees even for large \(k\) and \(g\).
After the introduction of \(\rho\)-zCDP, Renyi DP (RDP) was introduced by [43]; \((\omega,\tau)\)-RDP requires
\[\mathtt{D}_{\alpha}(\mathcal{M}(d)\|\mathcal{M}(d^{\prime}))\leq\tau\text{ for all }\alpha\in(1,\omega).\]
Here, \(\alpha=1\) bounds the geometric mean of \(e^{Z}\), \(\alpha=2\) bounds the arithmetic mean of \(e^{Z}\), \(\alpha=3\) bounds the quadratic mean of \(e^{Z}\), etc., and \(\alpha=\infty\) bounds the maximum value of \(e^{Z}\) which is equivalent to \((\tau,0)\)-DP. RDP also leads to simple computable composition and group privacy. The advantage of zCDP over RDP is that it covers all \(\alpha\) at once: Larger \(\alpha\) put more weight on the tail of \(Z\), also the mean gets larger. This means that \(\tau\) in the RDP definition should increase with \(\alpha\) and this is realized by zCDP by setting \(\tau=\rho\alpha\) for all \(\alpha\in(1,\infty)\).
The above discussion leads naturally to the definition of \((\rho,\omega)\)-tCDP [6]: A mechanism is \(\omega\)-truncated \(\rho\)-CDP if it satisfies (5) only for \(\alpha\in(1,\omega)\). tCDP requires \(Z\) to be subgaussian near the origin (like zCDP), i.e., \(\mathtt{Pr}[Z>t+\rho]<e^{-t^{2}/(4\rho)}\) for all \(0\leq t\leq 2\rho(\omega-1)\), but only subexponential in \(Z\)'s tail, i.e., we get the weaker subexponential tail bound \(\mathtt{Pr}[Z>t+\rho]\leq e^{(\omega-1)^{2}\rho}e^{-(\omega-1)t}\). This relaxes zCDP while still obtaining interpretable DP guarantees for composition and group privacy, and also subsampling.
The main concern with each of the divergence based DP measures is a lack of transparency of how the attacker can best distinguish the hypotheses \(o\sim\mathcal{M}(d)\) and \(o\sim\mathcal{M}(d^{\prime})\). The next section introduces the \(f\)-DP framework which provides a hypothesis testing based approach. It introduces trade-off functions that capture all the information needed for fully characterizing privacy leakage; a trade-off function can be used to derive any divergence based DP guarantee like the ones discussed above (but not the other way around), see Appendix B in [13]. Rather than extracting a divergence based DP guarantee from a trade-off function for DP-SGD, we will keep the trade-off function itself as it turns out to have a simple form with an easy transparent interpretation.
## 4 Gaussian Differential Privacy
Dong et al. [13] introduced the state-of-the-art DP formulation based on hypothesis testing. From the attacker's perspective, it is natural to formulate the problem of distinguishing two neighboring data sets \(d\) and \(d^{\prime}\) based on the output of a DP mechanism \(\mathcal{M}\) as a hypothesis testing problem:
\[H_{0}:\text{ the underlying data set is }d\quad\text{ versus }\quad H_{1}:\text{ the underlying data set is }d^{\prime}.\]
Here, neighboring means that either \(|d\setminus d^{\prime}|=1\) or \(|d^{\prime}\setminus d|=1\). More precisely, in the context of mechanism \(\mathcal{M}\), \(\mathcal{M}(d)\) and \(\mathcal{M}(d^{\prime})\) take as input representations \(r\) and \(r^{\prime}\) of data sets \(d\) and \(d^{\prime}\) which are 'neighbors.' The representations are mappings from a set of indices to data samples with the property that if \(r(i)\in d\cap d^{\prime}\) or \(r^{\prime}(i)\in d\cap d^{\prime}\), then \(r(i)=r^{\prime}(i)\). This means that the mapping from indices to data samples in \(d\cap d^{\prime}\) is the same for the representation of \(d\) and the representation of \(d^{\prime}\). In other words the mapping from indices to data samples for \(d\) and \(d^{\prime}\) only differ for indices corresponding to the differentiating data samples in \((d\setminus d^{\prime})\cup(d^{\prime}\setminus d)\). In this sense the two mappings (data set representations) are neighbors.
We define the Type I and Type II errors by
\[\alpha_{\phi}=\mathbb{E}_{o\sim\mathcal{M}(d)}[\phi(o)]\text{ and }\beta_{\phi}=1- \mathbb{E}_{o\sim\mathcal{M}(d^{\prime})}[\phi(o)],\]
where \(\phi\) in \([0,1]\) denotes the rejection rule which takes the output of the DP mechanism as input. We flip a coin and reject the null hypothesis with probability \(\phi\). The optimal trade-off between Type I and Type II errors is given by the trade-off function
\[T(\mathcal{M}(d),\mathcal{M}(d^{\prime}))(\alpha)=\inf_{\phi}\{\beta_{\phi}\;: \;\alpha_{\phi}\leq\alpha\},\]
for \(\alpha\in[0,1]\), where the infimum is taken over all measurable rejection rules \(\phi\). If the two hypotheses are fully indistinguishable, then this leads to the trade-off function \(1-\alpha\). We say a function \(f\in[0,1]\rightarrow[0,1]\) is a trade-off function if and only if it is convex, continuous, non-increasing, and \(0\leq f(x)\leq 1-x\) for \(x\in[0,1]\).
We define a mechanism \(\mathcal{M}\) to be \(f\)-DP if \(f\) is a trade-off function and
\[T(\mathcal{M}(d),\mathcal{M}(d^{\prime}))\geq f\]
for all neighboring \(d\) and \(d^{\prime}\). Proposition 2.5 in [13] is an adaptation of a result in [59] and states that a mechanism is \((\epsilon,\delta)\)-DP if and only if the mechanism is \(f_{e,\delta}\)-DP, where
\[f_{e,\delta}(\alpha)=\min\{0,1-\delta-e^{\epsilon}\alpha,(1-\delta-\alpha)e^{- \epsilon}\}.\]
We see that \(f\)-DP has the \((\epsilon,\delta)\)-DP formulation as a special case. It turns out that the original DP-SGD algorithm can be tightly analysed by using \(f\)-DP.
### Gaussian DP
In order to proceed, [13] first defines Gaussian DP as another special case of \(f\)-DP as follows: We define the trade-off function
\[G_{\mu}(\alpha)=T(\mathcal{N}(0,1),\mathcal{N}(\mu,1))(\alpha)=\Phi(\Phi^{-1 }(1-\alpha)-\mu),\]
where \(\Phi\) is the standard normal cumulative distribution of \(\mathcal{N}(0,1)\). We define a mechanism to be \(\mu\)-Gaussian DP if it is \(G_{\mu}\)-DP. Corollary 2.13 in [13] shows that a mechanism is \(\mu\)-Gaussian DP if and only if it is \((\epsilon,\delta(\epsilon))\)-DP for all \(\epsilon\geq 0\), where
\[\delta(\epsilon)=\Phi(-\frac{\epsilon}{\mu}+\frac{\mu}{2})-e^{\epsilon}\Phi( -\frac{\epsilon}{\mu}-\frac{\mu}{2}). \tag{6}\]
Suppose that a mechanism \(\mathcal{M}(d)\) computes some function \(u(d)\in\mathbb{R}^{n}\) and adds Gaussian noise \(\mathcal{N}(0,(c\sigma)^{2}\mathbf{I})\), that is, the mechanism outputs \(o\sim u(d)+\mathcal{N}(0,(c\sigma)^{2}\mathbf{I})\). Suppose that \(c\) denotes the sensitivity of function \(u(\cdot)\), that is,
\[\|u(d)-u(d^{\prime})\|\leq c\]
for neighboring \(d\) and \(d^{\prime}\); the mechanism corresponding to one round update in Algorithm 1 has _sensitivity_\(c=2C\). After projecting the observed \(o\) onto the line that connects \(u(d)\) and \(u(d^{\prime})\) and after normalizing by dividing by \(c\), we have that differentiating whether \(o\) corresponds to \(d\) or \(d^{\prime}\) is in the best case for the adversary (i.e., \(\|u(d)-u(d^{\prime})\|=c\)) equivalent to differentiating whether a received output is from \(\mathcal{N}(0,\sigma^{2})\) or from \(\mathcal{N}(1,\sigma^{2})\). Or, equivalently, from \(\mathcal{N}(0,1)\) or from \(\mathcal{N}(1/\sigma,1)\). This is how the Gaussian trade-off function \(G_{\sigma^{-1}}\) comes into the picture.
### Subsampling
Besides implementing Gaussian noise, DP-SGD also uses sub-sampling: For a data set \(d\) of \(N\) samples, \(\texttt{Sample}_{m}(d)\) selects a subset of size \(m\) from \(d\) uniformly at random. We define convex combinations
\[f_{p}(\alpha)=pf(\alpha)+(1-p)(1-\alpha)\]
with corresponding \(p\)-sampling operator
\[C_{p}(f)=\min\{f_{p},f_{p}^{-1}\}^{**},\]
where the conjugate \(h^{*}\) of a function \(h\) is defined as
\[h^{*}(y)=\sup_{x}\{yx-h(x)\}\]
and the inverse \(h^{-1}\) of a trade-off function \(h\) is defined as
\[h^{-1}(\alpha)=\inf\{t\in[0,1]\mid h(t)\leq\alpha\} \tag{7}\]
and is itself a trade-off function (as an example, we notice that \(G_{\mu}=G_{\mu}^{-1}\) and we say \(G_{\mu}\) is symmetric). Theorem 4.2 in [13] shows that if a mechanism \(\mathcal{M}\) on data sets of size \(N\) is \(f\)-DP, then the subsampled mechanism \(\mathcal{M}\circ\texttt{Sample}_{m}\) is \(C_{m/N}(f)\)-DP.
The intuition behind operator \(C_{p}\) is as follows. First, \(\texttt{Sample}_{m}(d)\) samples the differentiating element between \(d\) and \(d^{\prime}\) with probability \(p\). In this case the computations \(\mathcal{M}\circ\texttt{Sample}_{m}(d)\) and \(\mathcal{M}\circ\texttt{Sample}_{m}(d^{\prime})\) are different and hypothesis testing is possible with trade-off function \(f(\alpha)\). With probability \(1-p\) no hypothesis testing is possible and we have trade-off function \(1-\alpha\). This leads to the convex combination \(f_{p}\).
Second, we notice if \(h=T(\mathcal{M}(d),\mathcal{M}(d^{\prime}))\), then \(h^{-1}=T(\mathcal{M}(d^{\prime}),\mathcal{M}(d))\). Therefore, if \(\mathcal{M}\) is \(f\)-DP (which holds for all pairs of neighboring data sets, in particular, for the pairs \((d,d^{\prime})\) and \((d^{\prime},d)\)), then both \(h\geq f\) and \(h^{-1}\geq f\) and we have a symmetric upper bound \(\min\{h,h^{-1}\}\geq f\). Since \(f\) is a trade-off function, \(f\) is convex and we can compute a tighter upper bound: \(f\) is at most the largest convex function \(\leq\min\{h,h^{-1}\}\), which is equal to the double conjugate \(\min\{h,h^{-1}\}^{**}\). From this we obtain the definition of operator \(C_{p}\).
### Composition
The tensor product \(f\otimes h\) for trade-off functions \(f=T(P,Q)\) and \(h=T(P^{\prime},Q^{\prime})\) is well-defined by
\[f\otimes h=T(P\times P^{\prime},Q\times Q^{\prime}).\]
Let \(y_{i}\leftarrow\mathcal{M}_{i}(\text{aux},d)\) with \(\text{aux}=(y_{1},\ldots,y_{i-1})\). Theorem 3.2 in [13] shows that if \(\mathcal{M}_{i}(\text{aux},.)\) is \(f_{i}\)-DP for all aux, then the composed mechanism \(\mathcal{M}\), which applies \(\mathcal{M}_{i}\) in sequential order from \(i=1\) to \(i=T\), is \((f_{1}\otimes\ldots\otimes f_{T})\)-DP. The tensor product is commutative.
As a special case Corollary 3.3 in [13] states that composition of multiple Gaussian operators \(G_{\mu_{i}}\) results in \(G_{\mu}\) where
\[\mu=\sqrt{\sum_{i}\mu_{i}^{2}}.\]
### Tight Analysis DP-SGD
We are now able to formulate the differential privacy guarantee of original DP-SGD since it is a composition of subsampled Gaussian DP mechanisms. Theorem 5.1 in [13] states that DP-SGD as introduced in [1] is
\[C_{m/N}(G_{\sigma^{-1}})^{\otimes T}\text{-DP},\]
where \(T=(N/m)\cdot E\) is the total number of local rounds. Since each of the theorems and results from [13] enumerated above are exact, we have a tight analysis. This leads in [64] to a (tight) differential privacy accountant (using complex characteristic functions for each of the two hypotheses based on taking Fourier transforms), which can be used by a client to keep track of its current DP guarantee and to understand when to stop helping the server to learn a global model. Because the accountant is tight, it improves over the momentum accountant method of [1].
### Strong Adversarial Model
We assume an adversary who knows the differentiating samples in \(d\setminus d^{\prime}\) and \(d^{\prime}\setminus d\), but who a-priori (before mechanism \(\mathcal{M}\) is executed) may only know (besides say a 99% characterization of \(d\cap d^{\prime}\)) an estimate of the number of samples in the intersection of \(d\) and \(d^{\prime}\), i.e., the adversary knows \(|d\cap d^{\prime}|+noise\) where the noise is large enough to yield a'sufficiently strong' DP guarantee with respect to the size of the used data set (\(d\) or \(d^{\prime}\)). Since \(\mathcal{M}\) does not directly reveal the size of the used data set, we assume (as in prior literature) that the effect of \(N=|d|\neq N^{\prime}=|d^{\prime}|\) contributes at most a very small amount of privacy leakage, sufficiently small to be discarded in our DP analysis: That is, we may as well assume \(N=N^{\prime}\) in our DP analysis.
In this setting of \(N=N^{\prime}\) the DP analysis in prior work considers an adversary who can mimic mechanism \(\mathcal{M}\circ\textsf{Sample}_{m}\) in that it can replay into large extent how \(\textsf{Sample}_{m}\) samples the used data set (\(d\) or \(d^{\prime}\)): We say a round has \(k\) differentiating data samples if \(\textsf{Sample}_{m}\) sampled a subset of indices which contains exactly \(k\) indices of differentiating data samples from \((d\setminus d^{\prime})\cup(d^{\prime}\setminus d)\). The adversary knows how \(\textsf{Sample}_{m}\) operates and can derive a joint probability distribution \(\mathbb{P}\) of the number of differentiating data samples for each round within the sequence of rounds that define the series of epochs during which updates are computed. We consider two types of strong adversaries in our proofs when bounding trade-off functions:
Adversary \(\mathcal{A}_{0}\) does not know the exact instance drawn from \(\mathbb{P}\) but who is, in the DP proof, given the ability to realize for each round the trade-off function \(f_{k}(\alpha)\) that corresponds to hypothesis testing between \(\mathcal{M}\circ\textsf{Sample}_{m}(d)\) and \(\mathcal{M}\circ\textsf{Sample}_{m}(d^{\prime})\) if \(\textsf{Sample}_{m}\) has selected \(k\) differentiating samples in that round. Adversary \(\mathcal{A}_{0}\) in the DP analysis that characterizes \(f_{k}(\alpha)\) is given knowledge about the mapping from indices to values in \(d\) or \(d^{\prime}\). Here (as discussed before), the mapping from indices to values in \(d\cap d^{\prime}\) is the same for the mapping from indices to values in \(d\) and the mapping from indices to values in \(d^{\prime}\). Furthermore, the adversary can replay how \(\textsf{Sample}_{m}\) samples a subset of \(m\) indices from++\(\{1,\ldots,N=N^{\prime}\}\), and it knows all the randomness used by \(\mathcal{M}\) before \(\mathcal{M}\) adds Gaussian noise for differential privacy (this includes when and how the interrupt service routine overwrites the local model). This strong adversary represents a worst-case scenario for the 'defender' when analyzing the differential privacy of a single round. For DP-SGD this analysis for neighboring data sets leads to the argument of Section 4.2 where with probability \(p\) (i.e., \(k=1\)) the adversary can achieve trade-off function \(f(\alpha)\) and with probability \(1-p\) (i.e., \(k=0\)) can achieve trade-off function \(1-\alpha\) leading ultimately to operator \(C_{p}\). This in turn leads to the trade-off function \(C_{m/N}(G_{\sigma^{-1}})^{\otimes T}\) with
\(p=m/N\), which is _tight for adversary_\(\mathcal{A}_{0}\). We notice that adversary \(\mathcal{A}_{0}\) is used in DP analysis of current literature including the moment accountant method of [1] for analysing \((\epsilon,\delta)\)-DP and analysis of divergence based DP measures.
In the DP analysis adversary \(\mathcal{A}_{0}\) is given knowledge about the number \(k\) of differentiating samples when analysing a single round. That is, it is given an instance of \(\mathbb{P}\) projected on a single round. We notice that in expectation the sensitivity (see Section 4.1) of a single round as observed by adversary \(\mathcal{A}_{0}\) for neighboring data sets is equal to \((1-p)\cdot 0+p\cdot 2C=(m/N)\cdot 2C\) and this gives rise to an 'expected' trade-off function \(G_{1/(\sigma N/m)}\). Composition over \(c^{2}(N/m)^{2}\) rounds gives \(G_{1/\sigma}\). This leads us to believe that \(C_{m/N}(G_{\sigma^{-1}})^{\otimes T}\) converges to \(G_{c\cdot h(\sigma)}\) for \(T=c^{2}(N/m)^{2}\to\infty\) (or, equivalently, \(\sqrt{T}\cdot m/N=c\) with \(T\to\infty\) and \(N\to\infty\)) where \(h(\sigma)\) is some function that only depends on \(\sigma\). This intuition is confirmed by Corollary 5.4 in [13], and is also indirectly confirmed by [56] which shows that DP-SGD is \((\epsilon,\delta)\)-DP for \(\sigma=\sqrt{2(\epsilon+\ln(1/\delta))/\epsilon}\) for a wide range of parameter settings \(N,m,T\) with \(T\) at most \(\approx\epsilon(N/m)^{2}/2\), and which matches Corollary 5.4 in [13] in that the upper bound on \(T\) can at most be a constant factor \(\approx 8\) larger (without violating the corollary).
We define the second type of adversary \(\mathcal{A}_{1}\) (first introduced in [57]) as one who has knowledge about a full instance of \(\mathbb{P}\), not just a projection of \(\mathbb{P}\) on a single round as for adversary \(\mathcal{A}_{0}\). This allows a DP analysis that into some extent computes a convex combination of trade-off functions that each characterize all the rounds together as described by an instance of \(\mathbb{P}\). This gives adversary \(\mathcal{A}_{1}\) more information and the resulting DP guarantees should be weaker compared to the analysis based on adversary \(\mathcal{A}_{0}\) (because adversary \(\mathcal{A}_{1}\) considers a more worst-case leakage scenario). Since each epoch has \(N/m\) rounds and since \(m/N\) is equal to the probability that \(\texttt{Sample}_{m}\) selects a differentiating sample in a round when considering neighboring data sets, we have that a single epoch of \(N/m\) rounds has in expectation exactly one round with one differentiating sample while all other rounds only use non-differentiating samples. This is a composition of \(G_{\sigma^{-1}}\) with \(N/m-1\) times \(G_{0}\). We have that the trade-off function for \(c^{2}\cdot(N/m)\) rounds has in expectation a trade-off function \(G_{c/\sigma}\). This shows convergence to some \(G_{c\cdot h(\sigma)}\) for \(T=c^{2}\cdot(N/m)\to\infty\) rounds. This is indeed a weaker statement compared to the one for adversary \(\mathcal{A}_{0}\). A detailed discussion about DP guarantees resulting from an analysis based on adversary \(\mathcal{A}_{1}\) is given in Section 4.7 (and turns out valuable as it adds more insight).
Clearly, a weaker (than \(\mathcal{A}_{0}\)) adversary with less capability (less knowledge of the used randomness by \(\texttt{Sample}_{m}\) and \(\mathcal{M}\)) achieves a trade-off function \(\geq C_{m/N}(G_{\sigma^{-1}})^{\otimes T}\) closer to \(1-\alpha\). It remains an open problem to characterize realistic weaker adversaries that lead to larger (lower bounds of) trade-off functions.
### Group Privacy
Theorem 2.14 in [13] analyzes how privacy degrades if \(d\) and \(d^{\prime}\) do not differ in just one sample, but differ in \(g\) samples. If a mechanism is \(f\)-DP, then it is
\[[1-(1-f)^{\circ g}]\text{-DP}\]
for groups of size \(g\) (where \(\circ g\) denotes the \(g\)-fold iterative composition of function \(1-f\), where \(1\) denotes the constant integer value \(1\) and not the identity function, i.e., \((1-f)(\alpha)=1-f(\alpha)\)). This is a tight statement in that _there exist_\(f\) such that the trade-off function for groups of size \(g\) cannot be bounded better. In particular, for \(f=G_{\mu}\) we have \(G_{g\mu}\)-DP for groups of size \(g\).
The intuition behind the \([1-(1-f)^{\circ g}]\)-DP result is that the adversary can create a sequence of data sets \(d_{0}=d\), \(d_{1}\),..., \(d_{g-1}\), \(d_{g}=d^{\prime}\) such that each two consecutive data sets \(d_{i}\) and \(d_{i+1}\) are neighboring. We know that \(T(\mathcal{M}(d_{i}),\mathcal{M}(d_{i+1}))\geq f\). For each rejection rule we may plot a point (in x and y coordinates)
\[(\mathbb{E}_{\o\sim\mathcal{M}(d_{i})}[\phi(o)],\ \mathbb{E}_{\o\sim\mathcal{M}(d_{i+1} )}[\phi(o)]).\]
Since \(f(\alpha)\) is a lower bound on the Type I vs Type II error curve, the resulting collection of points is upper bounded by the curve \(1-f(\alpha)\). We have that \(\alpha=\mathbb{E}_{\o\sim\mathcal{M}(d_{i})}[\phi(o)]\) is mapped to
\[\mathbb{E}_{\o\sim\mathcal{M}(d_{i+1})}[\phi(o)]\leq 1-f(\alpha)=(1-f)(\alpha).\]
By transitivity, we have that \(\alpha=\mathbb{E}_{\o\sim\mathcal{M}(d=d_{0})}[\phi(o)]\) is mapped to
\[\mathbb{E}_{\o\sim\mathcal{M}(d^{\prime}=d_{g})}[\phi(o)]\leq(1-f)^{\circ g}( \alpha).\]
This yields the lower bound
\[T(\mathcal{M}(d),\mathcal{M}(d^{\prime}))\geq 1-(1-f)^{\circ g}\]
on the Type I vs Type II error curve.
Let \(\phi[\alpha]\) denote a rejection rule that realizes the mapping from
\[\alpha=\mathbb{E}_{\o\sim\mathcal{M}(d_{i})}[\phi[\alpha](o)]\ \ \text{to}\ \ (1-f)(\alpha)=\mathbb{E}_{\o\sim\mathcal{M}(d_{i+1})}[\phi[\alpha](o)].\]
Then the mapping from \((1-f)^{\circ i}(\alpha)=\mathbb{E}_{o\sim\mathcal{M}(d_{i})}[\phi(o)]\) to \((1-f)^{\circ(i+1)}(\alpha)=\mathbb{E}_{o\sim\mathcal{M}(d_{i+1})}[\phi(o)]\) is realized by \(\phi=\phi[(1-f)^{\circ i}(\alpha)]\). This shows that the lower bound \(1-(1-f)^{\circ g}\) is tight only if we can choose all \(\phi[(1-f)^{\circ i}(\alpha)]\) equal to one another. This is not the case for DP-SGD for which it turns out that this lower bound is not tight at all; rather than a multiplicative factor \(g\) as in the mentioned \(G_{g\mu}\)-DP guarantee we have a \(\sqrt{g}\) dependency for adversary \(\mathcal{A}_{1}\)[57] (and this should also hold for the seemingly weaker adversary \(\mathcal{A}_{0}\)). This is done by considering how, due to sub-sampling, the \(g\) differentiating samples are distributed across all the rounds within an epoch and how composition of trade-off functions across rounds yields the \(\sqrt{g}\) dependency.
### DP-SGD's Trade-Off Function
Assuming adversary \(\mathcal{A}_{1}\), recent work [57] shows that for \(g=1\), DP-SGD is \(h\)-DP for
\[h\approx G_{\sqrt{(1+1/\sqrt{2E})E}/\sigma}\]
and if DP-SGD is \(h\)-DP, then it is upper bounded by \(h\leq\bar{h}\) with
\[\bar{h}\approx G_{\sqrt{(1-1/\sqrt{2E})E}/\sigma}.\]
The approximations become tight if \(e^{-E}\) tends to zero. Notice that we do not need to compute the subsampling operator (which is only specific to adversary \(\mathcal{A}_{0}\)) and composition tensor in order to get a good approximation. The approximation is easy to interpret as it behaves like \(G_{\sqrt{E}/\sigma}\) as opposed to general \(f\)-DP theory which has, cited from [13], "the disadvantage is that the expressions it yields are more unwieldy: they are computer evaluable, so usable in implementations, but do not admit simple closed form."
We remind the reader that we can directly infer \((\epsilon,\delta)\)-DP guarantees from (6); function \(\delta(\epsilon)\) turns out to be completely independent from the data set size \(N\), hence, see Section 3.2, setting \(\delta(\epsilon)=1/N\) favorably biases smaller data sets. Appendix B in [13] shows how to infer divergence based DP guarantees. In particular, \(G_{\sqrt{E}/\sigma}\)-DP implies \((\omega,\frac{E}{2\sigma^{2}}\cdot\omega)\)-RDP (Renyi differential privacy) for any \(\omega>1\), hence, we have
\[\frac{E}{2\sigma^{2}}\text{-zCDP}.\]
As noted in [57] the resulting \(G_{\sqrt{E}/\sigma}\)-DP guarantee for individual privacy scales with the square root of the total number \(E\) of epochs, and does not depend on the explicit number of rounds executed within these epochs. In other words, even though more local updates can be observed by the adversary, this turns out not to lead to more privacy leakage. This is because each local update is based on a smaller batch of training data samples which in itself leads to less privacy leakage per local update. We remind the reader about our discussion in Section 2.6 where we show that the role of mini-batch size \(m\) (and, hence, the total number of rounds \(N/m\) per epoch) is implicitly captured in \(\sigma\).
For group privacy with \(g\geq 1\), [57] shows a similar approximate DP guarantee as for \(g=1\) where everywhere \(E\) is substituted by \(gE\); we have that DP-SGD with sampling based on'shuffling' is \(G_{\sqrt{gE}/\sigma}\)-DP for groups of size \(g\). This leads to a \(\sqrt{gE}\) dependency and not a linear dependency in \(g\) (and notice that we obtain \(gE/(2\sigma^{2})\)-zCDP with a linear dependency on \(g\) rather than \(g^{2}\)).
## 5 Future Work
We are still in the midst of bringing DP-SGD to practice where we want to achieve good convergence to and accuracy of the final global model and where we have a strong DP guarantee (the trade-off function should be close to \(1-\alpha\) which represents random guessing between the two hypotheses). Towards finding a good balance between utility and privacy, we discuss a couple future directions in next subsections.
### Using Synthetic Data
One main problem is that local data is used for training models for various learning tasks. Each application of DP-SGD will leak privacy since the local data set is being re-used. One way to control and be in charge of the amount of privacy leakage is to have data samples in local client data expire according to some expiration date (per sample). This is problematic because in our current data economy, data is a valuable asset which we do not want to give a limited lifetime.
In order to cope with this problem, a client may decide to not use its own local data set in each of these DP-SGD instantiations. Instead, differential private GAN [26] modeling can be used to learn a distribution model based on a
local data set that generates synthetic data with a similar distribution. Due to the post-processing lemma, we can freely use the synthetic data in any optimization algorithm and FL approach. This circumvents multiple use of DP-SGD, but requires the design of differential private GAN which produces 'high' quality synthetic data. This is an open problem: GAN modeling is itself a learning task which can use the DP-SGD approach for the discriminator (which is very noise sensitive). Here, we use DP-SGD only once and as soon as a GAN model is learned, it can be published and transmitted to the central server who uses the GAN models from all clients to generate synthetic samples on which it trains a global model for a learning task of its choice. Of course, as a caveat, working with synthetic data may not lead to a global model with good test accuracy on real data. Notice that by using synthetic data we avoid the FL paradigm altogether since the large amounts of data distributed over clients is now compressed into (relatively short transmittable) representations that code GAN models.
In the same line of thinking, if differentially private GAN models do not lead to high quality synthetic data, then we will want to research other general methods for pre-processing local data that filter or hide features that are considered privacy sensitive. This brings us back to the basics of how a membership or inference attack is actually implemented in order to understand what type of information should be filtered out for making reconstruction of certain types of private data hard or unreliable.
### Adaptive Strategies
We need to fine tune parameters and this can be done during DP-SGD's execution: Consecutive segments of multiple rounds may work with their own \(m\), \(\sigma\), and \(C\). Into what extent does an adaptive approach work, where the current convergence rate and test accuracy (preferably based on public data at the server so as not to leak additional privacy) of the current global model is used to determine \((m,\sigma,C)\) for the next segment?
In Section 2.6 we discussed the benefit of adaptive reducing the clipping constant \(C\) (based on prior rounds or based on using a DP approach within a round to collect information that influences the choice of the used \(C\) in that round). Similarly, since a smaller \(\sigma\) directly influences the amount of noise added to the global model and therefore influences its accuracy negatively, it makes sense to reduce \(\sigma\) once convergence has been achieved. After reducing \(\sigma\), new convergence to an improved global model may start. The problem is that a lower \(\sigma\) leads to more privacy leakage. For this reason we want to lower \(\sigma\) to a smaller \(\hat{\sigma}\) only for e.g. the final epoch; this yields for adversary \(\mathcal{A}_{1}\) a trade-off function approximately equal to
\[G_{\sqrt{g(E-1)}/\sigma}\otimes G_{\sqrt{g}/\hat{\sigma}}=G_{\sqrt{g}\sqrt{(E- 1)/\sigma^{2}+1/\hat{\sigma}^{2}}}.\]
We may decide to choose a significantly smaller \(\hat{\sigma}=\sigma/\sqrt{E+1}\) which yields \(G_{\sqrt{2gE}/\sigma}\)-DP, sacrificing a factor \(\sqrt{2}\) in the differential privacy guarantee. The significantly smaller \(\hat{\sigma}\) will likely improve the accuracy of the final global model during the last epoch. Notice that, when adapting \(C\) and \(\sigma\), it also makes sense to fine tune \(m\) accordingly as the sensitivity to a lack of information dispersal may reduce for smaller \(C\) and \(\sigma\).
We may also modify the noise distribution: DP-SGD selects noise \(N\) from a Gaussian distribution. before adding \(N\) to the round update, we may replace \(N\) by \(a\cdot\texttt{arsinh}(a^{-1}\cdot N)\) where \(\texttt{arsinh}(x)=\ln(x+\sqrt{x^{2}+1})\) as suggested for tCDP [6]. The result resembles the same Gaussian but with exponentially faster tail decay and this may help improving the convergence to and accuracy of the final global model. Here, we notice that for the same reason of faster tail decay, DP-SGD chooses to use Gaussian noise over Laplace noise.
Finally, DP-SGD can be placed in a larger algorithmic framework with DP guarantees for a more general clipping strategy (including 'batch clipping') which allows more general optimization algorithms (beyond mini-batch SGD), and more general sampling strategies (in particular a sampling strategy based on'shuffling') [57].
It remains an open problem to unveil adaptive strategies possibly in a more general algorithmic framework that optimally balance utility and differential privacy. Here we prefer to discover adaptive strategies that proactively provide the DP guarantee based on changed parameter settings, i.e., we do not want to change parameters based solely on utility and discover later (by using a differential privacy accountant) that this has violated or is about to violate our privacy budget.
### DP Proof: A Weaker Adversarial Model
Section 4.5 explains strong adversarial models used in the DP analysis under which the derived DP guarantee is tight. In practice, this is too strong. In general, we may assume a weaker adversary with less capability in terms of knowledge about the used randomness by \(\texttt{Sample}_{m}\) and \(\mathcal{M}\). By explicitly stating the knowledge of a weaker adversary in combination with assumptions on the data set itself, we may be able to derive an \(f\)-DP guarantee with \(f(\alpha)\) closer to \(1-\alpha\). It remains an open problem to exploit such a line of thinking.
### Computing Environment with Less Adversarial Capabilities
In order to impose restrictions on adversarial capabilities we may be able to use confidential computing techniques such as secure processor technology [11], homomorphic computing with secret sharing and/or secure Multi-Party Computation (MPC) [32], and possibly even hardware accelerated fully homomorphic encryption [24, 22]; for a survey see [31, 35]. These techniques hide round updates in encrypted form. Hence, only the final global model itself (if it is published) or querying the final global model (if it is kept private) can leak information about how local data sets shaped the final model. This means that CDP, see Section 3, is still needed. CDP has a better trade-off between privacy and utility compared to LDP as discussed in this chapter. However, confidential computing does not come for free: Either we need to assume a larger Trusted Computing Base (TCB) in the form of trusted hardware modules or processors at the clients, intermediate aggregators, and server or we need a Trusted Third Party (TTP). E.g., in the secure MPC solution of [32] the generation of Beaver triples is outsourced to a TTP otherwise impractical additional communication among clients and server is needed (for an oblivious transfer phase in MPC). We are still studying balanced and practical combinations of confidential computing techniques including the use of differential privacy.
|
2310.13232 | Interaction Screening and Pseudolikelihood Approaches for Tensor
Learning in Ising Models | In this paper, we study two well known methods of Ising structure learning,
namely the pseudolikelihood approach and the interaction screening approach, in
the context of tensor recovery in $k$-spin Ising models. We show that both
these approaches, with proper regularization, retrieve the underlying
hypernetwork structure using a sample size logarithmic in the number of network
nodes, and exponential in the maximum interaction strength and maximum
node-degree. We also track down the exact dependence of the rate of tensor
recovery on the interaction order $k$, that is allowed to grow with the number
of samples and nodes, for both the approaches. We then provide a comparative
discussion of the performance of the two approaches based on simulation
studies, which also demonstrates the exponential dependence of the tensor
recovery rate on the maximum coupling strength. Our tensor recovery methods are
then applied on gene data taken from the Curated Microarray Database (CuMiDa),
where we focus on understanding the important genes related to hepatocellular
carcinoma. | Tianyu Liu, Somabha Mukherjee | 2023-10-20T02:42:32Z | http://arxiv.org/abs/2310.13232v2 | # Interaction screening and pseudolikelihood approaches for tensor learning in Ising models
###### Abstract.
In this paper, we study two well known methods of Ising structure learning, namely the pseudolikelihood approach and the interaction screening approach, in the context of tensor recovery in \(k\)-spin Ising models. We show that both these approaches, with proper regularization, retrieve the underlying hypernetwork structure using a sample size logarithmic in the number of network nodes, and exponential in the maximum interaction strength and maximum node-degree. We also track down the exact dependence of the rate of tensor recovery on the interaction order \(k\), that is allowed to grow with the number of samples and nodes, for both the approaches. Finally, we provide a comparative discussion of the performance of the two approaches based on simulation studies, which also demonstrate the exponential dependence of the tensor recovery rate on the maximum coupling strength.
Key words and phrases:structure learning, Ising model, tensor, interaction screening, pseudolikelihood
## 1. Introduction
The \(k\)-spin Ising model [4, 35, 20, 9, 32] is a discrete exponential family for modeling dependent binary (\(\pm 1\)-valued) data exhibiting multi-body interactions, which can be thought of as taking place along the hyperedges of a \(k\)-uniform hypergraph. It is a generalization of the classical 2-spin Ising model, that was originally introduced by physicists for modelling ferromagnetism [26], and has since then found extensive applications in diverse areas such as social sciences, image processing, computational biology, neural networks, spatial statistics and election forecasting [3, 18, 23, 22, 25, 31, 27, 28]. However, pairwise interactions are often not adequate to capture all the complex dependencies that arise in real-world network structures, such as peer-group effects and multi-atomic interactions on crystal surfaces, which necessitates the use of higher order tensor Ising models for modeling such complex relational frameworks. The multi-body interactions in tensor Ising models are captured by an interaction tensor, which in many practical cases, forms the adjacency tensor of an underlying hypergraph, whose nodes are thought of as being sites for binary-valued variables or spins, that interact with each other along the hyperedges of this hypergraph. A fundamental problem in the area of Ising model learning is to recover this underlying interaction structure given access to multiple samples from this model, on which a significant amount of literature has developed for the classical 2-spin Ising models, during the past two decades (see, for example, [2, 10, 16, 24, 36, 37, 30]). Variants of the complete structure learning problem have also experienced growing interest over the past few years, some related notable works being graph property testing by Neykov et al. [14, 34], identity and
Introduction
In this paper, we study the interaction screening of a single-layer \(k\)-spin Ising model with a large number of random variables. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model, and the model is a generalization of the model to the Ising model. The model is a generalization of the model to the Ising model.
### The Tensor Recovery Problem
The \(k\)-spin or \(k\)-tensor Ising model is a probability distribution on the set \(\{-1,1\}^{p}\), defined as:
\[\mathbb{P}_{\mathbf{J}}(\mathbf{x}):=\frac{1}{Z(\mathbf{J})}e^{H(\mathbf{x})}\quad(\mathbf{x}\in\{-1,1\}^{p}) \tag{1}\]
where \(\mathbf{J}:=((J_{r_{1},...,r_{k}}))_{(r_{1},...,r_{k})\in[p]^{k}}\) denotes a \(k\)-fold tensor with \([p]:=\{1,\ldots,p\}\),
\[H(\mathbf{x}):=\sum_{(r_{1},...,r_{k})\in[p]^{k}}J_{r_{1},...,r_{k}}x_{r_{1}}\ldots x _{r_{k}},\]
and \(Z(\mathbf{J})\) is a normalizing constant required to ensure that \(\mathbb{P}_{\mathbf{J}}\) is a valid probability distribution. Hereafter, we will assume that the tensor \(\mathbf{J}\) satisfies the following properties:
1. \(\mathbf{J}\) is symmetric, i.e., \(J_{r_{1},...,r_{k}}=J_{r_{\sigma(1)},...,r_{\sigma(k)}}\) for every \((r_{1},\ldots,r_{k})\in[p]^{k}\) and every permutation \(\sigma\) of \(\{1,...,k\}\),
2. \(\mathbf{J}\) has zeros on the _diagonals_, i.e., \(\mathbf{J}_{r_{1}...r_{k}}=0\), if \(r_{s}=r_{t}\) for some \(1\leqslant s<t\leqslant k\).
Also, throughout the paper, we will denote the maximum entry of \(\mathbf{J}\) by \(\beta\), i.e.
\[\beta:=\max_{(r_{1},...,r_{k})\in[p]^{k}}J_{r_{1},...,r_{k}}\.\]
The aim of this paper is to recover the unknown tensor \(\mathbf{J}\) given access to samples \(\mathbf{x}^{(1)},\mathbf{x}^{(2)},\ldots,\mathbf{x}^{(n)}\) from this model. Based on the works [30, 38] in the context of Ising matrix recovery, in this paper, we analyze the performance of two approaches, namely the _regularized interaction screening estimator_ and the _regularized pseudolikelihood estimator_ on tensor recovery in higher-order Ising models.
The regularized interaction screening estimator (RISE) of the neighborhood of a particular node \(r\in[p]\) is defined as:
\[\hat{\mathbf{J}}_{r,I}:=\arg\min_{\mathbf{J}_{r}\in\mathbb{R}^{\binom{p-1}{k-1}}} \mathcal{S}(\mathbf{J}_{r};\mathfrak{X}^{n})+\lambda\|\mathbf{J}_{r}\|_{1} \tag{2}\]
where \(\mathbf{J}_{r}:=((J_{r,r_{1},...,r_{k-1}}))_{(r_{1},...,r_{k-1})\in T_{r}}\) and
\[\mathcal{S}(\mathbf{J}_{r};\mathfrak{X}^{n}):=\frac{1}{n}\sum_{i=1}^{n}\exp\left( -kx_{r}^{(i)}m_{r}(\mathbf{x}^{(i)})\right),\]
with \(m_{r}(\mathbf{x}):=\sum_{(r_{1},...,r_{k-1})\in[p]^{k-1}}J_{r,r_{1},...,r_{k-1}}x _{r_{1}}\ldots x_{r_{k-1}}\) and
\[T_{r}:=\{(r_{1},\ldots,r_{k-1})\in([p]\setminus\{r\})^{k-1}:1\leq r_{1}< \ldots<r_{k-1}\leq p\}.\]
Note that the objective function in (3) is convex, and hence, (3) is a convex optimization problem.
On the other hand, the regularized pseudolikelihood estimator (RPLE) of the neighborhood of the node \(r\in[p]\) is defined as:
\[\hat{\mathbf{J}}_{r,P}:=\arg\min_{\mathbf{J}_{r}\in\mathbb{R}^{\binom{p-1}{k-1}}}\ell (\mathbf{J}_{r};\mathfrak{X}^{n})+\lambda\|\mathbf{J}_{r}\|_{1} \tag{3}\]
where
\[\ell(\mathbf{J}_{r};\mathfrak{X}^{n}):=-\frac{1}{n}\sum_{i=1}^{n}\log\mathbb{P}_{\mathbf{J }}(x_{r}^{(i)}|\mathbf{x}_{\setminus r}^{(i)})\]
and \(\mathbf{x}_{\setminus r}^{(i)}:=(x_{t}^{(i)})_{t\neq r}\). A straightforward computation shows that:
\[\mathbb{P}_{\mathbf{J}}(x_{r}|\mathbf{x}_{\setminus r})=\frac{\exp(kx_{r}m_{r}(\mathbf{x} ))}{2\cosh(kx_{r}m_{r}(\mathbf{x}))}\]
and hence, one has:
\[\ell(\mathbf{J}_{r};\mathfrak{X}^{n})=-\frac{1}{n}\sum_{i=1}^{n}\left\{kx_{r}^{(i) }m_{r}(\mathbf{x}^{(i)})-\log\cosh(kx_{r}^{(i)}m_{r}(\mathbf{x}^{(i)}))-\log 2\right\}\]
In this paper, we study the performances of both the tensor-structure learners RISE \(\hat{\mathbf{J}}_{r,I}\) and RPLE \(\hat{\mathbf{J}}_{r,P}\). In Section 2, we prove rates of consistency of the RISE and the RPLE, and show explicit dependence of these rates and the minimum sample size requirement on the maximum coupling strength \(\beta\) and the tensor interaction factor \(k\).
### Related Work on Parametric Inference in Ising Models
A closely related problem is the task of infering the _inverse temperature_ and the _external magnetic field_ parameters from the Ising model:
\[\mathbb{P}(\mathbf{x})\propto e^{\beta\mathbf{x}^{\top}\mathbf{J}\mathbf{x}+h\sum_{i=1}^{n}x _{i}}\quad(\mathbf{x}\in\{-1,1\}^{p})\]
given access to a single sample from such a model. There has been a flurry of works in the past two decades in the area of parametric inference in Ising models, starting with the seminal paper due to [15], who, inspired by [7, 6], first applied the pseudo-likelihood approach for parameter estimation in general spin-glass models, and showed \(\sqrt{p}\)-consistency of the maximum pseudolikelihood estimator in this model with \(h=0\), at low temperatures. This was followed by improved results on the rate of consistency and joint estimation of \((\beta,h)\) in [8] and [21]. Recently, some of these results were also extended to tensor Ising models in [32] and [33]
### Structure of the Paper
The rest of the paper is organized as follows. In Section 2, we state the main results of this paper. We begin by discussing about a general theory of \(\ell^{1}\)-penalized \(M\)-estimators from [1], that will help us prove the main results in this paper. In the same section, we use this general theory to prove theoretical guarantees for the RPLE and the RISE. In Section 3, we provide some simulation studies to demonstrate the comparative performance of these two approaches.
## 2. Main Results
In this section, we state and prove the main results of this paper. To begin with, we sketch a general theory about \(\ell^{1}\)-penalized \(M\)-estimators from [1], that is crucial in proving our main results,
### A general theory of \(\ell^{1}\)-penalized \(M\)-estimators
We use a framework from [1] for general \(\ell^{1}\)-regularized \(M\)-estimators to establish that our method works consistently. It turns out that imposing just the following two conditions on the loss function is enough to control the error of the \(\ell_{1}\)-regularized \(M\)-estimator:
\[\hat{\mathbf{J}}_{r}:=\arg\min_{\mathbf{J}_{r}\in\mathbb{R}^{\binom{p-1}{k-1}}}\mathcal{ L}(\mathbf{J}_{r};\mathfrak{X}^{n})+\lambda\|\mathbf{J}_{r}\|_{1}\]
for a general convex and differentiable loss function \(\mathcal{L}\).
**Condition 1**.: _Let \(\lambda\) be the \(\ell_{1}\)-penalty parameter. Then, the gradient of the objective function at the true neighborhood \(\mathbf{J}_{r}\) satisfies:_
\[2\|\nabla\mathcal{L}(\mathbf{J}_{r};\mathfrak{X}^{n})\|_{\infty}\leqslant\lambda\]
Condition 1 ensures that if the maximum degree of the hypergraph with (weighted) adjacency \(\mathbf{J}\) is \(d\), then the difference \(\eta_{r}:=\hat{\mathbf{J}}_{r}-\mathbf{J}_{r}\) lies within the two-sided cone:
\[K:=\left\{\eta\in\mathbb{R}^{\binom{p-1}{k-1}}\ \middle|\ \|\eta\|_{1} \leqslant 4\|\eta_{S}\|_{1}\right\}, \tag{4}\]
where \(S\) denotes the indices of all non-zero entries of \(\mathbf{J}_{r}\). Note that (4) follows from Lemma 1 and Example 1 in [1].
**Condition 2** (Restricted Strong Convexity).: _There exists \(R>0\) such that for all \(\eta_{r}\in K\) with \(\|\eta_{r}\|_{2}\leqslant R\), there exists a constant \(\kappa>0\), such that_
\[\mathcal{L}(\mathbf{J}_{r}+\eta_{r};\mathfrak{X}^{n})-\mathcal{L}(\mathbf{J}_{r}; \mathfrak{X}^{n})-\langle\nabla\mathcal{L}(\mathbf{J}_{r};\mathfrak{X}^{n}),\eta_ {r}\rangle\geqslant\kappa\|\eta_{r}\|_{2}^{2}.\]
The second condition guarantees that the loss function is strongly convex in a conically restricted neighborhood of \(\mathbb{R}^{\binom{p-1}{k-1}}\). We will verify Conditions 1 and 2 for the RISE and the RPIE later. With these two conditions in hand, we can finally proceed to bound the error of estimation of the neighborhood parameter \(\mathbf{J}_{r}\). The following proposition (Theorem 1 in [1]) serves as the fundamental result behind our main theorem about the theoretical guarantee of the RPIE:
**Proposition 1**.: _Under Conditions 1 and 2 with \(R>3\sqrt{d}\lambda/\kappa\), we have:_
\[\|\hat{\mathbf{J}}_{r}-\mathbf{J}_{r}\|_{2}\leqslant\frac{3\lambda\sqrt{d}}{\kappa}.\]
The following restricted eigenvalue condition on the tensor covariance structure is also necessary for our analysis:
**Assumption 1**.: _Let \(Q:=\mathbb{E}[\mathbf{X}_{\cdot r}\mathbf{X}_{\cdot r}^{\top}]\) where \(\mathbf{X}_{\cdot r}:=(X_{r_{1}}\ldots X_{r_{k-1}})_{(r_{1},\ldots,r_{k-1})\in T_{ r}}\). Assume that there exists a constant \(\alpha>0\), such that:_
\[\inf_{\mathbf{v}\in K\setminus\{\mathbf{0}\}}\frac{\mathbf{v}^{\top}Q\mathbf{v}}{\|\mathbf{v} \|_{2}^{2}}\geq\alpha\.\]
Note that, a sufficient condition for Assumption 1 to hold, is that the minimum eigenvalue of \(Q\) is bounded below by \(\alpha\). With this structure in hand, we are finally ready to prove the theoretical guarantees of the RISE and RPIE.
### The Regularized Pseudo-Likelihood Estimator
In this section, we analyze the performance of the RPLE. The main result about the rate of convergence of the RPLE is stated below:
**Theorem 1**.: _Suppose that \(d\) denotes the maximum degree of the hypernetwork with adjacency \(\mathbf{J}\), and the regularization parameter \(\lambda\) is chosen as:_
\[\lambda:=4\sqrt{2}k!\sqrt{\frac{\log\left(4\binom{p-1}{k-1}/\varepsilon\right) }{n}}.\]
_Then, there exist constants \(M_{1},M_{2}>0\), such that for every node \(r\in V\) and any \(\varepsilon\in(0,1)\), if_
\[n>M_{1}\frac{d^{2}}{\alpha^{2}}\max\left\{e^{4k!\beta d},1\right\}\log\frac{ \binom{p-1}{k-1}}{\varepsilon},\]
_then the following holds with probability at least \(1-\varepsilon\):_
\[\|\hat{\mathbf{J}}_{r,P}-\mathbf{J}_{r}\|\leqslant\frac{M_{2}\sqrt{d}e^{2k!\beta d}}{ \alpha k!}\sqrt{\frac{\log\frac{\binom{p-1}{k-1}}{\varepsilon}}{n}}.\]
Proof.: Let \(\gamma\in(0,1)\) which will be chosen suitably later. In view of Lemma 1, we know that with probability at least \(1-\gamma\varepsilon\), Condition 1 is satisfied with
\[\lambda:=2k!\sqrt{\frac{8\log\left(2\binom{p-1}{k-1}/\gamma\varepsilon\right) }{n}}\.\]
Next, note that for some \(\eta_{r}\in K\), one has by the Cauchy-Schwarz inequality,
\[\|\eta_{r}\|_{1}\leq 4\sqrt{d}\|\eta_{r}\|_{2}\.\]
Define \(R:=c/\sqrt{d}k!\), where \(c\) is a constant to be chosen later. Then, in view of Lemma 4, Condition 2 is satisfied with
\[\kappa:=\frac{\alpha(k!)^{2}e^{-2k!\beta d}}{4(1+4c)}\]
with probability at least \(1-(1-\gamma)\varepsilon\) as long as \(n>2^{11}\frac{d^{2}}{\alpha^{2}}\log\frac{2\binom{p-1}{k-1}^{2}}{(1-\gamma)\varepsilon}\). Now, we can apply Proposition 1 as long as \(\frac{c}{\sqrt{d}k!}>3\frac{\sqrt{d}\lambda}{\kappa}\), which is equivalent to the condition:
\[n>\frac{C_{1}d^{2}}{\alpha^{2}}e^{4k!\beta d}\log\left(\frac{2\binom{p-1}{k-1 }}{\gamma\varepsilon}\right)\]
for some constant \(C_{1}>0\). In that case, we have:
\[\|\hat{\mathbf{J}}_{r,P}-\mathbf{J}_{r}\|\leqslant\frac{L\sqrt{d}e^{2k!\beta d}}{ \alpha k!}\sqrt{\frac{\log\frac{2\binom{p-1}{k-1}}{\gamma\varepsilon}}{n}}\]
for some constant \(L>0\). Theorem 1 now follows, by taking for example, \(\gamma=1/2\) and \(c=1\)
### The Regularized Interaction Screening Estimator
In this section, we analyze the performance of the RISE. The main result about the rate of convergence of the RISE is stated below:
**Theorem 2**.: _Suppose that \(d\) denotes the maximum degree of the hypernetwork with adjacency \(\mathbf{J}\), and the regularization parameter \(\lambda\) is chosen as:_
\[\lambda=2\sqrt{2}k!e^{k!\beta d}\sqrt{\frac{\log\frac{4\left(p-1\right)}{ \varepsilon}}{n}},\]
_Then, there exist constants \(M_{1},M_{2}>0\), such that for every node \(r\in V\) and any \(\varepsilon\in(0,1)\), if_
\[n>M_{1}\frac{d^{2}}{\alpha^{2}}\max\{e^{4k!\beta d},1\}\log\frac{\left(p-1 \right)}{\varepsilon},\]
_the following properties hold with probability at least \(1-\varepsilon\):_
\[\|\hat{\mathbf{J}}_{r,I}-\mathbf{J}_{r}\|\leqslant\frac{M_{2}\sqrt{d}e^{2k!\beta d}}{ \alpha k!}\sqrt{\frac{\log\frac{\left(p-1\right)}{\varepsilon}}{n}}.\]
Proof.: In view of Lemma 5, we know that with probability at least \(1-\frac{\varepsilon}{2}\), Condition 1 is satisfied with
\[\lambda:=2\sqrt{2}k!e^{k!\beta d}\sqrt{\frac{\log\frac{4\left(p-1\right)}{ \varepsilon}}{n}}\]
If we set \(R=\frac{2}{\sqrt{d}k!}\), then in view of Lemma 7, Condition 2 is satisfied with
\[\kappa:=\frac{\alpha(k!)^{2}e^{-k!\beta d}}{20}\]
with probability at least \(1-\frac{\varepsilon}{2}\) as long as \(n>2^{11}\frac{d^{2}}{\alpha^{2}}\log\frac{4\left(p-1\right)^{2}}{\varepsilon}\). Now, we can apply Proposition 1 as long as \(\frac{2}{\sqrt{d}k!}>3\frac{\sqrt{d}\lambda}{\kappa}\), which is equivalent to the condition:
\[n>\frac{Cd^{2}}{\alpha^{2}}e^{4k!\beta d}\log\frac{4\left(p-1\right)}{ \varepsilon}\]
for some constant \(C>0\). In that case, we have:
\[\|\hat{\mathbf{J}}_{r,I}-\mathbf{J}_{r}\|\leqslant\frac{D\sqrt{d}e^{2k!\beta d}}{ \alpha k!}\sqrt{\frac{\log\frac{4\left(p-1\right)}{\varepsilon}}{n}}\]
for some constant \(L>0\), which completes the proof of Theorem 2.
## 3. Simulation Studies
In this section, we present some numerical experiments that illustrate the comparative performance of the two tensor recovery algorithms. The Julia package _GraphicalModel-Learning_ is modified and used to learn the Ising models with the interaction screening and pseudo-likelihood approaches. The performances of the RPLE and RISE are dependent on the regularization coefficient \(\lambda\), which was set as \(\lambda:=c\sqrt{\log(4\binom{p-1}{k-1}/\varepsilon)/n}\), where \(c\) is a constant tuned according to the Bayesian Information Criterion (BIC). To be precise, for each node \(r\), the optimal value of \(\lambda\) is tuned by minimizing the BIC value:
\[\text{BIC}_{r}(\lambda):=\mathcal{L}_{p}(\hat{J}_{r,\lambda};\mathfrak{X}^{n} )+\text{df}(\lambda)\log p\]
over a grid of values of \(\lambda\), where \(\mathcal{L}_{p}\) denotes the objective function, which is \(\mathcal{S}\) for the RISE and \(\ell\) for the RPLE, and \(\hat{J}_{r,\lambda}\) is the estimate corresponding to the penalty parameter \(\lambda\).
Figure 1. Plot of the estimation error (vertical axis) against the sample size \(n\) (horizontal axis). The top-left window corresponds to the case \(\beta=1\), the top-right window to the case \(\beta=1.5\), the bottom-left window to the case \(\beta=2\) and the bottom-right window to the case \(\beta=2.5\).
For our numerical experiments, we work with the Ising model on \(3-\)regular, \(3-\)uniform hypergraphs on \(16\) nodes. In Figure 1, we plot the estimation error of the recovered hypergraphs against the sample size \(n\), for four values of the maximum coupling intensity \(\beta\), namely \(1,1.5,2,2.5\). We observe that the estimation errors are higher for both the RPLE and the RISE for higher values of \(\beta\), as expected. Further, the simulations suggest that the RISE performs better than the RPLE in terms of the estimation error, which leaves open the natural question of whether this better performance of the RISE can be demonstrated theoretically. In Figure 2, we plot the estimation error for a fixed sample size \(n=10^{5}\) against varying maximum coupling strengths \(\beta\). The figure clearly shows an exponentially increasing dependence of the estimation error on \(\beta\), as suggested in Theorems 1 and 2.
|
2310.08174 | Mapping Water on the Moon and Mars using a Muon Tomograph | The search for water on the Lunar and Martian surfaces is a fundamental
aspect of space exploration, contributing to the understanding of the history
and evolution of these celestial bodies. However, the current understanding of
the distribution, concentration, origin, and migration of water on these
surfaces is limited. Moreover, there is a need for more detailed data on these
aspects of Lunar and Martian water. The natural flux of cosmic-ray muons,
capable of penetrating the planetary surface, offers a method to study the
water-ice content, composition, and density of these surfaces. In this paper,
the author presents a novel approach to address these knowledge gaps by
employing cosmic-ray muon detectors and backscattered radiation. The study
describes a cutting-edge muon tracking system developed by GScan and highlights
the results of preliminary simulations conducted using GEANT4. These findings
suggest that muon tomography could be a potential tool for investigating
water-ice content on the Lunar and Martian surfaces, pointing to new avenues
for space science exploration. | Olin Lyod Pinto, Jörg Miikael Tiit | 2023-10-12T10:01:31Z | http://arxiv.org/abs/2310.08174v1 | # Mapping Water on the Moon and Mars using a Muon Tomograph
###### Abstract
The search for water on the Lunar and Martian surfaces is a fundamental aspect of space exploration, contributing to the understanding of the history and evolution of these celestial bodies. However, the current understanding of the distribution, concentration, origin, and migration of water on these surfaces is limited. Moreover, there is a need for more detailed data on these aspects of Lunar and Martian water. The natural flux of cosmic-ray muons, capable of penetrating the planetary surface, offers a method to study the water-ice content, composition, and density of these surfaces. In this paper, the author presents a novel approach to address these knowledge gaps by employing cosmic-ray muon detectors and backscattered radiation. The study describes a cutting-edge muon tracking system developed by GScan and highlights the results of preliminary simulations conducted using GEANT4. These findings suggest that muon tomography could be a potential tool for investigating water-ice content on the Lunar and Martian surfaces, pointing to new avenues for space science exploration.
A 2022
union tomography, machine learning, monte carlo simulations, GEANT4
DOI: 10.31526/JAIS.2022.ID
## 1 Muography in Space
The exploration of celestial bodies beyond Earth has captivated the curiosity of scientists and space enthusiasts for centuries. One fundamental aspect of such exploration is the search for water, a vital resource for sustaining life and enabling future human missions. In recent years, the Moon and Mars have emerged as primary targets for this quest, with extensive efforts to understand the distribution and nature of water on these celestial bodies.
Histocially, the Moon was regarded as a desiccated and arid world without significant water resources. However, a paradigm shift occurred in 2007 when scientific observations hinted at the presence of water in the Lunar mantle. The Lunar Prospector mission, launched in 1998, detected elevated hydrogen concentrations at the Moon's poles, suggesting the existence of water [1]. Subsequent missions, such as Chandrayaan-1 in 2009, confirmed the presence of surface water-ice in certain permanently shadowed craters near the Lunar poles [2]. The LCROSS impact experiment further estimated the water content in the Moon's regolith to be 5.7% by weight [3]. More recently, the Stratospheric Observatory for Infrared Astronomy (SOFIA) detected molecular water in the illuminated regions of the Moon [4]. Despite these exciting findings, the mechanisms responsible for water containment within the Lunar and Martian subsurface and the potential extraction methods remain subjects of ongoing debate and investigation.
Like the Moon, Mars has also been a target for water exploration. Various forms of water, including ice in the polar caps, glaciers at lower latitudes, and subsurface permafrost, have been detected on the Martian surface [5]. These discoveries have sparked interest in understanding the extent and distribution of Martian water resources, as they hold implications for future human colonization and sustained exploration on the Red Planet.
To address the challenges of finding water on the Moon and Mars, novel techniques and instrumentation are being developed. One such method gaining prominence is muon tomography, a non-invasive imaging technique that utilizes cosmic rays to probe the interior structure of planetary bodies. Muons, which are high-energy particles generated by cosmic ray interactions on the Lunar or Martian surface, can penetrate significant depths and may backscatter. By measuring the attenuation and backscattering of muons, we can infer the presence and distribution of water within these celestial bodies.
The motivation for utilizing muon tomography in space exploration lies in its ability to provide valuable insights into the subsurface composition of the Moon and Mars. This non-destructive technique offers the potential to precisely map the distribution of water and other dense materials, allowing us to better understand the geological processes and history of these planetary bodies. Moreover, the knowledge gained from muon tomography studies can aid in identifying regions with higher water concentrations, facilitating the planning of future missions and resource utilization.
From a scientific standpoint, deploying a muon tomography instrument serves many possibilities for various investigations and avenues of research. As an example, drilling into the sub-surface poses insurmountable challenges. However, the problems of drilling could likely be overcome relatively easily by using the natural flux of the cosmic muons (or other charged particles, as well
as neutrons). This approach would allow for a faster, more efficient, and possibly higher accuracy analysis compared to traditional physical sample methods. Furthermore, due to the high penetration capabilities of cosmic ray particles, the muon tomography methods carry great potential for gathering information from deeper layers than drills can reach.
This article aims to present a study using the muon tomography method in space. It discusses the current understanding of Lunar and Martian water resources, the significance of utilizing muon tomography in space exploration, and the potential implications of the findings. This research contributes to a broad understanding of the feasibility of finding water content on planetary surfaces and for future exploration endeavours.
## 2 Detector Technology
The GScan Muon Tracker is based on scintillating fibres, as they are widely used and offer several advantages, making them suitable for a wide variety of applications. The GScan Muon Tracker is a 4-layered prototype hodoscope (3-1 configuration) with 1 mm Saint-Gobain BCF-12 single cladding scintillation fibres [6]. The fibres are arranged as two double-layered fibre mats oriented orthogonally to each other, as shown in 1. This configuration ensures the angular resolution of the system of about one milliradian. The three hodoscope layers in the top part are separated by 75 mm, with the lowest detector plate set 250 mm from the last of the top layers. The configuration provided a 247 mm \(\times\) 247 mm active area for every hodoscope, hence a total volume of interest (VOI) of 247 mm \(\times\) 247 mm \(\times\) 250 mm\({}^{3}\). The data acquisition system consisted of eight CAEN DT5550W boards paired with Ketek (PA3325-WB-0808) and Hamamatsu (S13361-3050AE-08) SiPMs arrays for collecting the scintillation light from the fibres.
It is noteworthy to mention that a comparable strategy is being pursued in the domain of space exploration, specifically with regard to muon tomography. In this context, the approach is distinct, as it exclusively encompasses the utilization of the topmost three detector layers. This refinement is calibrated to align with the unique demands of space-based applications. The table 1 outlines the critical specifications and general requirements envisioned for a muon tomograph tailored for space missions.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Parameter** & **Details** \\ \hline Detector Material & Polysyrene-core scintillating fibres (0.5mm - 1mm diameter) \\ Detector Geometry & total size \(\sim\)1m\({}^{3}\); 10 cm gap between layers, \\ & alternating fiber orientation; minimum of 3 layers \\ Resolution & Position (1mm); Angular (\(\sim\)1 degree); Time (50 -100 ps) \\ Readout Electronics & Radiation-hardened Silicon Photomultipliers (450-550 nm) \\ Support Frame & Aluminium or Titanium \\ Data Processing & Field Programmable Gate Arrays (FPGAs) \\ Power System & Solar panels with Radioisotope Thermoelectric Generators (RTGs) \\ Shielding & Polyethylene for electronics and SiPMs \\ \hline \hline \end{tabular}
\end{table}
Table 1: Specifications and requirements for a muon tomograph designed for a space mission.
Figure 1: a) A photo of the GScan prototype muon tomography consisting of four detector plates (left) in a 3-1 configuration with a volume of interest at the center and b) The assembly of a detector plate. Figure is taken from [6]
## 3 Methodology
The core methodology of this study is hinged on evaluating the potential of utilising upward-travelling muons, in coordination with a strategically-positioned GScan tracker on the planetary surface, to quantify water content in the soil. At the heart of this methodology lies the unique behaviour of muons, which are generated through interactions between cosmic rays and the planetary surface. These highly energetic particles possess the ability to penetrate deep into geological formations, thus serving as instrumental probes for the identification and subsequent in-depth characterisation of underlying structures. Concurrently, the GScan tracker, specifically engineered for muon tracking, functions as an essential apparatus for this study.
The GScan muon tracker is reconfigured, incorporating a top hodoscope. This methodology also emphasises the critical importance of grazing muons, which, due to their shallow interaction angles with subsurface formations, act as sources of backscattered particles, causing scattering events and directional alterations. In order to compile statistically significant data regarding upward muon flux, integrating a large-scale detector is essential. Optimally reflecting the dimensions of the Lunar or Martian terrain under examination, this expansive detector ensures the collection of a considerable number of upward-travelling muons and their interactions with subsurface geological entities. This methodological approach minimises statistical uncertainties, enhancing the overall quality of precision, accuracy, and dependability of the outcomes derived from muon tracking experiments.
The focus of this study extends to specific variables such as hit energy, time, and scattering angle distributions. Hit energy is defined as the energy deposited by the particle in the sensitive volume of the detector; in this case, it is a scintillating plate. Hit time is the time recorded between the detector planes are t\({}_{3}\) - t\({}_{2}\) \(>\) 0 and t\({}_{2}\) - t\({}_{1}\)\(>\) 0. The scattering angle is obtained from the scattering angle \(\theta\) between a reference direction vector (A) and a momentum direction vector (B) is:
\[\theta=\frac{arccos(A\cdot B)}{|A|*|B|} \tag{1}\]
A thorough examination is conducted through a comparative investigation of these distributions across Lunar and Martian surfaces and further encompassing conditions involving dry soil and underlying frozen lake (hereby referred to as "Rock" and "Ice").
## 4 Simulations
The preliminary simulation aimed to establish a fundamental understanding of the potential shortcomings and the overall feasibility of the initially chosen simulation concept. The simulation was performed using the GEANT4 framework [13] to simulate a simplified Lunar/ Martian environment and the passage of particles through both celestial bodies. GEANT4's QGSP_BERT physics list was used to model the interactions of particles with matter.
For the simulation, three scintillator plates (top hodoscope), each with a thickness of 0.2 cm, were used as the active material for particle detection, with the plates placed 10 cm apart to capture the trajectories effectively - the distance and thickness were chosen to replicate the GScan's muon tomography setup. The use of a large-scale detector, matching the diameter of the Moon/Mars, was essential to obtain statistically significant data, improving accuracy and reliability. This large-scale detector enables capturing a substantial number of upward-travelling particles and their interactions with subsurface geological formations. Moreover, it is considered an effective area, therefore, it reduces the amount of computational time significantly. However, the implications of this choice, such as how much exposure time would actually be needed with a realistically sized detector as mentioned in table 1 to achieve enough statistics, must be considered and warrant further investigation. The visualisation of the Moon and Mars model and the detector can be seen in Figure 2.
As the cosmic ray source (Galactic Cosmic Rays, or GCR), the EcoMug model [7] was employed, which approximates the primary cosmic ray flux based on the defined parameters. The composition of the cosmic ray flux was assumed to consist of 85% protons and the remaining 15% of alpha particles - reflecting the average composition observed in cosmic ray measurements. The zenith angle (angle of approach to the Lunar/ Martian surface) was limited to 75 degrees to obtain an increased amount of grazing particles from the surface. The source height was set at 2000 km and 4000 km from the surface of the Moon and Mars. The energy range of the simulated particles spanned from 1 GeV to 100 TeV, covering a broad spectrum of commonly observed energies in cosmic ray measurements [8] - energies beyond 100 TeV are not simulated due to limitations from GEANT4 particle interactions.
The Moon and Mars were modelled as simple geometrical spheres with radii of 1737.4 km and 3390 km, respectively. To accurately depict the surface variations of these two planetary environments, each spherical body was divided into three distinct layers: the crust, mantle, and core. The regolith (topmost surface) for the Moon and Mars was approximately 5 meters thick. Though the Martian regolith was also considered, its specific thickness was modelled according to observed data [9, 10]. The intention behind simulating these diverse layers was to analyse the interactions of particles with the various materials found in these celestial bodies and consequently, to explore the potential for detecting water in different regions. A realistic portrayal of muon production was achieved, with the majority occurring within 1 meter of both the Lunar and Martian soil.
The chemical composition for simulating the Lunar and Martian soil is shown in figure 3, and the simulations included two scenarios - the "dry" Lunar surface (Anorthite Rock) and a case in which a frozen lake 7 km deep at the top of the planetary surface. As the parameters of interest, 5-dimensional information (position in the \(x\), \(y\), \(z\) directions, energy (E), and time (t)) and scattering angle are captured in addition to recording the particle types as the ground truth information. The collected information is initially analysed using the ROOT framework [11], following which further analysis can be performed in Python. The primary goal is to differentiate the backscattered particles from those of forward-travelling and characterise the parametrical differences between the backscattered particles originating from the dry Lunar/Martian surface and those from the frozen lake scenario. Because the particles interact characteristically depending on the media of interest (tied to physical and chemical alterations of the media) and hence carry information about it, the patterns in the parameters of interest give rise to the detection capabilities - the parameter space distributions between the dry soil and frozen lake scenario, in this particular case, should be noticeably different from one another.
Simulating the production of muons (and any other particles of interest produced within any given media) presents a significant challenge, making high-precision modelling of the Lunar/Martian surface imperative. Unlike the Earth, the Lunar environment lacks an atmosphere, causing primary particles to interact directly with the topmost layers of the surface. This direct interaction leads to the generation of muons (and other secondary particles), which differs from the process on Earth; where the majority of muons are produced after interactions with the atmosphere. Consequently, the backscattered muon flux on the Moon is notably smaller compared to Earth, as the absence of atmosphere-produced muons contributes to the reduced overall backscattered flux in the Lunar environment. This distinction in muon production mechanisms is crucial to be considered, helping to develop an improved understanding of the unique aspects of the Lunar surface and its particle interactions - properly accounting for these factors is essential for obtaining accurate results in muon simulations and subsequent analysis.
Figure 3: Chemical composition of Lunar and Martian surface [10, 12].
Figure 2: Simulations a) the Moon and b) Mars. Cosmic rays are coming from the top view. Protons or alpha particles are shown in blue, while secondary particles are represented in green. The Martian scenario does not account for magnetic fields or atmospheric effects.
## 5 Data Analysis
In the simulation of Galactic Cosmic Ray (GCR) events, a total of half a million interactions were investigated, focusing on the properties and distribution of backscattered particles. Out of this dataset, the detectors recorded an average of 10% of the interactions as backscattered events. Photons, as the most prevalent component, constitute half of the backscattered particles, reflecting their significant role in the interaction process. Electrons follow, accounting for 10% of the occurrences, illustrating their substantial presence among the backscattered particles. Protons, known for their stability and significant mass, were observed at a rate of 4%. At the same time, pions were slightly more prevalent at 6%, suggesting a nuanced relationship between these charged particles in the scattering phenomenon. Remarkably, muons were detected at a rate of less than 1%, indicating their rare occurrence in the backscattering process despite being essential secondary products in cosmic ray interactions. Neutrons, with a recording rate of a mere 0.01%, were the least prevalent among the backscattered particles.
### Spectral Analysis
Figure 4 shows the hit energy, time and scattering angle distributions of the detected backscattered particles. These distributions are compared between two distinct materials: Ice and Rock. Furthermore the distribution are extracted for each particle types. We see some discrepancy in shape between the datasets from Ice and Rock.
The comparison cases are important, as the differences in the shape behaviour of backscattered muons and protons, compared to the electrons and pions at different energies, arise from their different energy loss mechanisms within the matter - these differences give rise to the differentiation power of the technology. Electrons and pions have a lower threshold energy for production, allowing them to be produced and detected more efficiently at lower energies compared to muons and protons. Additionally, they lose energy more rapidly through processes such as bremsstrahlung and ionisation as they propagate through the Lunar regolith and get backscattered, leading to a more rapid decrease in their energy. Furthermore, distributional differences can be extracted from the time component (also showing a requirement of roughly 2 to 5 ns time resolution), as well as the scattering angle, providing a good measure of differentiation for particle types based on the interaction modes with the matter of varying density.
We observe distinct patterns in the distributions of various particle properties. Notably, muons exhibit a unique behaviour, characterised by a flat distribution in both energy and time domains. This observation suggests a relatively consistent energy deposition within the detector's sensitive volume and a uniform temporal profile for these particles. The distinctiveness of muons' energy and time distributions might be attributed to their well-established penetration capabilities and their limited interactions with the detector material. In contrast, electrons exhibit markedly different distribution characteristics. These distributions manifest as narrow, well-defined energy, time, and scattering angle peaks. This trend is particularly pronounced when comparing electrons with heavier particles, such as muons, pions, and protons. The narrower peaks indicate that electrons deposit energy within a more confined range and exhibit tightly clustered arrival times, indicative of more predictable interactions and decays. Moreover, the scattering angle distribution for muons underscores their distinct behaviour. The narrower scattering angle distribution suggests that muons are more likely to undergo minimal scattering compared to heavier particles. This behaviour aligns with expectations based on the interplay between particle mass, charge, and the scattering process.
This study unveils intriguing variations between distributions obtained under different scenarios. Specifically, we observe discrepancies at both the peaks and tails of the distributions for these two scenarios. This divergence could be attributed to varying experimental conditions, material properties, or detector responses.
Figure 4: Distribution of Hit energy, time and scattering angle, in comparison between Rock and Ice datasets in the context of the Lunar environment.
The Cramer-von Mises (CvM) test was employed to evaluate the similarities and differences between the distributions of specific parameters, such as energy, time, and angular measures, in rock and ice datasets, as depicted in figure 6. The CvM distance quantifies the divergence between the two distributions, while the p-value provides an indication of the strength of evidence supporting the hypothesis that the two samples originate from different distributions. The results show a substantial difference in the overall energy and a notable difference in pion energy, while other energy subcategories showed no significant differences. A highly significant discrepancy in the overall time was observed, with only a marginal difference in proton time. In terms of angular parameters, an extremely significant difference in the overall angular and backphotons was detected, with a significant difference in backprotons. Other subcategories of angular parameters and a few time subcategories did not exhibit significant differences. These findings offer detailed insights into the distinctiveness and similarities of the examined parameters, highlighting areas that may be pivotal in understanding the underlying differences between rock and ice datasets.
Figure 5: Comparison of distributions for both Lunar and Martian data, detailing the energy deposited (a,d), the time difference between detector planes (b,e), and the scattering angle (c,f). These distributions are obtained from various particles and are contrasted between Ice (represented by solid lines) and Rock (represented by dashed lines) datasets for each celestial body.
### Machine Learning using Simulated Data
In order to determine between the two simulated scenarios - "Rock" and "Ice", machine learning (ML) classification is applied. The use of ML in this context is rooted in its ability to handle complex patterns within large datasets efficiently. Specifically, an ML-based classifier, when optimally constructed, can process and label data with speed and accuracy, accommodating the intricacies of the underlying physical phenomena. For the given task, which is binary in nature, the decision tree (DT) family of methods is well-suited. Decision trees are non-parametric supervised learning methods that enable the creation of a hierarchical structure of decision nodes, performing binary tests based on the provided set of attributes. In this particular case, the random forest iteration of the DT family was applied [14], a method composed of multiple decision trees working in concert, aptly suited for handling the multidimensional data input. In this classification, the data was split into 60% training, 20% validation, and 20% test sets.
In the specific context of identifying water content in the subsurface, the ML model can analyse numerous features extracted from the simulation results. Through a process known as "feature importance determination", the ML model can identify the features carrying the most significant information for making accurate predictions in classifying between ice (signal) and rock (background). The feature importance for various parameters obtained is shown in figure 7a. Based on this analysis, the top 5 features, namely hit time, scattering angle (all particles), scattering angle contributed from photons, time contribution from photons, and energy (all particles), were chosen as they are the most influential in differentiating between ice and rock. These five features were utilised for training and classifying the data, laying the foundations for an efficient system that maximises the result with the least amount of computational power.
Figure 6: Cramér-von Mises test results for various features in the case of Moon scenario.
Having chosen the features to be included in the training and testing of the model, the classification performance of it has to be quantified in some way. One of the possibilities for it is calculating the Receiver Operating Characteristic Curve (ROC Curve) - a measure of the classifier's diagnostic ability at varying discrimination thresholds. By evaluating the True Positive Rate (positive data points classified as positive) and False Positive Rate (negative data points classified as positive) of the model, the Area Under the Curve (AUC) can be determined. AUC further quantifies the strength of distinguishing between the two classes that are being classified - the closer the AUC value to 1, the better the classification performance. In this study, both the Lunar and Martian scenarios were evaluated as shown in figure 6(b). For the Lunar case, the obtained AUC (Area Under the Curve) value of 0.92 (92%) implies that the classifying performance of the developed model was very high, capable of efficiently differentiating between the simulated ice and rock scenarios. Similarly, for the Martian scenario, the model achieved an AUC value of 0.91 (91%), demonstrating robust classification performance in distinguishing between Martian ice and rock formations.
## 6 Conclusion & Outlook
This study highlighted the potential of muon tomography for space exploration, specifically for investigating water-ice content on the Lunar and Martian terrains. Through GEANT4 simulations, the study differentiates between dry and wet lunar surfaces through the analysis of backscattered particles. Moreover, the integration of machine learning offers an innovative approach to distinguish between geological formations such as ice from rock, emphasizing the technique's precision and reliability.
Initial simulations, however, highlighted challenges in capturing the Lunar/ Martian topological and environmental nuances accurately. Future research aimed at these limitations can refine the findings. Enhancements, such as using scintillating fibres and more detector layers, aim to boost sensitivity and resolution. Transitioning from simulations to real-world experiments introduces complexities, including detector discrepancies, cosmic ray variations, and the intricacies of machine learning and geological interpretation. The application of digitization techniques, although slightly affecting the ROC curve's performance, augments the model's realism. Refinements in digitization and inclusion of parameters like track lengths hold promise for improving classification accuracy.
The datasets and analytical scripts used in this study have been made publicly available on Zenodo and can be accessed for further research [15].
## Conflict of Interest
The author declare that there are no conflicts of interest regarding the publication of this paper.
## Acknowledgements
This work is supported in part by the European Space Agency (ESA), under project code ESA CTR No. 4000139808/22/NL/MH/rp. The author is grateful for their valuable contribution.
|
2308.12366 | Continual Zero-Shot Learning through Semantically Guided Generative
Random Walks | Learning novel concepts, remembering previous knowledge, and adapting it to
future tasks occur simultaneously throughout a human's lifetime. To model such
comprehensive abilities, continual zero-shot learning (CZSL) has recently been
introduced. However, most existing methods overused unseen semantic information
that may not be continually accessible in realistic settings. In this paper, we
address the challenge of continual zero-shot learning where unseen information
is not provided during training, by leveraging generative modeling. The heart
of the generative-based methods is to learn quality representations from seen
classes to improve the generative understanding of the unseen visual space.
Motivated by this, we introduce generalization-bound tools and provide the
first theoretical explanation for the benefits of generative modeling to CZSL
tasks. Guided by the theoretical analysis, we then propose our learning
algorithm that employs a novel semantically guided Generative Random Walk (GRW)
loss. The GRW loss augments the training by continually encouraging the model
to generate realistic and characterized samples to represent the unseen space.
Our algorithm achieves state-of-the-art performance on AWA1, AWA2, CUB, and SUN
datasets, surpassing existing CZSL methods by 3-7\%. The code has been made
available here \url{https://github.com/wx-zhang/IGCZSL} | Wenxuan Zhang, Paul Janson, Kai Yi, Ivan Skorokhodov, Mohamed Elhoseiny | 2023-08-23T18:10:12Z | http://arxiv.org/abs/2308.12366v1 | # Continual Zero-Shot Learning through Semantically Guided Generative Random Walks
###### Abstract
Learning novel concepts, remembering previous knowledge, and adapting it to future tasks occur simultaneously throughout a human's lifetime. To model such comprehensive abilities, continual zero-shot learning (CZSL) has recently been introduced. However, most existing methods overused unseen semantic information that may not be continually accessible in realistic settings. In this paper, we address the challenge of continual zero-shot learning where unseen information is not provided during training, by leveraging generative modeling. The heart of the generative-based methods is to learn quality representations from seen classes to improve the generative understanding of the unseen visual space. Motivated by this, we introduce generalization-bound tools and provide the first theoretical explanation for the benefits of generative modeling to CZSL tasks. Guided by the theoretical analysis, we then propose our learning algorithm that employs a novel semantically guided Generative Random Walk (GRW) loss. The GRW loss augments the training by continually encouraging the model to generate realistic and characterized samples to represent the unseen space. Our algorithm achieves state-of-the-art performance on AWA1, AWA2, CUB, and SUN datasets, surpassing existing CZSL methods by 3-7%. The code has been made available here [https://github.com/wx-zhang/IGCZSL](https://github.com/wx-zhang/IGCZSL).
## 1 Introduction
Researchers have devoted significant effort to developing AI learners to mimic human cognition. One such endeavor is zero-shot learning (ZSL), which aims to identify unseen classes without accessing any of their images during training. However, human zero-shot learning abilities improve dynamically over time. As individuals acquire more knowledge of seen tasks, they become better at recognizing unseen tasks. To evaluate the zero-shot learning in such a dynamic seen-unseen distribution, the continual zero-shot learning problem (CZSL) has been proposed [53]. CZSL emulates the continuous learning process of a human's life, where the model continually sees more classes from the unseen world and is evaluated on both seen and unseen classes. This CZSL skill, may it get maturely developed to the world scale, has the potential to accelerate research in species discovery, for example, as known species grow continually, but close to 90% of the species are not yet discovered[55].
Generative models (e.g., GANs[26]) have made significant progress in producing photorealistic images by learning high-dimensional probability distributions. This ability motivated researchers to adapt GANs to ZSL to generate missing data of unseen classes conditioning on unseen semantic information, known as generative-based ZSL. Training the classifier on synthetic unseen samples can reduce model prediction bias towards seen classes and thus achieves competitive zero-shot learning performance [38, 58, 42]. Some CZSL works directly adopt this framework continually, known as transductive continual zero-shot learning [25, 36]. However, in CZSL, the unseen world changes dynamically and unexpectedly, making it unrealistic to use prior knowledge about unseen classes[53]. When we do not assume access to unseen semantic information in the CZSL setting, which is known as _inductive continual zero-shot learning_, most existing methods struggle to perform well, as we show in our experiments. Furthermore, the theoretical understanding of how zero-shot learning benefits from synthetic data is limited, which poses an obstacle to developing purely inductive continual zero-shot methods. Recent analyses of training generative models with synthetic data [8] provide a possible avenue for developing the desired theoretical explanation. This led us to develop a generalization-bound tool to understand the learning mechanism in generative-based CZSL and further develop inductive methods based on it.
In our analysis, we have identified it is crucial to reduce the distance between the generated and actual visual space of unseen classes. This requires the model to generate realistic samples to represent unseen space to augment the training
of the classifier. However, the lack of ground truth semantic descriptions for unseen classes and the lack of previously seen classes data often leads to the generated samples collapsing to the seen classes. A similar problem has been addressed in generating novel style artworks, where GAN training is augmented to encourage the generated styles to deviate from existing art style classes [15, 51, 27, 30, 28, 31]. Drawing inspiration from the improved feature representation achieved by generative models in producing novel art, and the connection between the ability to generate novel styles in art generation and to generate samples to represent the unseen space in generative-based CZSL, we propose a purely inductive, **Generative Random Walk (GRW)** loss, guided only by semantic descriptions of seen classes.
In each continual learning task, we first hallucinate some classes by interpolating on or sampling from a learnable dictionary based on the current and previous classes, with the belief that the realistic classes, both seen and unseen, should be relatable to each other [16, 17]. We then generate samples from the hallucinated classes. To prevent the generated samples of hallucinated classes from collapsing to the seen classes, we apply the GRW loss, as illustrated in Figure 1. We perform a random walk starting from the seen class and moving through generated examples of hallucinated classes for \(R\) steps, as described in detail later in Section 5.2.2. The GRW loss encourages high transition probabilities to the realistic unseen space by deviating from the visual space of the seen classes and avoiding less realistic areas. The resulting representations are both realistic and distinguishable from seen classes, which enhances the generative understanding of unseen classes. This approach is particularly effective when the model is updated continually, as it enables the model to use the newly learned knowledge to improve further the generated examples of hallucinated classes. Our contributions lie in
* We provide a theoretical analysis of continual zero-shot learning. This analysis guides us to use proper signals to make up for the missing unseen information. We present these generalization-bound tools for the analysis in Section 4.
* Guided by the analysis, we develop a method for purely inductive continual zero-shot learning; described in detail in Section 5. Our method, ICGZSL, first provides two ways to hallucinate classes, interpolation of two seen classes and learning a dictionary based on the seen classes. Then, we integrate our introduced semantically guided Generative Random Walk (GRW) loss to generate distinguishable and realistic samples to represent unseen classes.
* We performed comprehensive experiments (Section 6) that demonstrate the effectiveness of our approach. Specifically, our model achieves state-of-the-art results on standard continual zero-shot learning benchmarks, AWA1, AWA2, CUB, SUN, and performs often better than transductive methods
## 2 Related Works
**Inductive and Transductive Zero-Shot Learning.** There are varying degrees of accessibility to unseen information in zero-shot learning. Transductive methods use both unlabeled samples and attributes of unseen classes during
Figure 1: **Semantically guided generative random walk (GRW)**: At each time step, new classes are added to the seen classes space, and the random walk starts from each seen class center (in green) and transitions through generated samples of hallucinated classes (in orange), then the landing probability distribution over the seen classes is predicted. The GRW loss encourages the generated samples from the hallucinated classes to be distinguishable from the seen classes by encouraging the landing probability over seen classes starting from any seen center to be uniformly distributed, and hence hard to classify to any seen class.
training [44, 48]. Semantically transductive methods, on the other hand, only use attributes of unseen classes in training [63, 60]. In the inductive setting, however, no unseen information is allowed to be used(_e.g_., [69, 16, 41, 64]). This can result in a bias towards seen classes [45]. Generative methods, such as those used by [69, 41, 16], can produce unseen samples using only seen class information during training to solve this issue. For example, [16] relate zero-shot learning to human creativity to generate images that deviate from seen classes during training. [7] used unlabeled samples from out-of-distribution data to gather knowledge about unseen data. [54] utilize two variational autoencoders to generate latent representations for visual and semantic modalities in a shared latent space. In contrast, our approach focuses on investigating the relationship between the generated samples of hallucinated classes and the seen classes, which leads to GRW loss.
**Continual Learning.** The majority of continual learning works aim to tackle the problem of catastrophic forgetting, where the data representation becomes biased towards the most recent task in sequential learning. Regularization-based methods [39, 4], structure-based methods [49, 13], and replay-based methods [52, 65] have been proposed to resolve this problem. More recently, research has explored forward transfer in continual learning, with the belief that as knowledge accumulates, higher next-task transferability, as measured by zero-shot assessment, should be attained. Their evaluation space either includes the next task [40] or the whole class space [12]. However, compared to our setting, [40] did not evaluate the model in a generalized manner, and [12] only paid attention to the seen accuracy.
**Continual Zero-shot learning.**[11] introduced A-GEM for continual learning, which was later applied to deal with zero-shot tasks sequentially, laying the foundation for the initial work on CZSL. [53] proposed the inductive CZSL scenario and demonstrated that a class-based normalization approach can improve performance in continual zero-shot learning. Both [21] and [24] explore the CZSL problem, but rely on unseen class descriptions to train a classifier before inference. [36] proposed a generative adversarial approach with a cosine similarity-based classifier that supports the dynamic addition of classes without requiring unseen samples for training. Their approach also relies on unseen class descriptions for seen-unseen deviation, making it a semantically transductive method. This motivated us to explore a purely inductive method for handling seen-unseen deviation and improving the realism of unseen samples.
## 3 Problem Setup and Notations
### Formulation
We start by defining our problem and notations. A labelled dataset is defined as a tuple \(\mathbf{D}=\{(\mathbf{x},\mathbf{a},y)|y=f(\mathbf{x}),(\mathbf{x},\mathbf{a},y)\sim\mathcal{D}\}\), where \(\mathcal{D}\) represents the data distribution. Each data point is a tuple of image feature \(\mathbf{x}\in\mathbb{R}^{d_{x}}\), class attribute \(\mathbf{a}\in\mathbb{R}^{d_{a}}\), and a class label \(y\). Here \(d_{x}\) is the dimension of the visual feature space, and \(d_{a}\) is the dimension of the attribute space. Each distribution has a specific labeling function \(f\). Our goal is to learn a model \(\hat{f}\) on top of \(\mathbf{D}\) to estimate \(f\). We study the continual zero-shot learning setting proposed by [53], where we seek to learn the model \(\hat{f}\) on a stream of tasks. In each task \(t\), the model is learned on the seen dataset \(\mathbf{D}_{s}^{t}\), and is evaluated on both the seen distribution \(\mathcal{D}_{s}^{t}\) and unseen distribution \(\mathcal{D}_{u}^{t}\). Moreover, we assume that the set of seen class and unseen class are disjoint, that is \(\mathcal{D}_{s}\cap\mathcal{D}_{u}=\phi\). This procedure is illustrated in the bottom part of Figure 2.
We use generative models as the backbone. During the training time, the model \(\hat{f}\) is trained on the seen dataset \(\mathbf{D}_{s}\) as well as the synthesized dataset \(\mathbf{D}_{h}\). \(\mathbf{D}_{h}\) is generated by conditioning on hallucinated attributes \(\mathbf{a}_{h}\) and prior \(\mathcal{Z}\sim\mathcal{N}(0,1)\). The labeling function \(f_{h}\) of the generated dataset is a look-up table of the generated features \(\mathbf{x}\in\mathbf{X}_{h}\) and the corresponding attribute condition \(\mathbf{a}_{h}\).
### Notations.
In our theoretical analysis, we use the following notations: 1) We discuss the relationship between the three types of variables, namely, real seen sample, real unseen samples, and generated samples from the hallucinated classes. To specify the variables related to these types of samples, we use subscripts \(\cdot_{s},\cdot_{u},\cdot_{h}\) respectively, _e.g_., \(f_{s},f_{u},f_{h}\) ; 2) We denote the values and model empirically computed by a variable with a hat, _e.g_., \(\hat{f}\); 3) We use superscripts \(\cdot^{t}\) or \(\cdot^{1:t}\) to indicate that a variable is for task \(t\) or for tasks \(1:t\) respectively, _e.g_., \(f_{s}^{t},f_{s}^{1:t}\); 4) \(\mathbf{D}\) is used for the empirical sample set, and \(\mathcal{D}\) is used for the distribution; 5) We use \(N_{s}\) and \(N_{u}\) to denote the number of seen and unseen classes.
In practice, the unseen information, _i.e_., \(\mathbf{a}_{u},\mathbf{D}_{u},N_{u}\), is not available. Therefore, we hallucinate some classes denoted by \(\mathbf{a}_{h}\) and generate samples \(\mathbf{D}_{h}=\{(\mathbf{x}_{h},\mathbf{a}_{h})\}\) by conditioning on these attributes. We use \(N_{h}\) to represent the number of hallucinated classes. Additionally, we do not have access to all the previous data, so \(\mathbf{X}^{1:t}\) refers to the current samples as well as the previous ones in the buffer. We also use generated seen samples \(\cdot_{sg}\) for GAN training.
## 4 Theoretical Analysis
As mentioned in the introduction, we propose using hallucinated classes to represent the unseen space. By training our model on synthetic samples generated from these classes, we improve the model's generalization ability to the actual unseen classes during the testing time of continual zero-shot learning. In this section, we quantify the model's generalization ability by measuring the distance between the synthetic samples that represent the unseen space and the actual un
seen samples. Additionally, we explain our motivation for using a random walk-based method to reduce this distance when no information about the unseen space is available.
### Generalization Bound Inductive Continual Zero-Shot Learning
In this section, we present a generalization bound for a continual zero-shot learning algorithm. Given the entire training distribution, a learning algorithm can output an optimal hypothesis \(h\) that estimates the ground truth labeling function \(f\). However, since the learning algorithm can only be trained on a finite sample from the training set, it outputs an empirical hypothesis \(\hat{h}\) to estimate the ground truth labeling function. We define the generalization error [33] for these two types of hypotheses. We define the actual risk,
\[\epsilon(h,f)=\mathbb{E}_{(\mathbf{x},\mathbf{a})\sim\mathcal{D}\left[\mathbbm{1}_{f( \mathbf{x})\neq h(\mathbf{x},\mathbf{a})}\right]}\enspace, \tag{1}\]
which measures the expected probability of a disagreement between the ground truth and the optimal hypothesis. We also define the empirical risk on the finite sample set \(\mathbf{D}\),
\[\hat{\epsilon}(\hat{h},f)=\frac{1}{|\mathbf{D}|}\sum_{(\mathbf{x},\mathbf{a})\in\mathbf{D}} \mathbbm{1}_{f(\mathbf{x})\neq\hat{h}(\mathbf{x},\mathbf{a})}\enspace, \tag{2}\]
which measures the probability of a disagreement between the ground truth and the empirical hypothesis.
In a continual zero-shot learning algorithm, given a training set \(\mathbf{D}_{s}^{t}\), the algorithm outputs \(\hat{h}\) to estimate \(f_{s}^{1:t}\cup f_{u}\) instead of the ground truth labeling function \(f_{s}\). To begin our analysis, we propose a distance measure between the generated unseen distribution1 and the real unseen distribution \(\bar{d}_{GDB}(\mathbf{D}_{h},\mathbf{D}_{u})\) as follows:
Footnote 1: In the transductive setting, the unseen distribution is generated by conditioning on the unseen semantic information. In our work, we utilize generated samples from hallucinated classes to represent the generated unseen distribution.
**Definition 4.1** (Empirical Generative distance).: _Given the training set \(\mathbf{D}_{s}\) and the synthetic set \(\mathbf{D}_{h}\), the ground truth labeling functions \(f_{s}\), \(f_{h}\), and \(f_{u}\), and the optimal hypothesis \(\hat{h}^{*}=\arg\min_{h\in H}\hat{\epsilon}_{s}(h,f_{s})+\hat{\epsilon}_{h}(h, f_{h})\) obtained by training the model on \(\mathbf{D}_{h}\) and \(\mathbf{D}_{s}\), we can define the distance between \(\mathbf{D}_{h}\) and \(\mathbf{D}_{u}\) as follows:_
\[\bar{d}_{GDB}(\mathbf{D}_{h},\mathbf{D}_{u})=\left|\hat{\epsilon}(\hat{h}^{*},f_{u})- \hat{\epsilon}(\hat{h}^{*},f_{h})\right|\enspace. \tag{3}\]
Our proposed \(\bar{d}_{GDB}\) is a feasible distance measure that satisfies the properties of a pseudo-metric. In the following, we present our generalization bound following [8] for the continual zero-shot learning algorithm, which shows how the generalization ability of the zero-shot learning algorithm is mainly influenced by this distance.
**Theorem 4.2** (Generalization bound of the generative-based CZSL).: _Given the CZSL procedure described in section 3.1, with confidence \(1-\delta\) the risk on the unseen distribution is bounded by_
\[\epsilon(h,f_{u}^{t})\leq \hat{\epsilon}(\hat{h}^{*},f_{s}^{1:t})+\frac{1}{2}d_{h\Delta \mathcal{H}}(\mathcal{D}_{s}^{1:t},\mathcal{D}_{u}^{t})+\bar{\lambda} \tag{4}\] \[+\frac{1}{2}\bar{d}_{GDB}(\mathbf{D}_{u}^{t},\mathbf{D}_{h}^{t})+C(\frac{ 1}{m}+\frac{1}{\delta})\]
_where \(\hat{h}^{*}=\arg\min_{h\in H}\sum_{i=1}^{t}\hat{\epsilon}(h,f_{s}^{i})+\hat{ \epsilon}(h,f_{h}^{t})\), \(\bar{\lambda}=\hat{\epsilon}(\hat{h}^{*},f_{s}^{1:t})+\hat{\epsilon}(\hat{h}^ {*},f_{h}^{t})\)._
In Equation 24, measurement \(d_{\mathcal{H}\Delta\mathcal{H}}\)[6] is used to quantify the difference between two distributions for domain adaptation based on the type of model, and is fixed for a specific problem. \(\bar{\lambda}\) and \(\hat{\epsilon}_{s}(h,f_{s})\) are highly depended on the optimization algorithm. However, if we hallucinate a diverse set of classes, \(f_{u}\) can be compactly supported by \(f_{h}\). If we further generate realistic samples for each of the hallucinated classes, the optimal solution trained on the synthetic set, \(\hat{h}^{*}=\arg\min_{h\in H}\hat{\epsilon}_{s}(h,f_{s})+\hat{\epsilon}_{h}(h, f_{h})\), should perform well on the real unseen dataset. This can lead to a reduction in \(\bar{d}_{GDB}(\mathbf{D}_{u},\mathbf{D}_{h})\) in Equation 3. We will discuss this further in the following section. The detailed derivation of this theorem can be found in A.1.
### Reducing the bound using Markov Chain.
To reduce \(\bar{d}_{GDB}(\mathbf{D}_{h},\mathbf{D}_{u})\) in Equation 24, we need to decrease the difference between \(\mathbf{D}_{u}\) and \(\mathbf{D}_{h}\). One approach proposed by [16] is to hallucinate \(\mathbf{a}_{h}\) as a compact support of \(\mathbf{a}_{u}\). Once we have achieved this, we can further generate high-quality samples to increase \(\mathbb{P}[\mathbf{D}_{u}\subset\mathbf{D}_{h}]\), where the probability is taken over all possible generations.
To quantify the probability value \(\mathbb{P}[\mathbf{D}_{u}\subset\mathbf{D}_{h}]\), we follow the approach of [57] and view the generations as nodes in a Markov chain. We define the transition probability between two states as the probability with which one sample is classified as another. Then, we can bound \(\mathbb{P}[\mathbf{D}_{u}\subset\mathbf{D}_{h}]\) by the self-transition probability using a generalization bound. When the self-transition probability is the same in two sets of generations, we prefer the one with higher diversity quantified by DDP, as suggested by [32] and [14].
For detailed explanations, please refer to A.2. Here, we provide an informal statement.
**Statement 4.3**.: _Finding generated samples from hallucinated classes to "carefully" increase the determinant and the diagonal entries of the transition matrix of the above described Markov Chain can reduce \(\bar{d}_{GDB}\)._
We can now design an algorithm that first hallucinates classes and then generates diverse samples from these classes to represent the unseen space that follows Statement 4.3. However, the transition matrix of the Markov chain described above is intractable to compute in practice. To quantify the transition probability, we adapt the random walk framework [29, 5] originally used in semi-supervised few-shot
learning to generative zero-shot learning with a few yet important changes. Please refer to Appendix B.4 the relation between our work and previous work.
We also make the following two adjustments to the statement to encourage the generated samples from hallucinated classes to be consistently realistic like the real samples. Firstly, we represent the transition matrix among hallucinated classes (noted as \(\mathbf{P}^{X_{h}X_{h}}\in\mathbb{R}^{N_{h}\times N_{h}}\)) in the seen class space using a congruent transformation \(\mathbf{P}^{\mathbf{C}_{x}X_{h}}\mathbf{P}^{X_{h}X_{h}}\mathbf{P}^{X_{h}C_{s}}\), where \(\mathbf{P}^{\mathbf{C}_{x}X_{h}}\in\mathbb{R}^{N_{x}\times N_{h}}\) is the transition probability matrix from seen prototypes to generated samples from hallucinated classes, and \(\mathbf{P}^{X_{h}C_{s}}\) is the opposite. Secondly, hallucinating a compact support of unseen class attributes and encouraging the transition matrix to be diagonal requires a huge number of generations. To reduce this number, we encourage the generated samples of hallucinated classes to have a "relatable deviation" to the seen classes. The relationship between the two types of samples is that both should be realistic. This means that the transition matrix \(\mathbf{P}^{X_{h}X_{h}}\) may not be strictly diagonal, and our goal is to reduce the non-diagonal entries, _i.e_., to reduce the transition probability between different generated samples of hallucinated classes. We further repeat the transition among generated samples of hallucinated classes to further reduce the non-diagonal entries.
In conclusion, our transitions start from the seen prototypes to generated samples of hallucinated classes for \(R\) steps and back to seen prototypes, the transition matrix of which is \(\mathbf{P}^{\mathbf{C}_{x}X_{h}}(\mathbf{P}^{X_{h}X_{h}})^{R}\mathbf{P}^{X_{h}C_{s}}\in \mathbb{R}^{N_{x}\times N_{s}}\). To encourage"relatable deviation" of the generated samples of hallucinated classes from seen classes, we aim to reduce the non-diagonal entries of the transition matrix, as detailed later in Section 5. This approach intuitively prevents the generations from being attracted by any seen classes, and theoretically can reduce the distance \(\bar{d}_{GDB}\). Intentionally, this method also transfers knowledge between seen and hallucinated classes, which is useful for generating realistic images.
## 5 Generative-based Inductive CZSL Approach
**Method overview.** Generative-based inductive CZSL algorithms adopt generative models as their architecture, where seen samples are used to train the classifier to correctly classify seen classes, and generators are trained to generate realistic samples. At the same time, the generator is encouraged to synthesize samples to represent unseen classes to train the classifier to perform classification on these samples. In our work, we can only hallucinate some classes to represent the actual unseen space and generate samples from the hallucinated classes. As guided by our analysis, the key point of inductive zero-shot learning is to generate realistic and diverse samples from hallucinated classes that are deviated from the real seen space. We introduce the generative model backbone in Section 5.1, and how we generate the abovementioned samples in Section 5.2. The overall procedure is shown in Algorithm 1 in Appendix.
### Generative-based CZSL baseline
We follow [36] as our baseline. The model contains a generator \(G(\mathbf{a},\mathbf{z}):\mathbb{R}^{d_{a}+d_{z}}\rightarrow\mathbb{R}^{d_{x}}\) and a discriminator \(D(\mathbf{a}):\mathbb{R}^{d_{a}}\rightarrow\mathbb{R}^{d_{x}}\). The generator takes the semantic information (denoted by \(\mathbf{a}\)) and the prior (denoted by \(\mathbf{z}\)) sampled from a standard normal distribution \(\mathcal{Z}\) as input and outputs visual features. Discriminator projects semantic information \(\mathbf{a}\) into visual space. The conditional adversarial training can be illustrated by the discriminator loss and generator loss as:
\[\begin{split}\mathcal{L}_{D}&=-\mathcal{L}_{\text{ real-fake}}+\lambda_{\text{cls}}\mathcal{L}_{\text{classification}}+\lambda_{\text{val}}\mathcal{R}_{D}, \\ \mathcal{L}_{G}&=\mathcal{L}_{\text{real-fake}}+ \lambda_{\text{cls}}\mathcal{L}_{\text{classification}}+\mathcal{L}_{\text{ inductive}}+\lambda_{\text{rg}}\mathcal{R}_{G}.\end{split} \tag{5}\]
As shown in Figure 2, we use \(\mathcal{L}_{\text{real-fake}}\) to denote the GAN loss that discriminates between the real and fake samples for the current task, and \(\mathcal{L}_{\text{classification}}\) to denote the entropy loss based on cosine similarity that is used to perform the classification of all seen classes up to the current task. The equations for \(\mathcal{L}_{\text{real-fake}}\) and \(\mathcal{L}_{\text{classification}}\) are shown below
\[\begin{split}\mathcal{L}_{\text{real-fake}}&= \mathbb{E}_{(\mathbf{x},\mathbf{a})\sim\mathbf{D}_{s}^{t}}\left[\log(\mathbf{x},D(\mathbf{a})) \right]\\ &-\mathbb{E}_{\mathbf{z}\sim\mathcal{Z},(\mathbf{a},\mathbf{z})\sim\mathbf{D}_{ s}^{t}}\left[\log(G(\mathbf{z},\mathbf{a}),D(\mathbf{a}))\right]\\ \mathcal{L}_{\text{classification}}&=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathbf{D}_{s}^{1:t}}\left[L_{\left(\left(\mathbf{x},D(\mathbf{A}_{s}^{1:t })\right),y\right)\right]\\ &+\mathbb{E}_{\mathbf{z}\sim\mathcal{Z},(\mathbf{a},\mathbf{y})\sim\mathbf{D}_{ s}^{1:t}}\left[L_{\left(\left(G(\mathbf{z},\mathbf{a}),D(\mathbf{A}_{s}^{1:t}) \right),y\right)\right]\end{split} \tag{6}\]
Figure 2: The discriminator embeds attributes \(\mathbf{a}_{s}^{1:t}\) into the real feature space to perform classification with real samples \(x_{s}^{t}\), while the generator produces features \(\mathbf{x}_{h}^{t}\) and \(\mathbf{x}_{sg}^{t}\) conditioning on the corresponding attributes. The real-fake loss and classification loss encourage the generated sample distribution consistent with the real samples. Then the inductive loss applied to the generated feature space, which encourages the characterization of generated samples from hallucinated classes, can reduce the bias towards the current seen classes of the classifier and improve continual zero-shot learning performacne.
where \(\langle\cdot,\cdot\rangle\) represents the cosine similarity, \(\mathbf{A}_{s}^{1:t}\) is a matrix of attributes of seen classes up to the current task, and \(L_{e}\) is the cross-entropy loss. In practice, \(\mathbf{D}_{s}^{1:t}\) consists of the current samples and previous samples in the buffer. We follow [36] for regularization terms \(\mathcal{R}_{D},\mathcal{R}_{G}\) and \(\lambda_{c,rd,rg}\). See B.1 for more details about the baseline algorithm. \(\mathcal{L}_{\text{inductive}}\) with its corresponding \(\lambda_{i}\) is the main component to improve inductive continual zero-shot learning, which will be described in detail in section 5.2.
### Inductive Loss
#### 5.2.1 Hallucinate Attributes
To begin our method, we first hallucinate classes to represent the unseen space. During this procedure, we aim to generate a diverse and compact set of attributes without using any information from the unseen test set.
**Interpolation-based method.** When the attributes are distributed uniformly in the attribute space, which can be compactly supported by the seen attribute, we use interpolation-based method. To hallucinate the attributes at every mini-batch, we use an interpolation-based method that was introduced by [16]. Hallucinated attributes are generated using the formula \(\mathbf{a}_{ug}=\alpha\mathbf{a}_{s_{1}}+(1-\alpha)\mathbf{a}_{s_{2}}\), where \(\alpha\) is drawn from the uniform a distribution \(\mathcal{U}(0.2,0.8)\), and \(\mathbf{a}_{s_{1}}\) and \(\mathbf{a}_{s_{2}}\) are two randomly chosen seen attributes. The sample interval is chosen to be \((0.2,0.8)\) to ensure that the interpolated attributes are not too close to the seen attributes.
**Dictionary-based method.** We further propose to learn an attribute dictionary containing \(N_{s}^{t}\) attribute vectors during training. The use of a learnable dictionary allows the attributes to change more freely in accordance with the loss function. The dictionary is randomly initialized by interpolating seen attributes, and during the computation of GRW loss, we randomly pick attributes from it. This approach is particularly useful for classification at a finer level, where the attributes are more specific.
If the hallucinated class can accurately represent the actual unseen space, which is only accessible during the test time, then the model will have good generalization ability on the test set. We visualize the hallucinated classes to examine if this assumption holds. Please refer to B.2 for the visualization of our hallucinated attributes.
#### 5.2.2 Improve Generation Quality by Inductive Loss
As we discussed in Section 4.2, we use GRW loss to improve the generation quality such that the generated samples are realistic, diverse and characterized. To encourage diversity of the samples generated from the hallucinated attributes, we firstly generate only one sample for each hallucinated attribute. And then, we perform a random walk to compute the transition probability using generated seen samples \(\mathbf{X}_{sg}\) and generated samples from hallucinated samples \(\mathbf{X}_{h}\). The random walk starts from each generative seen class center \(\mathbf{C}_{s}\in\mathbb{R}^{N_{s}^{1:t}\times d_{x}}\) computed by the mean of generated seen samples from the corresponding class attributes, where \(N_{s}^{1:t}\) are the number of seen classes until step \(t\). Then we take \(R\) steps of transitions within generated samples of hallucinated classes \(\mathbf{X}_{h}\) with the final landing probability over seen classes so far. The transition probability matrix from seen class centers to generated samples of hallucinated classes is defined as
\[\mathbf{P}^{C_{s}X_{h}}=\sigma(\langle\mathbf{C}_{s},\mathbf{X}_{h}^{\top}\rangle)\enspace, \tag{7}\]
where \(\langle\cdot,\cdot\rangle\) is a similarity measure, and \(\sigma(\cdot)\) is a softmax operator applied on rows. In practice, we use negative Euclidean distance for similarity, that is, suppose \(\mathbf{x}_{h}\) is the row \(i\) of \(\mathbf{X}_{h}\) and \(\mathbf{c}\) is the class center \(j\),
\[\langle\mathbf{C}_{s},\mathbf{X}_{h}^{\top}\rangle_{i,j}=-\left\lVert\mathbf{x}_{h}-\mathbf{c }\right\rVert^{2}\enspace. \tag{8}\]
Similarly, the transition probability matrix within generated samples of hallucinated classes and from generated samples of hallucinated classes to seen class centers are defined as
\[\mathbf{P}^{X_{h}X_{h}}=\sigma(\langle\mathbf{X}_{h},\mathbf{X}_{h}^{\top}\rangle),\mathbf{P}^ {X_{h}C_{s}}=\sigma(\langle\mathbf{X}_{h},\mathbf{C}_{s}^{\top}\rangle)\enspace. \tag{9}\]
Then the random walk staring from each seen class center and transiting \(R\) steps within generated samples of hallucinated classes and back to seen centers are computed by
\[P^{\mathbf{C}_{s}\mathbf{X}_{h}\mathbf{C}_{s}}(R)=\mathbf{P}^{C_{s}X_{h}}(\mathbf{P}^{X_{h}X_{h}})^ {R}\mathbf{P}^{X_{h}C_{s}} \tag{10}\]
In practice, we set the diagonal values of \(\mathbf{P}^{X_{h}X_{h}}\) to small values and hope to reduce the non-diagonal values. This equals to encourage the probability \(P^{\mathbf{C}_{s}\mathbf{X}_{h}\mathbf{C}_{s}}(R)\) to be uniformly distributed over all the seen classes. We further encourage the probability \(\mathbf{P}^{C_{s}X_{h}}\in\mathbb{R}^{N_{s}^{1:t}\times N_{h}}\) to be uniformly distributed over all the generated examples to encourage as many generations to be visited in the random walk, and hence encourage the diversity. Hence, our _Generative Random Walk_ (GRW) loss is defined by
\[L_{\text{GRW}}=\sum_{r=0}^{R}\gamma^{r}L_{e}(P^{\mathbf{C}_{s}\mathbf{X}_{h}\mathbf{C}_{s} }(r),\mathcal{U})+L_{e}(\mathbf{P}_{v}(C_{s},X_{h}),\mathcal{U}_{v})\enspace, \tag{11}\]
where \(L_{e}(\cdot,\cdot)\) is the cross-entropy loss, \(\mathcal{U}\in\mathbb{R}^{N_{s}^{1:t}\times N_{s}^{1:t}}\) is uniform distribution, \(R\) is the transition steps, and \(\gamma\) is exponential decay. We compute the probability that each generated point be visited by any seen class as \(P_{v}(C_{s},X_{h})=\frac{1}{N_{s}^{1:t}}\sum_{i=0}^{N_{s}^{1:t}}\mathbf{P}_{i}^{C_ {s}X_{h}}\), where \(\mathbf{P}_{i}^{C_{s}X_{h}}\) represents the \(i^{th}\) row of the \(\mathbf{P}^{C_{s}X_{h}}\) matrix. The visit loss is then defined as the cross-entropy between \(P_{v}\) and the uniform distribution \(\mathcal{U}_{v}\in\mathbb{R}^{N_{h}}\), encouraging all the generated examples to be visited. In addition, we empirically found that the GRW loss can also work as a regularizer to encourage the consistency of generated seen visual space as well, which we defined as
\[R_{\text{GRW}}=\sum_{r=0}^{R}\gamma^{r}L_{e}(P^{\mathbf{C}_{s}\mathbf{X}_{sg}\mathbf{C}_{s} }(r),\mathcal{I})+L_{e}(\mathbf{P}_{v}(C_{s},X_{sg}),\mathcal{U}_{v}), \tag{12}\]
where \(\mathcal{I}\) is identity distribution, and \(\mathbf{D}_{sg}\) represents the matrix for generated seen samples.
We numerically show that the random walk-based penalty can reduce \(\tilde{d}_{GDB}\) (Def 4.1) by the relationship between \(\bar{d}_{GDB}\) and \(L_{GRW}\). Details are shown in Appendix B.3.
We also adapt the loss proposed in [16] to directly prevent the generated unseen samples from being classified into seen classes, i.e.,
\[L_{\text{creativity}}=\mathbb{E}_{\mathbf{z}\sim\mathcal{Z},\mathbf{a}_{h}\sim\mathbf{D} _{h}}D_{\text{KL}}(\langle G(\mathbf{z},\mathbf{a}_{h}),D(\mathbf{A}_{s}^{1:t})\rangle| \mathcal{U}), \tag{13}\]
where \(D_{\text{KL}}(\cdot\|\cdot)\) is the KL divergence, \(\mathbf{A}_{s}^{1:t}\in\mathbb{R}^{N^{1:t}_{s}\times d_{a}}\) is the matrix of seen classes attributes vectors until task \(t\), \(\mathbf{a}_{h}\) is hallucinated attributes according to Section 5.2.1, \(\left\langle G(\mathbf{z},\mathbf{a}_{ug}),D(\mathbf{A}_{s}^{1:t})\right\rangle\in\mathbb{ R}^{N^{1:t}_{s}}\) are the logits over seen classes so far for a given \(G(\mathbf{z},\mathbf{a}_{h})\), \(\mathcal{U}\) is the uniform distribution.
**Inductive loss** Combining Equation 11, 12 and 13 our final inductive loss is
\[\mathcal{L}_{\text{inductive}}=\lambda_{c}L_{\text{creativity}}+\lambda_{i}L _{\text{GRW}}+\lambda_{i}R_{\text{GRW}} \tag{14}\]
where \(\lambda_{i}\) is the scaling weight for both the GRW loss term and regularization term.
## 6 Continual Zero-Shot Learning Experiment
### Experiment Setup
**Data Stream and Benchmarks:** We adopt the continual zero-shot learning framework proposed in [53]. In this setting, a \(T\)-split dataset \(D^{1:T}\) forms \(T-1\) tasks. At time step \(t\), the split \(D^{1:t}\) is defined as a seen set of tasks, and the split \(D^{t+1:T}\) is an unseen set of tasks. We conduct experiments on four widely used CGZSL benchmarks for a fair comparison: AWA1 [37], AWA2 [67], Caltech UCSD Birds 200-2011 (CUB)[59], and SUN[43]. We follow [53, 36] for the class split in the continual zero-shot learning setting. More details can be found in Appendix D.
**Baselines, backbone, and training:** We use the method proposed in [36] as the main baseline and compare it with recent CGZSL methods in the setting we mentioned above, including the transductive method Tf-GZSL [22], DVGR [25], A-CGZSL [24], BD-CGZSL [36], and the inductive method CN-CZSL [53], CARNet [23]. 'BD-CGZSL-in' denotes our modified inductive version of [36] by naively removing unseen information. Following [36], we also compare our baseline with the classical continual learning methods EWC [34] and A-GEM [11]. We use vanilla GAN's Generator and Discriminator, both of which are two-layer linear networks. Image features are extracted by ResNet-101, pre-trained on ImageNet 1k. The are attributes from [62] and extracted features are used as our model input. We use a replay buffer with a fixed size of 5k.
We run all experiments for 50 epochs and 64 batch sizes with the Adam optimizer. We use a learning rate of 0.005 and a weight decay of 0.00001. Results reported in Table 1 are based on one NVIDIA Tesla P100 GPU. We select our random walk steps \(R\), weight decay \(\gamma\) and coefficient of inductive loss terms \(\lambda_{i}\) according to prior exploratory zero-shot learning experiments shown in Appendix C.
**Metrics:** We use the mean seen accuracy, mean unseen accuracy and mean harmonic seen/unseen accuracy [53] to measure the zero-shot learning ability. These metrics are defined as follows,
\[\begin{split}\text{mSA}&=\frac{1}{T}\sum_{t=1}^{T} S_{t}(D^{1:t}),\text{mUA}=\frac{1}{T-1}\sum_{t=1}^{T-1}U_{t}(D^{t+1:T})\\ \text{mHA}&=\frac{1}{T-1}\sum_{t=1}^{T-1}H(S_{t}(D^ {1:t}),U_{t}(D^{t+1:T})),\end{split} \tag{15}\]
where \(H(\cdot,\cdot)\) is the harmonic mean and \(S_{t},U_{t}\) are seen and unseen per-class accuracy using the model trained after time \(t\). We also use the backward transfer [11, 66, 53] to measure the continual learning ability, which is defined in [53]
\[\text{BWT}=\frac{1}{T-1}\sum_{t=1}^{T-1}(S_{T}(D^{1:t})-S_{t}(D^{1:t}))\enspace. \tag{16}\]
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{AWA1} & \multicolumn{3}{c}{AWA2} & \multicolumn{3}{c}{CUB} & \multicolumn{3}{c}{SUN} \\ \cline{2-13} Metric & mSA & mUA & mHA & mSA & mUA & mHA & mSA & mUA & mHA & mSA & mUA & mHA \\ \hline EWC (\(cl\)) [34] & 29.4 & 9.0 & 13.8 & 30.8 & 10.5 & 15.8 & 12.2 & 0.8 & 1.3 & 11.6 & 2.6 & 4.1 \\ A-GEM (\(cl\)) [11] & 64.2 & 3.9 & 7.2 & 65.8 & 6.7 & 11.9 & 14.4 & 0.4 & 0.8 & 8.6 & 3.0 & 4.2 \\ \hline Tf-GZSL (\(tr\)) [22] & 70.8 & 27.4 & 37.9 & 78.6 & 28.7 & 41.1 & 46.3 & 30.8 & 35.3 & 15.3 & 30.7 & 18.7 \\ DVGR (\(tr\)) [25] & 65.1 & 28.5 & 38.0 & 73.5 & 28.8 & 40.6 & 44.9 & 14.6 & 21.7 & 22.4 & 10.7 & 14.5 \\ A-CGZSL (\(tr\)) [24] & 71.0 & 24.3 & 35.8 & 70.2 & 25.9 & 37.2 & 34.3 & 12.4 & 17.4 & 17.2 & 6.3 & 9.7 \\ BD-CGZSL (\(tr\)) [36] & 62.9 & 29.9 & 39.0 & 68.1 & 33.9 & 42.9 & 19.8 & 17.2 & 17.8 & 27.5 & 15.9 & 20.0 \\ \hline CN-CZSL (\(in\)) [53] & - & - & - & 33.6 & 6.4 & 10.8 & 44.3 & 14.8 & 22.7 & 22.2 & 8.2 & 12.5 \\ BD-CGZSL-in (\(in\)) [36] & 62.1 & 31.5 & 40.5 & 67.7 & 32.9 & 42.3 & 37.8 & 9.1 & 14.4 & 34.9 & 14.9 & 20.8 \\ CARNet (\(in\)) [23] & 67.6 & 27.4 & 37.0 & - & - & - & 42.4 & 12.4 & 18.8 & 31.5 & 15.9 & 20.9 \\ \hline
**ours** + interpolation & 67.0 & 34.2 & **43.4** & 71.1 & 34.9 & 44.5 & 42.2 & 22.7 & 28.4 & 36.0 & 21.6 & 26.8 \\
**ours** + dictionary & 67.1 & 33.5 & 41.6 & 70.2 & 35.1 & **44.6** & 42.4 & 23.6 & **28.8** & 36.5 & 21.8 & **27.1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Our proposed method achieves state-of-the-art results when compared with traditional continual learning method (\(cl\)) recent inductive (\(in\)) methods and even shows competitive results in mHA with recent semantic transductive methods (\(tr\)).
Note that this should only be conducted on seen set, since part of the early unseen set become seen set later. The BWT on unseen set cannot reflect the knowledge retain ability of the model.
### Results
The mean harmonic accuracy of the four benchmarks is shown in Table 1, and the task-wise mHA of the CUB dataset is shown in Figure 3. In coarse-grained datasets AWA1 and AWA2, our proposed learner achieves \(43.4\%\) and \(44.6\%\) in mHA, respectively, surpassing all the current inductive and transductive methods. In the fine-grained datasets and tasks with long steps (CUB, SUN), our method achieves \(28.8\%\) and \(27.1\%\), surpassing all the current CZSL methods. We observe that even though other methods have comparable mSA, they have far lower mUA than ours. We believe that our method achieves this improved knowledge transfer ability from seen visual space to unseen visual space through the proposed inductive learning signals,, \(\mathcal{L}_{\text{inductive}}\). Table 2 displays the backward transfer of different continual zero-shot algorithms, where higher results indicate better knowledge retention. Our model exhibits a strong backward transfer capability, particularly on longer task sequences where it is needed the most. We achieved the highest BWT score of 0.19 on CUB. On SUN, negative BWT scores (i.e., forgetting) are observed in most other models, but our method can still retain knowledge from the past. These results suggest that the analysis tools we created allow us to identify the critical factors for zero-shot learning, and the development of tools for continual learning can improve our ability to retain information.
### Ablation Study
To assess the impact of our novel random walk-based penalties, \(L_{GRW}\) and \(\mathcal{R}_{GRW}\), we conducted ablation experiments; see Table 3. he results in Table 3 indicate that the improvements are mainly attributed to \(L_{GRW}\), while \(\mathcal{R}_{GRW}\) contributes an additional \(1\%\). \(L_{\text{creativity}}\) is also part of the inductive loss. Additionally, removing \(L\)creativity while using our GRW losses has little effect on the performance, as shown in Table 3. More details can be found in Appendix D.
### More CZSL settings
Our focus lies on assessing performance under varying seen/unseen class ratios during knowledge accumulation, which is proposed in [53] and referred to as static setting in [36]. There are other continual zero-shot learning settings proposed in [36], such as dynamic and online settings. In the dynamic setting, the seen and unseen classes dynamically increase, while in the online setting, certain unseen classes are continually converted to seen classes. We find our explored static setting a more informing benchmark for the inductive CZSL skill, as the evaluation after every task is always performed on all classes in the dataset and hence is more challenging. Despite this, we still provide a comparison between our method and the baseline methods on the dynamic and online settings; see Table 4. The results show that our method is superior to the baseline in the dynamic and online settings in almost all datasets, and gains the most improvements in the most challenging static setting.
### Replay Buffer Analysis
Some existing methods [24, 25, 36] tend to use the generative replay method proposed by [21], where the correctly predicted seen generated features from the previous task are
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & AWA1 & AWA2 & CUB & SUN \\ \hline DVGR [25] & tr & 0.09 & 0.10 & -0.07 & -0.20 \\ A-CGZSL [24] & tr & 0.11 & 0.05 & 0.10 & 0.005 \\ BD-CGZSL [36] & tr & 0.18 & 0.14 & 0.13 & -0.02 \\ \hline CN-ZSL [53] & in & - & - & -0.04 & -0.02 \\ BD-CGZSL-in [36] & in & **0.18** & **0.15** & 0.14 & -0.03 \\ \hline
**ours** + interpolation & in & 0.12 & 0.10 & **0.19** & **0.01** \\
**ours** + dictionary & in & 0.11 & 0.11 & **0.19** & **0.01** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Backward transfer of different CGZSL methods, where higher results indicate less forgetting.
\begin{table}
\begin{tabular}{l c c} \hline \hline & interpolation & dictionary \\ \hline with \(R_{\text{GRW}}\) + \(L_{\text{GRW}}\) & **28.4** & **28.8** \\ - \(L_{\text{creativity}}\) & 27.72 & 27.66 \\ \hline w/o \(R_{\text{GRW}}\), \(L_{\text{GRW}}\) & 19.07 & 20.75 \\ - \(L_{\text{creativity}}\) & 14.43 & 14.43 \\ \hline with \(L_{\text{GRW}}\) & 26.73 & 27.39 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effect of the random walk-based penalty with mH measure on CUB dataset.
Figure 3: Mean Harmonic accuracy up to each task on SUN dataset. Our method outperforms both transductive and inductive methods
stored in buffers. However, the buffer size increases significantly over tasks since a fixed number of samples for each class is stored, and if the model struggles to make accurate predictions for certain classes, samples from these classes are absent in the buffer.
We empirically found that the class-balanced experience replay method proposed by [46] can be extremely helpful. At every task, we save the class attribute in \(\mathbf{A}^{1:t}\), the class center matrix \(\mathbf{C}\), and modify the buffer with current features noted as \(\mathbf{D}_{s}^{1:t}\), such that the buffer is balanced across all the seen classes.
In this comparison on the CUB dataset, we observe in Table 5 that the method using real replay can achieve better harmonic accuracy with a smaller buffer size (around 1/10 of the generative replay buffer size) and comparable backward transfer with a slightly larger buffer size (around 1/5 of the generative replay buffer size). Moreover, the real replay-based method is not as sensitive to the buffer size as the generative replay-based methods. It is worth noting that DVGR, A-CGZSL, and BD-CGZSL typically use generative replay, while only CN-CGZSL uses real replay. In addition, the last column in Table 5 shows that our proposed real replay method can also improve the harmonic accuracy of other methods.
To understand the real replay and generative replay, we extend our analysis by visualizing the distribution of buffer features across various classes in task 2 of the SUN dataset, as illustrated in Figure 4. Real replay approach exhibits a balanced allocation of features across all classes. Conversely, the generative replay technique displays an intriguing pattern, wherein certain classes lack a substantial number of stored features, while others exhibit a twofold increase. Notably, the classes with fewer stored features coincide with instances where the model's performance is suboptimal. This discrepancy can be attributed to the generative replay method's propensity to store exclusively the accurately classified generated data. Consequently, this uneven distribution of stored features could potentially lead to a compromised performance in these classes during subsequent tasks.
## 7 Conclusion and Discussion
In this paper, we focus on inductive continual zero-shot learning (CZSL) to eliminate the need of unseen information for more realistic learning systems. To this end, we developed a framework for the theoretical analysis of generative zero-shot learning, introducing a distance metric to measure the ability of generated samples to represent the unseen space when the unseen information is inaccessible during training. We also proposed a continual zero-shot algorithm, ICGZSL, which can reduce the distance without using unseen information during training. We conducted experiments on four popular continual zero-shot learning benchmarks: AWA1, AWA2, CUB, and SUN. Our approach achieved around \(3\%\) higher harmonic accuracy in the small dataset and around \(7\%\) in the larger dataset compared to previous inductive and transductive methods. These results demonstrate that unseen semantic information is not essential when a well-analyzed seen distribution and method are used.
However, it is important to acknowledge certain limitations in our work. While the developed theoretical bounds and distance measures hold promise for methodical numeric analysis, a more stringent alignment between empirical and anticipated distance measures could substantially enhance algorithmic design. Moreover, the consideration of multi-class classification conditions warrants attention. Additionally, the use of a frozen backbone for image feature extraction, though effective, encourages further exploration into continual learning methods that facilitate viable zero-shot learning capabilities while enabling the backbone to progressively accumulate knowledge.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & setting & AWA1 & AWA2 & CUB & SUN \\ \hline BD-CGZSL & D & 56.9/49.1 & 56.4 & 16.8 & 28.0 \\
**ours + inter.** & D & **60.0** & **58.8** & **32.8** & **41.6** \\
**ours + dic.** & D & 59.7 & 55.5 & 31.8 & 40.2 \\ \hline BD-CGZSL & O & 56.9/49.1 & **53.4** & 28.4 & 33.7 \\
**ours + inter.** & O & **49.6** & 48.5 & **32.3** & **39.6** \\
**ours + dic.** & O & 46.1 & 47.3 & 31.2 & 39.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: mH in dynamic setting (D) and online setting(O)
Figure 4: Comparison of replayed number of features per class in different replay method at task 2 in SUN dataset
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & Buffer & \multicolumn{2}{c}{Ours} & \multicolumn{1}{c}{BD-CGZSL (\(r\))} \\ \cline{3-5} & Size & BWT & mHA & mHA \\ \hline generative & 28.5k & 0.14 & 21.06 & 17.76 \\ real & 10k & 0.17 & 28.44 & 27.79 \\ real & 5k & 0.19 & 28.8 & 26.55 \\ real & 2.5k & 0.08 & 26.99 & 26.77 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of generative replay and real replay methods on CUB [59]. Dictionary-based attribute generation is used |
2307.13825 | The Thickness of Electric Current Sheets and Implications for Coronal
Heating | The thickness of current sheets is extremely important, especially as it
relates to the onset of fast magnetic reconnection. Onset determines how much
magnetic free energy can build up in a field before it is explosively released.
This has implications for many phenomena on the Sun and throughout the
universe, including the heating of the solar corona. Significant effort has
been devoted to the question of whether equilibrium current sheets in realistic
geometries have finite or zero thickness. Using a simple force balance
analysis, we show why current sheets without a guide field (2D) and with a
guide field that is invariant in the guide field direction (2.5D) cannot be in
equilibrium if they have both finite thickness and finite length. We then
estimate the conditions under which the tension of a curved line-tied guide
field can facilitate equilibrium in 3D sheets that are finite in all
dimensions. Finally, we argue that some quasi-statically evolving current
sheets undergoing slow stressing (e.g., when the coronal magnetic field is
subjected to photospheric boundary driving) may reach a critical shear, at
which point they lose equilibrium, spontaneously collapse, and reconnect. The
critical shear is generally consistent with the heating requirements of solar
active regions. | James A. Klimchuk, James E. Leake, Lars K. S. Daldorff, Craig D. Johnston | 2023-07-25T21:39:13Z | http://arxiv.org/abs/2307.13825v1 | # The Thickness of Current Sheets and Implications for Coronal Heating
###### Abstract
The thickness of current sheets is extremely important, especially as it relates to the onset of fast magnetic reconnection. Onset determines how much magnetic free energy can build up in a field before it is explosively released. This has implications for many phenomena on the Sun and throughout the universe, including the heating of the solar corona. Significant effort has been devoted to the question of whether equilibrium current sheets in realistic geometries have finite or zero thickness. Using a simple force balance analysis, we show why current sheets without a guide field (2D) and with a guide field that is invariant in the guide field direction (2.5D) cannot be in equilibrium if they have both finite thickness and finite length. We then estimate the conditions under which the tension of a curved line-tied guide field can facilitate equilibrium in 3D sheets that are finite in all dimensions. Finally, we argue that some quasi-statically evolving current sheets undergoing slow stressing - e.g., when the coronal magnetic field is subjected to photospheric boundary driving - may reach a critical shear, at which point they lose equilibrium, spontaneously collapse, and reconnect. The critical shear is generally consistent with the heating requirements of solar active regions.
## Introduction
Many explosive phenomena occurring on the Sun, within the heliosphere, and throughout the universe involve the slow buildup and sudden release of magnetic energy. Solar examples include flares (Kazachenko et al., 2022), coronal mass ejections (Chen, 2011), jets (Raouafi et al., 2016), and the nanoflares that heat the corona to temperatures of several million degrees (Klimchuk, 2015). In a typical scenario, slow forcing at the boundary of the system causes magnetic stresses to grow. Current sheets become thinner until eventually reaching a critical thickness whereupon fast magnetic reconnect sets in and energy is explosively released. On the Sun, the boundary forcing is provided by photospheric flows that displace the footpoints of coronal magnetic field
lines. Chaotic flows associated with turbulent convection are especially relevant to coronal heating.
Half a century ago, Parker (1972) proposed that infinitely thin current sheets - also called singular current sheets and tangential discontinuities - must develop whenever continuous 3D magnetic fields (without separatrices or X-points) are subjected to continuous motions at a line-tied boundary. This provided a straightforward explanation for coronal heating because ubiquitous current sheets would be expected to form and reconnect. Parker called this process "topological dissipation."
Parker's picture is in fact problematic. Minimal stress is built up in the field - and minimal energy is released - if reconnection happens too readily, as would be the case if current sheets were singular when they first form. Even a high occurrence frequency of weak events is inadequate to heat the corona because the time-averaged Poynting flux of energy pumped into the field by photospheric driving depends on the level of stress that is present (Klimchuk, 2015).
The question of whether current sheets have zero or finite thickness is still being actively debated. Most authors disagree with Parker and conclude that the current sheets in line-tied 3D fields are not singular. The scenarios that have been investigated include quasi-static sheet formation from boundary driving (van Ballegooijen, 1985; Antiochos, 1987; Zweibel & Li, 1987; Mikic, Schnack, & Van Hoven, 1989; Craig & Sneyd, 2005; Aulanier, Pariat, & Demoulin, 2005; Zhou et al., 2018; Huang et al., 2022), dynamic sheet formation from instabilities (Longcope & Strauss, 1994; Baty, 1997; Huang, Bhattacharjee, & Zweibel, 2010), and dynamic sheet formation from the relaxation of braided out-of-equilibrium fields (Wilmot-Smith, Hornig, & Pontin, 2009; Pontin & Hornig, 2015). Low (1992) and Ng and Bhattacharjee (1998) argue in favor of Parker. However, none of these studies can be considered definitive or universal.
Here we address the problem from a different approach. We examine the balance of forces within current sheets of finite thickness to determine whether and when equilibrium is possible. We first consider simplified 2D and 2.5D geometries before moving on to fully 3D sheets with line-tied boundary conditions, as applies to the corona.
While we emphasize coronal heating, our results are general. We note that reconnection occurs at different current sheet thicknesses in different physical environments. In nanoflares, the tearing instability that initiates reconnection is fast for sheets much thicker than kinetic scales - so a resistive MHD approach is valid - while in the magnetosphere, kinetic effects are fundamentally important at reconnection onset.
## 2D and 2.5d Current sheets
There exist many published two-dimensional solutions for equilibrium current sheets without a guide field (2D) and with a guide field that is invariant in the guide field direction (2.5D).1 These sheets are either infinitely long (Harris 1962) or infinitely thin (Green 1965; Syrovatskii 1971; Priest 1985). Here, length and thickness refer respectively to the dimensions along and across the sheet in the plane of reconnection. We searched the literature for solutions that are finite in both length and thickness but were unsuccessful. This led us to wonder whether such solutions are possible, and, for the reasons given below, we have concluded that they are not.
Footnote 1: The guide-field direction is perpendicular to what is commonly referred to as the plane of reconnection, which contains the magnetic field components that participate directly in reconnection.
The top panel of Figure 1 shows a 2D equilibrium current sheet of finite length and zero thickness (Green 1965). The sheet is the horizontal line at the center that is bounded by two Y-points. The field is potential everywhere except at the sheet itself and is oppositely directed above and below the sheet. Since there are no plasma forces, the Lorentz force is everywhere zero:
\[\frac{1}{c}(\boldsymbol{J}\times\boldsymbol{B})=0. \tag{1}\]
The Lorentz force can be separated into two terms - one associated with magnetic pressure and the other associated with magnetic tension. These two forces exactly balance at all locations:
\[\nabla\left(\frac{\boldsymbol{B}^{2}}{8\pi}\right)=\ \frac{1}{4\pi}\left( \boldsymbol{B}\cdot\nabla\right)\boldsymbol{B}. \tag{2}\]
The bottom panel of Figure 1 shows the magnetic pressure between two flux surfaces centered on the sheet. Gradients in pressure are offset by magnetic tension. Note the horizontal gradient in pressure along the sheet, with the highest pressure occurring in the middle (\(x\)= 0).
Magnetic tension is often associated with curved fields, but it also plays an important role in diverging and converging fields. To understand magnetic tension, it is instructive to consider the Maxwell stress tensor:
\[T_{ij}=\ \frac{1}{4\pi}\left(B_{i}B_{j}-\ \frac{1}{2}B^{2}\delta_{ij} \right). \tag{3}\]
The Lorentz force is equal to the divergence of the stress tensor, and the integral of the Lorentz force over a volume is, via the divergence theorem, identically equal to the integral over the bounding surface of the normal component of the stress tensor. The concept of magnetic tension refers to the idea that field lines tug on any surface through which they pass. The tugging force is directed along the field line. The first term in Equation 3 is associated with tension, and the second
term is associated with magnetic pressure. Pressure forces act only perpendicular to a surface, whereas tension forces can have both perpendicular and transverse components.
Consider a 2D curved field passing through a cube, as sketched at the top of Figure 2. Magnetic pressure pushes inward on all the faces of the cube. Tension pulls downward and to the left on the left face and downward and to the right on the right face. The leftward and rightward forces cancel, and the net tension force is downward. This may or may not be offset by a difference in pressure and/or vertical tension at the top and bottom faces. Examples of curved fields where the tension and pressure gradient forces are balanced can be seen near the Y-points in Figure 1.
Figure 1: Field lines of a 2D equilibrium magnetic field with an infinitely thin current sheet of finite length (top). Magnetic pressure between two flux surfaces equally spaced above and below the current sheet (bottom).
The bottom of Figure 2 shows a diverging 2D field. The tension in the field lines is predominantly horizontal. However, moving from left to right, the horizontal component of tension decreases while the vertical component increases. Integrated over the faces of the cube, the vertical forces cancel, and there is a net tension force to the left. This is offset by a greater pressure pushing on the left face than the right face. This is the type of magnetic force balance that characterizes the colored region away from the Y-points in Figure 1.
Imagine that we create a finite thickness current sheet by replacing the magnetic field in the colored region with plasma having a gas pressure distribution equal to that of the removed magnetic pressure. The system is no longer in force balance. The horizonal pressure gradient remains, but there is no magnetic tension to offset it. The plasma will respond by flowing horizontally away from the middle of the sheet toward the Y-points in both directions. The sheet will become thinner as plasma evacuates and the external magnetic pressure squeezes in from above and below.
Can our hypothetical current sheet evolve to establish an eventual equilibrium with finite thickness? Because there is no magnetic field within the sheet, there is also no Lorentz force, so the plasma pressure must be uniform throughout the entire sheet. Force balance across the sheet
Figure 2: Cube threaded by a 2D curved magnetic field (top) and 2D diverging field (bottom).
boundary then requires uniform magnetic pressure just outside the boundary to match the uniform plasma pressure just inside. This means the field strength must be constant along the boundary.2
Footnote 2: A common misconception is that tension exerts a force perpendicular to the boundary if the boundary is curved. If the normal to the boundary has index i, then the tension term in the Maxwell stress tensor (Equation 3) – the first term – vanishes in all directions j because \(B=0\).
Figure 3 shows a schematic sketch of the sheet. Points B and C are on the sheet boundary and point A is vertically above B. If the field strengths and magnetic pressures are the same at B and C, then the average magnetic pressure gradient along the path from A to B is greater than along the path from A to C, because the distance is shorter. To have equilibrium, the tension force must also be stronger along AB compared to AC. We can express the tension force (right-hand side of Equation 2) as \(B^{\hat{\cal F}}/(4\pi Rc)\), where \(Rc\) is the radius of curvature of the field. The radius of curvature is larger along AB than along AC and therefore the tension force is weaker, not stronger. Thus, there can be no force balance external to the sheet if the field strength is uniform along the sheet boundary. Force balance is possible only if the field strength decreases toward the Y-points: \(B_{\cal C}<B_{\cal B}\). This is incompatible with uniform plasma pressure inside the boundary. Since no finite thickness equilibrium exists, the sheet must collapse to a singularity.
Note that the presence of external plasma would not change the situation. Magnetic pressure can be replaced by total pressure, and the argument above still holds. Force balance external to the sheet requires nonuniform total pressure just outside the boundary, which is incompatible with uniform plasma (total) pressure just inside. It should also be noted that plasma pressure is constant
Figure 3: Current sheet of finite thickness with points B and C on the sheet boundary and points D and E at the center of the sheet. Point A is vertically above B and D. Red and blue arrows indicate the directions of pressure gradient and tension forces, respectively.
along every field line in an equilibrium because there is no Lorentz force parallel to the field. Thus, the plasma pressure is uniform just outside the boundary and the required nonuniformity is provided by the magnetic pressure. Finally, \(\beta=8\pi P/B^{2}\) is of order 1% in solar active regions, so the plasma has minimal impact on force balance in general. For these reasons, we do not include an external plasma in any of our analysis.
The hypothetical current sheet we have created by replacing all the magnetic field in the colored region of Figure 1 is illustrative but unrealistic. It should more properly be called a plasma sheet because the current is concentrated entirely at the sheet boundary, where the field ends abruptly. A more realistic situation is where the field and magnetic pressure decrease gradually to zero from the boundary toward the center (\(\nu=0\)), while the plasma pressure increases gradually to a maximum at the center. The current is then smoothly distributed throughout the sheet. The well-known Harris sheet (1962) is of this type.
Figure 4 shows simple representations of the middle sections of four different current sheets, corresponding to the vicinity of \(x=0\) in Figure 1. Case I is the original field with an infinitely thin sheet. Case II is the plasma sheet discussed above. Case III is a current sheet where the field gradually transitions to plasma. Case IV is similar to Case III except that the in-plane magnetic field is replaced by a guide field component out of the plane, rather than by plasma. The magnetic field vector rotates smoothly by 180\({}^{\rm o}\) across the sheet in Case IV. A modified version includes an additional uniform guide field, \(B_{\rm g0}\), and the field vector rotates by less than 180\({}^{\rm o}\). This is the situation for the current sheets associated with coronal heating, where a rotation of roughly 20\({}^{\rm o}\) is needed to explain the energy requirements of active regions (e.g., Klimchuk 2015). Note that the sketches are only schematic and do not include the small horizontal gradients that would be expected and that are seen in Figure 1.
If the system has no variation out of the plane, then any guide field that is present must be straight. It therefore behaves like a plasma - it has pressure but exerts no tension force. Case III and Case IV are therefore effectively equivalent in 2D. We discuss the 2D force balance of Case III below, but plasma pressure and guide field pressure are interchangeable, so the conclusions apply equally to Case IV.
We start by considering the forces along the line between points E and C in Figure 3. E is at the Y-point and C is at the sheet boundary. The field is curved away from E and therefore a tension force is directed away from E. To balance this tension force, the total pressure must be smaller at E than C. In Case III, pressure is provided entirely by plasma at E and entirely by magnetic field at C, so \(P_{\rm E}<B_{\rm C}{}^{2}/8\pi\), where \(P\) indicates gas pressure. Force balance external to the sheet requires \(B_{\rm C}{}^{2}/8\pi<B_{\rm B}{}^{2}/8\pi\), as discussed above. Vertical tension is very weak between B and D, so the total pressure must be similar at the two locations, implying \(P_{\rm D}\simeq B_{\rm B}{}^{2}/8\pi\). Following the chain, we find that \(P_{\rm E}<P_{\rm D}\). However, because there is no Lorentz force along the magnetic field to balance this
plasma pressure difference, equilibrium is not possible. Plasma will flow horizontally outward from the middle of the sheet, and the sheet will collapse to a singularity, just as in Case II. The same is true of Case IV.
We conclude that equilibrium current sheets in 2D and 2.5D cannot have both finite length and finite thickness. Force balance cannot be achieved both inside and outside the sheet while at the same time satisfying force balance across the sheet boundary. The sheets must be singular if they have finite length.
## 3D Current sheets with line-tying
The situation is entirely different if a guide field is present and variations are allowed in the out-of-plane direction. In this fully 3D case, the guide field can become curved, which introduces a tension force that was not previously present. Suppose the guide field is line-tied at two ends, i.e., the end points are held at fixed positions. As plasma flows away from the middle of the sheet
Figure 4: Schematic representations of the middle sections of four current sheets: Case I - infinitely thin sheet; Case II quasi-uniform plasma sheet with no internal magnetic field; Case III – current sheet where the in-plane field gradually transitions to plasma moving inward; Case IV - current sheet where the in-plane field gradually transitions to guide field. Small field divergence and small horizontal gradients are expected but not shown.
toward the Y-points, the frozen-in guide field bows outward, as shown schematically in Figure 5. This produces a tension force that opposes the flow. If the force is strong enough, it may balance the horizontal pressure gradient driving the flow and allow an equilibrium to be established, thus preventing a full collapse.
Consider the version of Case IV that includes the additional uniform guide field, \(B_{g\theta}\). We take \(B_{g\theta}\) to be substantially larger than the shear field component external to the sheet, \(B_{x\theta}\) at point B in Figure 3, as appropriate for the coronal heating problem. The in-plane field vanishes at Y-points, so the pressure at point E is \(B_{g\theta}\)\({}^{2}\)/(8\(\pi\)). Because of the minimal in-plane tension between B and D, the pressure at D is approximately \((B_{g\theta}\)\({}^{2}\)+\(B_{x\theta}\)\({}^{2})\)/(8\(\pi\)) and takes the form of an enhanced guide field. The horizontal pressure gradient along the center of the sheet is therefore approximated by \([B_{x\theta}\)\({}^{2}\)/(8\(\pi\))/[\(\lambda\)/2] = B_{x\theta}\)\({}^{2}\)/(4\(\pi\lambda\)), where \(\lambda\) is the sheet length.
The horizontal tension force provided by the guide field is \(B_{g\theta}\)\({}^{2}\)/(4\(\pi R_{c}\)), where \(R_{c}\) is its radius of curvature. The ratio of the tension to pressure forces is therefore \(F_{\theta}\)/\(F_{p}\)\(\approx\)\((\lambda\)/\(R_{c}\))(\(B_{g\theta}\)/\(B_{x\theta}\))\({}^{2}\). Equilibrium is possible as long as \(R_{c}\) and/or \(\lambda\) can adjust to make this ratio unity for a given magnetic shear (\(B_{x\theta}\)/\(B_{g\theta}\)).
Consider an equilibrium sheet in a field that is subjected to slow boundary driving. As the shear component of the field \(B_{x\theta}\) increases, so too does the pressure gradient along the sheet. The plasma
Figure 5: Schematic representation of a 3D current sheet that contains a guide field that is line-tied above and below. Horizontal flows from the middle of the sheet toward the Y-points have caused the field lines to become bowed. The length of the current sheet, \(\lambda\), and separation of the line-tied footpoints, \(L\), are indicated.
responds by slowly moving outward toward the Y-points, further bowing the guide field, increasing its tension, and maintaining a quasi-static force balance.3
Footnote 3: Current layers in the simulation by Aulanier, Pariat, & Demoulin (2006) continue to thin for some time after the boundary driving ceases, indicating a non-trivial deviation from static conditions.
There is an upper limit to the tension force in this scenario that occurs when an initially straight field line passing through the middle of the sheet (\(x=0\)) is displaced all the way to a Y-point at the end. The two field lines in Figure 5 are not far from this limiting state. We can estimate the maximum tension force by assuming a circular arc for the guide field and taking \(\lambda<<L\), where \(L\) is the separation of the line-tied ends (loop length in the coronal heating context). From simple geometry, the minimum radius of curvature is \(R_{c,min}\simeq(1/\lambda)(L/2)^{2}\). Substituting into the expression above for the tension-to-pressure force ratio, we obtain
\[\frac{F_{t}}{F_{p}}\,\leq\,\,4\left(\frac{\lambda}{L}\frac{B_{g0}}{B_{x0}} \right)^{2}\,\,. \tag{4}\]
Equilibrium is not possible when the right-hand side is less than unity because then tension is too weak to balance the pressure gradient. Systems with large footpoint separation (long loops) and strongly sheared fields are more prone to nonequilibrium, i.e., less able to support current sheets of finite thickness.
We can relate Equation 4 to a critical shear for a given sheet length and footpoint separation,
\[\frac{B_{x0}}{B_{g0}}>\left(\frac{B_{x0}}{B_{g0}}\right)_{crit}=\frac{2\lambda }{L}\,, \tag{5}\]
or to a critical footpoint separation for a given length and shear,
\[L>\,\,L_{crit}=2\lambda\left(\frac{B_{g0}}{B_{x0}}\right). \tag{6}\]
When current sheets are free to lengthen - increase \(\lambda\) - they will tend to do so to keep the conditions subcritical and maintain equilibrium. However, geometric constraints may sometimes prevent lengthening beyond a certain point. For example, the current sheets that separate elemental magnetic flux tubes that fill the corona4 can be no longer than a tube "diameter" (the cross sections need not be circular, so "diameter" should not be taken literally). As chaotic photospheric flows twist and tangle the tubes, the magnetic shear across a mutual boundary increases and the sheet may transition from subcritical to supercritical conditions. It will then lose equilibrium, spontaneously collapse, and trigger magnetic reconnection. For a typical coronal loop length of 50,000 km and diameter of 2,000 km, the critical shear is 0.08, corresponding to a field rotation
across the sheet of \(10^{\rm o}\). This is comparable to the roughly \(20^{\rm o}\) needed to explain the energy budget of active regions (Klimchuk, 2015). We note, however, that the elemental tubes that comprise a loop have smaller diameters, implying a smaller critical shear.
We caution that Equations 4, 5, and 6 are highly approximate and depend on the assumption that the guide field takes the shape of a circular arc. The numerical coefficients should be treated with special caution. Nonetheless, the dependencies on current sheet length, shear, and line-tied footpoint separation are intuitively very plausible.
## Discussion
Using a simple force balance analysis, we showed why equilibrium cannot be achieved in 2D and 2.5D current sheets that have both finite length and finite thickness. We then estimated the conditions under which the tension of a line-tied guide field can facilitate 3D equilibrium in current sheets that are finite in all dimensions. We suggested that continuous 3D fields subjected to continuous driving at a line-tied boundary - the "Parker problem" - will contain current sheets of finite thickness until they reach a critical shear, whereupon they lose equilibrium, spontaneously collapse, and reconnect. The value of the critical shear is generally consistent with the observed heating requirements of solar active regions.
Equation 6 expresses the critical conditions in terms of the footpoint separation of the line-tied guide field, or loop length. A critical separation was also reported by Zhou et al. (2018). They investigated the so-called Hahm, Kulsrud, Taylor (HKT) problem involving the current sheet of a simply sheared force-free field that is locally squeezed and allowed to relax to a new equilibrium. The sheet is not bounded by Y-points, but the system is periodic in what would be the \(x\) direction of our figures. We associate the \(x\) dimension with a sheet length \(\lambda\) because the essential physical effects included in our analysis are present in that direction in their simulations as well: (1) the in-plane field expands toward the periodic boundaries, thus providing an outward magnetic pressure gradient along the sheet, and (2) symmetric flows responding to the gradient will not cross the boundaries, thus limiting the bowing of the guide field. Note that Zhou et al. refer to the footpoint separation as a length, not to be confused with our current sheet length \(\lambda\).
Zhou et al. found that the relaxed sheet is singular in 2.5D versions of their model and has finite thickness in 3D line-tied versions. The thickness decreases as the footpoint separation increases. There is a strong suggestion, but not definitive proof, that the 3D sheet becomes singular when the separation exceeds a critical size of \(29\lambda\). In comparison, our Equation 6 predicts a critical separation of \(11\lambda\). We do not consider this difference significant given the approximate nature of our derivation. Furthermore, the magnetic pressure gradient along the Zhou et al. sheet is smaller than it would be if the sheet terminated at Y-points. Less tension is therefore required for force balance, so we would expect a larger critical separation. On the other hand, the effects of line tying tend to be more pronounced near the line-tying boundaries, and the field line curvature will be
relatively reduced away from the boundaries (Zweibel & Boozer 1985; Robertson, Hood, & Lothian 1992). This violates our assumption of a circular shape for the guide field and implies a reduced critical separation or critical shear.
Other studies of 3D line-tied current sheets also find that the sheet thickness decreases with increasing separation of the footpoints (Longcope & Strauss, 1994; Baty 1997; Huang, Bhattacharjee, & Zweibel 2010; Craig & Pontin 2014). To our knowledge, only we and Zhou et al. (2018) have proposed a critical separation beyond which the thickness plummets to zero.
It should be noted that our simple analysis assumes a planar current sheet. The tension that we describe arises from a curved guide field within the plane, and it exerts a force in the x-direction. If the current sheet were itself bowed, there would be an additional guide field curvature force in the y-direction that we do not consider. This can be the case, for example, when twisted flux tubes become kink unstable. Baty (1997) suggests that this different curvature may help prevent singularities from forming. Whether it would extend the critical footpoint separation (loop length) to larger values - or even eliminate it altogether - has yet to be determined. We are skeptical, however, because the force is directed perpendicular to the sheet and so is unable to balance the sheet-aligned pressure gradients that are the root of the problem.
The existence of a critical shear for loss of equilibrium and current sheet collapse is the most important outcome of our work. It offers a possible new explanation for reconnection onset, which is crucial for explaining a wide range of phenomena, including but not limited to coronal heating. It remains to be determined whether current sheets survive as they thin toward the critical value. The tearing instability can be very fast even in relatively thick sheets under coronal conditions (Pucci & Velli 2014; Leake, Daldorff, & Klimchuk 2020). Fast reconnection may set in before the critical shear is reached. We consider Equation 5 to be highly promising, but it must be rigorously and quantitatively evaluated. High resolution numerical simulations will be especially important.
## Acknowledgements
This work was supported by the GSFC Heliophysics Internal Scientist Funding Model competitive work package program and by a grant from the NASA Heliophysics Living With a Star Science Program. We are grateful to Yi-Min Huang and Spiro Antiochos for helpful discussions and to the referees for comments that led to a significantly modified and improved paper. |
2301.11364 | Describing metric-affine theories anew: alternative frameworks, examples
and solutions | In this work we describe metric-affine theories anew by making a change of
field variables. A series of equivalent frameworks is presented and
identifications are worked out in detail. The advantage of applying the new
frameworks is that any MAG theory can be handled as a Riemannian theory with
additional fields. We study the Hilbert-Palatini action using the new field
variables and disclose interesting symmetries under $SO$ transformations in
field space. Then, we use solvable and suitable Riemannian theories as seed
models for solvable MAG theories, restricting ourselves to three examples. We
present a black hole solution with torsion and non-metricity which under a
certain tuning acquires a regular core. A de Sitter universe with the expansion
powered by 3-form torsion, is also reported. | Damianos Iosifidis, Konstantinos Pallikaris | 2023-01-26T19:18:58Z | http://arxiv.org/abs/2301.11364v2 | # Describing metric-affine theories anew: alternative frameworks, examples and solutions
###### Abstract
In this work we describe metric-affine theories anew by making a change of field variables. A series of equivalent frameworks is presented and identifications are worked out in detail. The advantage of applying the new frameworks is that any MAG theory can be handled as a Riemannian theory with additional fields. We study the Hilbert-Palatini action using the new field variables and disclose interesting symmetries under \(SO\) transformations in field space. Then, we use solvable and suitable Riemannian theories as seed models for solvable MAG theories, restricting ourselves to three examples. We present a black hole solution with torsion and non-metricity which under a certain tuning acquires a regular core. A de Sitter universe with the expansion powered by 3-form torsion, is also reported.
+
Footnote †: institutetext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institutext: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institut: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
_University of California, Berkeley, CA 94720, USA_
+
Footnote †: institut: \({}^{\dagger}\)_Mathematics Institute, University of Science and Technology,_
In view of the above, looking for alternative gravity theories is a justified course of action.1 The search for these so-called modified theories of gravity is in essence a search for healthy field equations that differ from those of Einstein. Owing to Lovelock and his undisputed theorem [10; 11], there is a list of assumptions that we need to break (in one or more ways) in order to find such a set of equations. In particular, one of the assumptions is that space-time is a smooth Lorentzian manifold equipped with a time orientation. Therefore, we can dodge the stringent consequences of the theorem by permitting the affine connection to be an independent field variable beyond the metric.
Footnote 1: For a review of the zoo of modified-gravity theories see [7; 8; 9] and references therein.
A general connection has both torsion and non-metricity, and a gravitational theory for the metric and the affine connection is known as a Metric-Affine Gravity (MAG) theory [12]. MAG theories exhibit many attractive features. First, the presence of a new gravitational potential, the independent affine connection, brings gravity conceptually closer to the other interactions whose mediators are gauge connections.2 Second, an intriguing feature of metric-affine theories of gravity is the emergence of a hypermomentum current [13; 14; 15; 16] in the presence of matter couplings to the gauge connection.
Footnote 2: See the notion of affine gauge theory in [12].
This differential form, obtained by varying the matter action with respect to the gauge connection, can be decomposed into the irreducible spin, dilation and shear parts which ought to excite the post-Riemannian structure. In the above sense, MAG theories bring forth an astonishing interplay between matter with non-trivial microstructure and non-Riemannian effects. Finally, note that an interesting discussion has been revived about the status of MAG as a quantum theory though definitive conclusions are yet far from being drawn (see [17; 18; 19] and references therein).3
Footnote 3: See also [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34] for some recent advances in the field.
As with Riemannian theories, one is particularly interested in MAG theories which are solvable, ideally exactly solvable (at least for some symmetry ansatz). If the task of finding exact solutions in Riemannian theories is in most cases a difficult one, then the trouble gets double in MAG because we also have to determine the connection. In fact, perhaps the most persistent obstruction to obtaining exact solutions with non-vanishing torsion and non-metricity in metric-affine theories,4 is the computational complexity one is bound to face when attempting to solve the field equations for the affine connection.
Footnote 4: See [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 1777; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 212; 213; 214; 215; 216; 217; 218; 219; 220; 221; 222; 223; 224; 225; 226; 227; 228; 229; 230; 231; 232; 233; 234] for some recent advances in the field.
Footnote 4: See [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 72; 74; 75; 76; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 82; 84; 86; 88; 89; 91; 85; 87; 89; 88; 86; 89; 92; 89; 93; 80; 83; 85; 89; 94; 80; 86; 87; 88; 89; 95; 88; 96; 89; 97; 98; 101; 102; 103; 104; 105; 106; 107; 108; 109; 111; 113; 114; 115; 116; 117; 118; 119; 121; 123; 124; 125; 126; 127; 128; 129; 131; 132; 133; 134] and references therein for some examples of black hole solutions with torsion and/or non-metricity.
Footnote 5: See [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 52; 53; 54; 56; 57; 59; 61; 62; 630; 63; 64; 65; 67; 68; 69; 71; 80; 81; 83; 85; 86; 87; 89; 90; 88; 87; 88; 89; 91; 89; 92; 88; 89; 93; 80; 84; 89; 80; 85; 86; 87; 88; 89; 94; 81; 89; 80; 86; 89; 95; 87; 88; 89; 96; 97; 89; 98; 99; 100; 101; 102; 104; 106; 107; 108; 109; 117; 118; 119; 133; 140; 141; 143; 144; 145; 146; 147; 148; 149; 150; 153; 155; 156; 157; 158; 159; 161; 172; 173; 175; 176; 177; 178; 179; 180; 181; 183; 185; 186; 187; 188; 189; 190; 191; 192; 193; 194; 195; 196; 197; 198; 199; 199; 200; 210; 211; 222; 233; 240; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256; 257; 258; 259; 260; 270; 28; 28; 297; 298; 281; 299; 30; 310; 320; 331; 331; 332; 333; 334] for some recent advances in the field.
In the dominant part of the MAG literature, the strategy to make the connection dynamical is, roughly put, to consider (at least) quadratic curvature invariants like \(R^{2}\), \(R_{\mu\nu}R^{\mu\nu}\), \(R^{\lambda\rho\mu\nu}R_{\lambda\rho\mu\nu}\), et cetera. This strategy is indeed well-motivated and fairly general, but it can quickly turn any attempt at finding a solution into a nearly impossible task, even for relatively simple (in form) Lagrangians of this sort. The reason behind this is that the affine connection is a very compact package of a large number of degrees of freedom, the dynamics of which are encoded in the components of a single tensor equation and presented in an awfully coupled manner. In fact, among other techniques, one almost always tries to split this master tensor equation into simpler, hopefully decoupled equations by acting on
it with some symmetry projector, or taking traces. Therefore, it may not always be the case that the affine connection is the optimal field variable, beyond the metric, to describe a MAG theory, at least not for all intents and purposes.
In this work, we embrace this point of view and use it as a motivation for our proposal. Our goal is to make a change of field variables that will allow us to trade the connection field equations for an equivalent "decongested" system of simpler field equations obtained by letting an action vary with respect to tensor and vector fields. These tensor and vector fields, used to describe MAG theories anew, will be the irreducible pieces of torsion and non-metricity under the Lorentz group. Ipso facto, they are identified with the fundamental fields, the metric and the affine connection. We then work out a complete mapping between the two frameworks which can be later used as a dictionary. The advantage of the new framework is that we can now handle MAG theories as Riemannian theories with additional fields, at least within the context of the variational problem. These additional fields are part of the space-time geometry it self, and not some external entities.
With the new framework established, we proceed with giving examples of how to construct MAG theories in vacuum which result in a selective and tractable self-excitation of the connection. Although there is no universal prescription, a basic idea underlies all our examples. We take Riemannian theories with additional fields (vectors and tensors), which we exactly know how to solve, and we cast them, after some minor necessary modifications, into MAG theories which _effectively_ yield the same field equations. The role of the additional fields is now performed by the new field variables. Their propagation is tantamount to the excitation of (part of) the post-Riemannian structure. Even though the form of the metric solution in such MAG theories will, more or less, be already known in the gravity literature, the full solution, including the connection, will be novel, for it will in general feature non-zero torsion/non-metricity backgrounds.
Plan of this work.In section 2 we convey the bare minimum in metric-affine theories. Then, in section 3 we present the alternative framework and a detailed mapping between the latter and the ordinary Palatini approach. Using the new framework, we revisit the Hilbert-Palatini action in section 4 hoping for fresh insight, and we introduce a useful variant of the new framework when projective symmetry is at play. Finally, in section 5 we showcase a series of examples where we apply the previously developed frameworks, and we also report solutions therein, concluding in section 6.
## 2 Preliminaries
This section is devoted to a brief communication of the MAG preliminaries. In metric-affine theories the affine connection is an independent field variable beyond the metric. We use it to define a covariant derivative whose action on vector and co-vector fields is given by
\[\nabla_{\mu}V^{\nu} =\partial_{\mu}V^{\nu}+\Gamma^{\nu}_{\lambda\mu}V^{\lambda}, \tag{1a}\] \[\nabla_{\mu}V_{\nu} =\partial_{\mu}V_{\nu}-\Gamma^{\lambda}_{\nu\mu}V_{\lambda}, \tag{1b}\]
where \(\Gamma^{\lambda}_{\mu\nu}\) are the connection symbols. A general affine connection features both torsion and non-metricity given by
\[T^{\lambda}{}_{\mu\nu} = 2\Gamma^{\lambda}_{[\nu\mu]}, \tag{2a}\] \[Q_{\lambda\mu\nu} = -\nabla_{\lambda}g_{\mu\nu}, \tag{2b}\]
respectively. The former introduces twisting; parallel transport along a closed path results in a translation. The latter measures the failure of the metric to be covariantly constant; parallel transport brings about a change in vector norms.
Out of torsion and non-metricity we can construct three vectors and one axial tensor. Regarding torsion, we have the vector \(T_{\mu}=T^{\lambda}{}_{\mu\lambda}\) and the axial tensor
\[S^{\alpha_{1}\ldots\alpha_{n-3}}=-\frac{1}{6(n-3)!}\tilde{\epsilon}^{\alpha_{ 1}\ldots\alpha_{n-3}\lambda\mu\nu}T_{\lambda\mu\nu}. \tag{3}\]
Here, \(\tilde{\epsilon}_{\alpha_{1}\ldots\alpha_{n}}=\sqrt{-\mathfrak{g}}\epsilon_{ \alpha_{1}\ldots\alpha_{n}}\) with \(\epsilon_{\alpha_{1}\ldots\alpha_{n}}\) being the Levi-Civita symbol in \(n\) space-time dimensions. Our convention for the symbol is \(\epsilon_{01\ldots n-1}=1=-\epsilon^{01\ldots n-1}\). In \(n=4\) dimensions, the above axial tensor is known as the torsion pseudo-vector,
\[S^{\alpha}=-\frac{1}{6}\tilde{\epsilon}^{\alpha\lambda\mu\nu}T_{\lambda\mu \nu}. \tag{4}\]
Regarding non-metricity, we have the vector \(Q_{\mu}=Q_{\mu\alpha\beta}g^{\alpha\beta}\), which is proportional to what is often called the Weyl vector in MAG lore, and \(\tilde{Q}_{\mu}=Q_{\alpha\beta\mu}g^{\alpha\beta}\).
Continuing, we define the curvature tensor of the general affine connection as
\[R^{\mu}{}_{\nu\alpha\beta}=\partial_{\alpha}\Gamma^{\mu}_{\nu\beta}+\Gamma^{ \mu}_{\rho\alpha}\Gamma^{\rho}_{\nu\beta}-\alpha\leftrightarrow\beta. \tag{5}\]
From the above we can form three independent contractions,
\[R_{\nu\beta} = R^{\mu}{}_{\nu\mu\beta}, \tag{6a}\] \[\hat{R}_{\alpha\beta} = R^{\mu}{}_{\mu\alpha\beta}=\partial_{[\alpha}Q_{\beta]},\] (6b) \[\tilde{R}^{\lambda}{}_{\alpha} = R^{\lambda}{}_{\mu\alpha\nu}g^{\mu\nu}, \tag{6c}\]
which go by the name Ricci tensor, homothetic-curvature tensor, and co-Ricci tensor, respectively. Notice that only the last contraction requires a metric. Finally, contracting indices once more with the metric, we form the Ricci scalar \(R=R_{\mu\nu}g^{\mu\nu}=\tilde{R}^{\mu}{}_{\mu}\). As per tradition, we will refer to the curvature tensor associated with the Levi-Civita connection as the Riemann tensor. Its single (double) trace will bear the name Riemannian Ricci tensor (scalar).
Furthermore, it is a well-established fact that every affine connection differs from another affine connection by a tensor. Therefore, we can always write a general affine connection as
\[\Gamma^{\lambda}_{\mu\nu}=\tilde{\Gamma}^{\lambda}_{\mu\nu}+N^{\lambda}{}_{ \mu\nu}, \tag{7}\]
where
\[\tilde{\Gamma}^{\lambda}_{\mu\nu}=\frac{1}{2}g^{\rho\lambda}\left(\partial_{ \mu}g_{\nu\rho}+\partial_{\nu}g_{\mu\rho}-\partial_{\rho}g_{\mu\nu}\right) \tag{8}\]
are the Christoffel symbols, and
\[N^{\lambda}{}_{\mu\nu} = \frac{1}{2}g^{\rho\lambda}\left(Q_{\mu\nu\rho}+Q_{\nu\rho\mu}-Q_{ \rho\mu\nu}-T_{\rho\mu\nu}-T_{\nu\mu\rho}-T_{\mu\nu\rho}\right) \tag{9}\]
is the so-called distortion tensor encompassing the non-Riemannian DoF. Torsion and non-metricity can always be traded for the distortion tensor via the relations \(T^{\lambda}{}_{\mu\nu}=-2N^{\lambda}{}_{[\mu\nu]}\) and \(Q_{\lambda\mu\nu}=2N_{(\mu\nu)\lambda}\).
Note that eq. (7) suggests that we can split off any quantity into a Riemannian part and non-Riemannian contributions; this is the reputed post-Riemannian expansion of a quantity. For instance, the post-Riemannian expansion of the curvature tensor reads
\[R^{\mu}{}_{\nu\alpha\beta}=\tilde{R}^{\mu}{}_{\nu\alpha\beta}+2\tilde{\nabla} _{[\alpha}N^{\mu}{}_{[\nu|\beta]}+2N^{\mu}{}_{\lambda[\alpha}N^{\lambda}{}_{ |\nu|\beta]}, \tag{10}\]
where \(\tilde{\nabla}_{\alpha}\) is the Levi-Civita covariant derivative and \(\tilde{R}^{\mu}{}_{\nu\alpha\beta}\) the Riemann tensor. Unless otherwise stated, quantities with a tilde accent will always stand for objects associated with the Levi-Civita connection.
## 3 The alternative framework
Observe that the presence of an affine connection as an independent field variable introduces \(n^{3}\)-many additional _a priori_ DoF. Undeniably, the affine connection, being an essential constituent of the metric-affine geometry, is a meaningful variable to work with; torsion and non-metricity are after all properties of a connection. However, squashing that many degrees into a single field is not always the most convenient option. In this section, we instead distribute them among seven fields which correspond to the irreducible pieces of torsion and non-metricity. This strange way of re-organizing the connection DoF will be suitable for purposes presented during a later stage.
The new fields will of course be identified with the metric and the affine connection, the fundamental field variables in metric-affine theories, thus allowing us -- via this change of field variables -- to describe any MAG theory anew. We will show in full generality that the field equations derived within this new framework imply and are implied by the field equations obtained in the familiar context of the Fundamental (or Palatini) Framework (FF) where the metric and the affine connection are the independent variables. The freedom to switch between different formulations of the same theory will prove to be a great asset in the next sections.
In what follows, \(\hat{a}_{\lambda\mu\ldots}\) denotes the completely traceless part of a tensor \(a_{\lambda\mu\ldots}\), whereas \(\bar{a}_{\lambda\mu\ldots}\) denotes the complement of \(\hat{a}_{\lambda\mu\ldots}\) in \(a_{\lambda\mu\ldots}\), viz., \(\bar{a}_{\lambda\mu\ldots}=a_{\lambda\mu\ldots}-\hat{a}_{\lambda\mu\ldots}\). The irreducible decomposition of the torsion tensor under the Lorentz group yields
\[T_{\lambda\mu\nu}=H_{\lambda\mu\nu}+\hat{t}_{\lambda\mu\nu}+\bar{t}_{\lambda \mu\nu}, \tag{11}\]
where
\[H_{\lambda\mu\nu} = T_{[\lambda\mu\nu]}, \tag{12a}\] \[\hat{t}_{\lambda\mu\nu} = T_{\lambda\mu\nu}-H_{\lambda\mu\nu}-\bar{t}_{\lambda\mu\nu}, \qquad\bar{t}_{\lambda\mu\nu}=\frac{2}{n-1}g_{\lambda[\nu}T_{\mu]}. \tag{12b}\]
Note that instead of the 3-form field \(H_{\lambda\mu\nu}\) one may alternatively use the dual tensor \(S^{\alpha_{1}...\alpha_{n-3}}\) defined in (3).
Similarly, for non-metricity we have
\[Q_{\lambda\mu\nu}=\hat{\pi}_{\lambda\mu\nu}+\bar{\pi}_{\lambda\mu\nu}+\hat{q}_{ \lambda\mu\nu}+\bar{q}_{\lambda\mu\nu}, \tag{11}\]
where
\[\hat{\pi}_{\lambda\mu\nu} = Q_{(\lambda\mu\nu)}-\bar{\pi}_{\lambda\mu\nu},\qquad\bar{\pi}_{ \lambda\mu\nu}=\frac{1}{n+2}g_{(\lambda\mu}\rho_{\nu)}, \tag{12a}\] \[\hat{q}_{\lambda\mu\nu} = Q_{\lambda\mu\nu}-\pi_{\lambda\mu\nu}-\bar{q}_{\lambda\mu\nu}, \qquad\bar{q}_{\lambda\mu\nu}=\frac{2}{3(n-1)}\left(g_{\lambda(\mu}u_{\nu)}-g_ {\mu\nu}u_{\lambda}\right),\] (12b) \[u_{\mu} = \check{Q}_{\mu}-Q_{\mu},\qquad\rho_{\mu}=2\check{Q}_{\mu}+Q_{\mu}. \tag{12c}\]
Using the defining eqs. (2), equations (10) and (12) tell us how to express the irreducible pieces in terms of the metric and the affine connection. The other way around, eqs. (11) and (12) tell us how to express the affine connection in terms of the metric and the irreducible pieces using eqs. (7), (8), and (9).
Since we will work with many fields, we find it befitting to use multi-field notation. Let us introduce two objects, \(O\) and \(A\), with components \(O^{N}_{\lambda\mu\nu}\) and \(A^{I}_{\mu}\), respectively. They are given by
\[O_{\lambda\mu\nu} = \left\{H_{\lambda\mu\nu},t_{\lambda\mu\nu},\pi_{\lambda\mu\nu},q_ {\lambda\mu\nu}\right\}, \tag{13a}\] \[A_{\mu} = \left\{T_{\mu},\rho_{\mu},u_{\mu}\right\}. \tag{13b}\]
Einstein's summation convention will also be adopted for indices \(M,N,...\), which take values in \(\left\{\mathit{1},\mathit{2},\mathit{3},\mathit{4}\right\}\), and for indices \(I,J,...\), which take values in the subset \(\left\{\mathit{2},\mathit{3},\mathit{4}\right\}\).5 We can lower/raise these indices with the reference metrics \(\delta_{MN}\) and \(\delta_{IJ}\), respectively. As above, whenever the capital indices are omitted, the objects should be understood as column vectors in Euclidean space. Finally, the term Alternative Framework (AF) will be coined for the formulation of a MAG theory in terms of the set \(\left\{g,\hat{O}^{N},A^{I}\right\}\) of field variables.
Footnote 5: Note the use of slanted numerals for the value of an internal index as opposed to \(\mu,\nu,...=0,..n-1\).
With all the necessary ingredients at our disposal, let us consider a general \(n\)-dimensional MAG action in the FF, say
\[I[g,\Gamma]=\int\sqrt{-\mathsf{g}}d^{n}x\mathcal{L}, \tag{14}\]
where \(\mathsf{g}\equiv\det g\). We let it vary in order to get
\[\delta I=\int\sqrt{-\mathsf{g}}d^{n}x\left(E_{\mu\nu}\delta g^{\mu\nu}+ \Delta_{\lambda}{}^{\mu\nu}\delta\Gamma^{\lambda}_{\mu\nu}\right)+\text{s.t.}, \tag{15}\]
where s.t. denotes the surface terms arising from integrating by parts. We have also abbreviated the functional derivatives as
\[E_{\mu\nu}=\frac{1}{\sqrt{-\mathsf{g}}}\frac{\delta I}{\delta g^{\mu\nu}}, \qquad\Delta_{\lambda}{}^{\mu\nu}=\frac{1}{\sqrt{-\mathsf{g}}}\frac{\delta I}{ \delta\Gamma^{\lambda}_{\mu\nu}} \tag{16}\]
The field equations read
\[E_{\mu\nu}=0,\qquad\Delta_{\lambda}{}^{\mu\nu}=0, \tag{11}\]
with \(E_{\mu\nu}\) being a symmetric tensor.6
Footnote 6: The delicate issue of surface-term handling is out of the scope of this paper. We rather assume that one has by all means ensured that the variational problem is well-posed.
On the other hand, considering eqs. (8), (9), (20), and (10), we can write the previous action in the AF, namely
\[I\left[g,\hat{O}^{N},A^{I}\right]=\int\sqrt{-\mathsf{g}}d^{n}x\mathcal{L}. \tag{12}\]
Letting \(I\) vary we get
\[\delta I=\int\sqrt{-\mathsf{g}}d^{n}x\left(\hat{E}_{\mu\nu}\delta g^{\mu\nu}+ \hat{\mathcal{O}}^{\lambda\mu\nu}_{N}\delta\hat{O}^{N}_{\lambda\mu\nu}+ \mathcal{A}^{\mu}_{I}\delta A^{I}_{\mu}\right) \tag{13}\]
plus surface terms where \(\hat{\mathcal{O}}_{N}\) and \(\hat{O}_{N}\) belong to the same irreducible tensor subspace as Lorentz tensors. The field equations read
\[\hat{E}_{\mu\nu}=0,\qquad\hat{\mathcal{O}}^{\lambda\mu\nu}_{N}=0,\qquad \mathcal{A}^{\mu}_{I}=0, \tag{14}\]
where \(\hat{E}_{\mu\nu}\) is a symmetric tensor. Observe that the traceless property of \(\hat{O}^{N}_{\lambda\mu\nu}\) must be preserved when the action is varied. This condition can be enforced with a Lagrange multiplier. The result is equivalent to simply demanding that the functional derivative with respect to \(\hat{O}^{N}_{\lambda\mu\nu}\), \(\hat{\mathcal{O}}^{\lambda\mu\nu}_{N}\), must be traceless.
With the above in hand, we turn our attention to finding the identities relating the functional derivatives \(\{\hat{E},\hat{\mathcal{O}}^{N},\mathcal{A}^{I}\}\) to \(\{E,\Delta\}\). These identities will arise via identifications. Expressing eq. (10) in terms of the AF variables, one finds that
\[\hat{\mathcal{O}}^{\lambda\mu\nu}_{I} = -\frac{1}{2}\Delta^{[\lambda\mu\nu]},\qquad\hat{\mathcal{O}}^{ \lambda\mu\nu}_{\hat{Z}}=\hat{D}^{[\mu\nu]\lambda}, \tag{15a}\] \[\hat{\mathcal{O}}^{\lambda\mu\nu}_{\hat{S}} = \frac{1}{2}\hat{\Delta}^{(\lambda\mu\nu)},\qquad\hat{\mathcal{O}} ^{\lambda\mu\nu}_{\hat{J}}=-\hat{D}^{\lambda(\mu\nu)},\] (15b) \[\mathcal{A}^{\mu}_{\hat{Z}} = \frac{1}{n-1}\left(\Delta^{\mu\lambda}{}_{\lambda}-\Delta_{ \lambda}{}^{\mu\lambda}\right),\] (15c) \[\mathcal{A}^{\mu}_{\hat{S}} = \frac{1}{6(n+2)}\left(\Delta_{\lambda}{}^{\lambda\mu}+\Delta_{ \lambda}{}^{\mu\lambda}+\Delta^{\mu\lambda}{}_{\lambda}\right),\] (15d) \[\mathcal{A}^{\mu}_{\hat{J}} = \frac{1}{3(n-1)}\left(2\Delta^{\mu\lambda}{}_{\lambda}-\Delta_{ \lambda}{}^{\lambda\mu}-\Delta_{\lambda}{}^{\mu\lambda}\right), \tag{15e}\]
where \(\hat{D}\) and \(\hat{\Delta}\) are given in eqs. (11) of the appendix. Finally, we also have
\[\hat{E}_{\mu\nu} = E_{\mu\nu}-\frac{\Delta^{\alpha}{}_{(\mu\nu)}-\delta^{\alpha}_{( \mu}\Delta_{\nu)}{}_{\beta}{}^{\beta}}{n-1}\left[\frac{n-1}{6(n+2)}\rho_{ \alpha}+T_{\alpha}+\frac{2}{3}u_{\alpha}\right]+\frac{2\Delta^{[\alpha\beta]} {}_{\beta}{}_{\beta}{}_{\gamma}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta} {}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{} _{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{\delta}{}_{ \delta}{}_{\delta}{}_{\delta}{}_{\delta}{}_{\delta}{}
where we used the identities
\[\dot{t}_{[\mu\nu]\lambda}=-\frac{1}{2}\dot{t}_{\lambda\mu\nu},\qquad\dot{q}_{( \mu\nu)\lambda}=-\frac{1}{2}\dot{q}_{\lambda\mu\nu}. \tag{29}\]
If we let the fields \(\hat{O}^{N}\) and \(A^{I}\) on the right hand side of eq. (28) denote expressions involving the metric and the connection symbols (see eqs. (19) and (20)), the above simply gives us \(\hat{E}\) in terms of the FF quantities.
At this stage, we find it useful to display the "inverted form" of eqs. (27) by expressing \(\Delta\) in terms of \(\hat{\mathcal{O}}_{N}\) and \(\mathcal{A}_{I}\). Using the identity
\[\hat{D}_{[\mu\nu]\lambda}=\hat{D}_{[\mu|\lambda|\nu]}-\hat{D}_{\lambda[\mu\nu]}, \tag{30}\]
we directly obtain
\[\hat{\Delta}^{[\lambda\mu\nu]} = -2\hat{\mathcal{O}}_{I}^{\lambda\mu\nu},\qquad\hat{\Delta}^{( \lambda\mu\nu)}=2\hat{\mathcal{O}}_{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
Lastly, having set up the new framework, we find it useful to report an interesting correspondence. There exist certain linear connection transformations in the FF which amount to translations of only one irreducible piece at a time (preserving the rest) in the AF. Before disclosing them, let us bring yet another pair of multi-fields to our aid, \(o\) and \(a\), with components \(o^{N}_{\lambda\mu\nu}\) and \(a^{I}_{\mu}\), respectively. Note that \(\hat{\sigma}^{N}\) and \(\hat{O}^{N}\) belong to the same irreducible tensor subspace as Lorentz tensors. After some straightforward algebra we arrive at a 1:1 correspondence between the translations
\[\hat{O}^{\prime N}=\hat{O}^{N}+\hat{\sigma}^{N},\qquad A^{\prime I}=A^{I}+a^{I}, \tag{3.21}\]
in the AF (space-time indices understood, thus omitted) and the linear connection transformations \(\Gamma^{\prime\lambda}_{\mu\nu}=\Gamma^{\lambda}_{\mu\nu}+(\delta\Gamma)^{ \lambda}_{\mu\nu}\) with
\[(\delta\Gamma)_{\lambda\mu\nu} = -\frac{1}{2}\hat{\sigma}^{I}_{\lambda\mu\nu},\qquad(\delta\Gamma) _{\lambda\mu\nu}=-\hat{\sigma}^{\underline{\sigma}}_{\nu\mu\lambda}, \tag{3.22a}\] \[(\delta\Gamma)_{\lambda\mu\nu} = \frac{1}{2}\hat{\sigma}^{\underline{\sigma}}_{\lambda\mu\nu}, \qquad(\delta\Gamma)_{\lambda\mu\nu}=-\hat{\sigma}^{\underline{\sigma}}_{ \lambda\mu\nu},\] (3.22b) \[(\delta\Gamma)_{\lambda\mu\nu} = \frac{2}{n-1}g_{\nu[\mu}a^{2}_{\lambda]},\qquad(\delta\Gamma)_{ \lambda\mu\nu}=\frac{1}{2(n+2)}a^{3}_{(\lambda}g_{\mu\nu)},\] (3.22c) \[(\delta\Gamma)_{\lambda\mu\nu} = \frac{2}{3(n-1)}\left(g_{\mu\nu}a^{4}_{\lambda}-g_{\lambda(\mu}a^ {4}_{\nu)}\right), \tag{3.22d}\]
in the FF.
We also report that under a local Weyl re-scaling of the metric, \(g^{\prime}_{\mu\nu}=\mathrm{e}^{-2\phi(x)}g_{\mu\nu}\), the tensor fields \(\hat{O}^{N}_{\lambda\mu\nu}\) must have conformal weight \(-2\), and thus, transform as the metric, whereas
\[T^{\prime}_{\mu}=T_{\mu},\qquad\rho^{\prime}_{\mu}=\rho_{\mu}+2(n+2)\partial_{ \mu}\phi,\qquad u^{\prime}_{\mu}=u_{\mu}-2(n-1)\partial_{\mu}\phi. \tag{3.23}\]
Clearly, the combination \(\rho_{\mu}+(n+2)u_{\mu}/(n-1)\) is itself a Weyl invariant. It corresponds to \(3(n\tilde{Q}_{\mu}-Q_{\mu})/(n-1)\) in the FF. We shall now proceed with a highly pedagogical example.
## 4 Revisiting the Hilbert-Palatini action
### FF vs. AF
The \(n\)-dimensional Hilbert-Palatini (HP) action reads
\[I_{HP}=\frac{1}{2}\int\sqrt{-\mathsf{g}}d^{n}xR, \tag{4.1}\]
in units \(\hbar=c=M_{Pl}=1\) where \(M_{Pl}\) is the reduced Planck mass. This is the standard FF action which is invariant under the so-called _projective_ transformation
\[\Gamma^{\prime\lambda}_{\ \mu\nu}=\Gamma^{\lambda}_{\mu\nu}+\frac{1}{n-1} \delta^{\lambda}_{\mu}\xi_{\nu}, \tag{4.2}\]
with \(\xi_{\mu}\) being an arbitrary vector field.
The field equations in the FF read
\[2E_{\mu\nu} \equiv R_{(\mu\nu)}-\frac{1}{2}Rg_{\mu\nu}=0, \tag{37a}\] \[2\Delta_{\lambda}{}^{\mu\nu} \equiv \delta^{\nu}_{\lambda}N^{\mu\alpha}{}_{\alpha}-N^{\mu\nu}{}_{ \lambda}-N^{\nu}{}_{\lambda}{}^{\mu}+N^{\alpha}{}_{\lambda\alpha}g^{\mu\nu}=0. \tag{37b}\]
We chose to express the connection field equations in terms of the distortion tensor in order to achieve a more compact output. The invariance of eqs. (37) under (36) can be easily seen from the fact that
\[R^{\prime}_{\mu\nu}=R_{\mu\nu}+\frac{2}{n-1}\partial_{[\mu}\xi_{\nu]},\qquad \Delta_{\lambda}{}^{\lambda\mu}\equiv 0, \tag{38}\]
the right one holding true identically (off-shell).
It is a well-known fact that the solution to \(\Delta_{\lambda}{}^{\mu\nu}=0\) is the affine connection
\[\Gamma^{\lambda}_{\mu\nu}=\tilde{\Gamma}^{\lambda}_{\mu\nu}+\delta^{\lambda} _{\mu}V_{\nu}, \tag{39}\]
where \(V_{\mu}\) is some undetermined vector field. Since
\[\Gamma^{\lambda}_{\mu\nu}=\tilde{\Gamma}^{\lambda}_{\mu\nu}+\delta^{\lambda} _{\mu}\left(V_{\nu}+\frac{1}{n-1}\xi_{\nu}\right) \tag{40}\]
is also a solution, we conclude that the affine connection solving the connection field equations is just the Levi-Civita connection up to the choice of gauge. The effective form of the metric field equations becomes
\[\tilde{R}_{\mu\nu}=\frac{1}{2}\tilde{R}g_{\mu\nu}, \tag{41}\]
i.e., the HP action is effectively Einstein gravity.
On the other hand, in the AF, whenever we write \(R\) we just mean the expression
\[\tilde{R}+R_{T}+R_{V}+\tilde{\nabla}_{\mu}\left(2T^{\mu}+u^{\mu}\right), \tag{42}\]
where \(\tilde{R}\) is the Riemannian Ricci scalar and
\[R_{T} = -\frac{1}{4}H^{2}-\frac{1}{4}\dot{\pi}^{2}+\frac{1}{2}\dot{q}_{ \lambda\mu\nu}\dot{q}^{\lambda\mu\nu}+\frac{1}{2}\dot{t}_{\lambda\mu\nu}\dot{ t}^{\lambda\mu\nu}+\dot{q}^{\lambda\nu\mu}\dot{t}_{\nu\mu\lambda}, \tag{43a}\] \[R_{V} = \frac{n-1}{36(n+2)}\rho^{2}-\frac{n-2}{n-1}T^{2}+\frac{5-2n}{9(n -1)}u^{2}+\frac{1}{18}\rho_{\mu}u^{\mu}-\frac{n-2}{n-1}T_{\mu}u^{\mu}. \tag{43b}\]
Therefore, up to surface terms, our AF action reads
\[I_{HP}=\frac{1}{2}\int\sqrt{-\mathsf{g}}d^{n}x\left(\tilde{R}+R_{T}+R_{V} \right). \tag{44}\]
The analogue of a projective transformation in the AF is comprised of the simultaneous translations
\[A^{\prime I}=A^{I}+a^{I-1}, \tag{45}\]
with
\[\xi_{\mu}\equiv a^{I}_{\mu}=\frac{n-1}{2(n+2)}a^{g}_{\mu}=-\frac{1}{2}a^{g}_{ \mu}. \tag{46}\]
One can easily verify that the above transformations should only affect \(R_{V}\). Since it happens that \(R_{V}\) is invariant, the transformations (4.11) constitute a symmetry of the full action (4.10).
Now, there are two equivalent ways to proceed as we have shown in the previous section. We can either use eqs. (4.3) to reconstruct the field equations in the AF, or we can directly vary the integral (4.10) with respect to the AF field variables (quickest strategy). Both methods lead to the same result, namely the field equations
\[\mathring{\mathcal{O}}_{\,I}^{\lambda\mu\nu} \equiv -\frac{1}{4}H^{\lambda\mu\nu}=0,\qquad\mathring{\mathcal{O}}_{\,Z }^{\lambda\mu\nu}\equiv\frac{1}{2}\left(\mathring{t}^{\lambda\mu\nu}-\mathring {q}^{[\mu\nu]\lambda}\right)=0, \tag{4.13a}\] \[\mathring{\mathcal{O}}_{\,3}^{\lambda\mu\nu} \equiv -\frac{1}{4}\mathring{\pi}^{\lambda\mu\nu}=0,\qquad\mathring{ \mathcal{O}}_{\,4}^{\lambda\mu\nu}\equiv\mathring{\mathcal{O}}_{\,Z}^{(\mu \nu)\lambda}+\frac{1}{8}\mathring{q}^{\lambda\mu\nu}=0,\] (4.13b) \[\mathcal{A}_{\,2}^{\mu} \equiv -\frac{n-2}{n-1}\left(T^{\mu}+\frac{1}{2}u^{\mu}\right)=0,\] (4.13c) \[\mathcal{A}_{\,3}^{\mu} \equiv \frac{n-1}{36(n+2)}\left(\rho^{\mu}+\frac{n+2}{n-1}u^{\mu}\right)=0,\] (4.13d) \[\mathcal{A}_{\,4}^{\mu} \equiv \frac{1}{2}\mathcal{A}_{\,2}^{\mu}+\frac{n+2}{n-1}\mathcal{A}_{\, 3}^{\mu}=0, \tag{4.13e}\]
and
\[2\hat{E}_{\mu\nu} \equiv \tilde{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\left(\tilde{R}+R_{T}+R_{ V}\right)-\frac{3}{4}\left(H_{\mu\alpha\beta}H_{\nu}{}^{\alpha\beta}+\mathring{ \pi}_{\mu\alpha\beta}\mathring{\pi}_{\nu}{}^{\alpha\beta}\right)+ \tag{4.14}\] \[+\mathring{q}_{\alpha\beta\mu}\mathring{q}^{\alpha\beta}{}_{\nu} +\mathring{t}_{\alpha\beta\mu}\mathring{t}^{\alpha\beta}{}_{\nu}+\frac{1}{2} \left(\mathring{q}_{\mu\alpha\beta}\mathring{q}_{\nu}{}^{\alpha\beta}+\mathring {t}_{\mu\alpha\beta}\mathring{t}_{\nu}{}^{\alpha\beta}\right)+\] \[+\mathring{t}_{\alpha\beta(\mu}\mathring{q}_{\nu)}{}^{\alpha\beta }-\mathring{q}^{\alpha\beta}{}_{(\mu}\mathring{t}_{\nu)\alpha\beta}-\mathring {t}_{\beta\alpha(\mu}\mathring{q}^{\alpha\beta}{}_{\nu)}-\] \[-\frac{n-2}{n-1}T_{\mu}T_{\nu}+\frac{n-1}{36(n+2)}\rho_{\mu}\rho _{\nu}+\frac{5-2n}{9(n-1)}u_{\mu}u_{\nu}-\] \[-\frac{n-2}{n-1}T_{(\mu}u_{\nu)}+\frac{1}{18}\rho_{(\mu}u_{\nu)}=0.\]
It is evident that the first two lines in (4.13) suggest that \(\mathring{O}_{\lambda\mu\nu}^{N}=0\). The remaining two independent equations, \(\mathcal{A}_{\,2}^{\mu}=0=\mathcal{A}_{\,3}^{\mu}\), do further imply that
\[u_{\mu}=-2T_{\mu}=-\frac{n-1}{n+2}\rho^{\mu}. \tag{4.15}\]
Hence, the full solution to the system (4.13) is
\[\mathring{O}_{\lambda\mu\nu}^{N}=0,\qquad V_{\mu}\equiv u_{\mu}=-2T_{\mu}=- \frac{n-1}{n+2}\rho^{\mu}, \tag{4.16}\]
where \(V_{\mu}\) is an arbitrary vector field. Since
\[T_{\mu}=-\frac{1}{2}V_{\mu}+\xi_{\mu},\qquad\rho_{\mu}=\frac{n+2}{n-1}\left(2 \xi_{\mu}-V_{\mu}\right),\qquad u_{\mu}=V_{\mu}-2\xi_{\mu} \tag{4.17}\]
is also a solution, we conclude that \(A_{\mu}^{I}=0\) up to the choice of gauge. The effective form of the metric field equations (4.14) becomes
\[\tilde{R}_{\mu\nu}=\frac{1}{2}\tilde{R}g_{\mu\nu}, \tag{4.18}\]
i.e., the HP action in the AF is again, in effect, Einstein gravity.
### Projective symmetry and the AF\({}^{\circ}\)
The careful reader would have already noticed that what is a projective symmetry in the FF manifests itself as a true gauge symmetry in the AF. Indeed, eqs. (4.13) reveal that there are only two independent equations for the triplet \(A_{\mu}\) which rather signals that one of these field variables is after all redundant. First, bear in mind that the combinations
\[T_{\mu}+\frac{1}{2}u_{\mu},\qquad\rho^{\mu}+\frac{n+2}{n-1}u^{\mu}, \tag{4.19}\]
which are invariant under (4.11), correspond to the FF combinations
\[\frac{1}{2}\left(\check{Q}_{\mu}-Q_{\mu}\right)+T_{\mu},\qquad\frac{3}{n-1} \left(n\check{Q}_{\mu}-Q_{\mu}\right), \tag{4.20}\]
respectively, which are invariant under (4.2).
Let us then discuss the idea that for \(n>2\), whenever the projective symmetry is at play, one should favor a doublet
\[B_{\mu}=\left\{\frac{2}{3}\left(T_{\mu}+\frac{1}{2}u_{\mu}\right),\frac{n-1}{ 9\sqrt{n^{2}-4}}\left(\rho^{\mu}+\frac{n+2}{n-1}u^{\mu}\right)\right\} \tag{4.21}\]
over the redundant triplet \(A_{\mu}\). Note that the above choice of \(B^{A}\), where indices \(A,B,...\) assume values in \(\{\,\{\,1,\,2\}\), is not the most general one. Nevertheless, it is the most convenient choice for our purposes here since it casts \(R_{V}\) into the neat form
\[\mathcal{R}_{V}=\frac{9(n-2)}{4(n-1)}\eta_{AB}B_{\mu}^{A}B_{\nu}^{B}g^{\mu\nu}, \tag{4.22}\]
where \(\eta_{AB}\) are the components of the two-dimensional Minkowski metric \(\eta^{(2)}=\text{diag}(-1,1)\).
The affected parts of (4.13) read
\[\mathcal{A}_{2}^{\mu}\equiv-\frac{3(n-2)}{2(n-1)}B_{I}^{\mu}=0,\qquad\mathcal{ A}_{3}^{\mu}\equiv\frac{1}{4}\sqrt{\frac{n-2}{n+2}}B_{2}^{\mu}=0, \tag{4.23}\]
whereas the metric field equations (4.14) are rendered into
\[2\hat{E}_{\mu\nu} \equiv \tilde{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\left(\tilde{R}+R_{T}+ \mathcal{R}_{V}\right)-\frac{3}{4}\left(H_{\mu\alpha\beta}H_{\nu}{}^{\alpha \beta}+\mathring{\pi}_{\mu\alpha\beta}\mathring{\pi}_{\nu}{}^{\alpha\beta} \right)+ \tag{4.24}\] \[+\mathring{q}_{\alpha\beta\mu}\mathring{q}^{\alpha\beta}{}_{\nu} +\hat{t}_{\alpha\beta\mu}\mathring{t}^{\alpha\beta}{}_{\nu}+\frac{1}{2}\left( \mathring{q}_{\mu\alpha\beta}\mathring{q}_{\nu}{}^{\alpha\beta}+\hat{t}_{\mu \alpha\beta}\hat{t}_{\nu}{}^{\alpha\beta}\right)+\] \[+\mathring{t}_{\alpha\beta(\mu}\hat{q}_{\nu)}{}^{\alpha\beta}- \mathring{q}^{\alpha\beta}{}_{(\mu}\mathring{t}_{\nu)\alpha\beta}-\mathring{t}_ {\beta\alpha(\mu}\mathring{q}^{\alpha\beta}{}_{\nu)}+\frac{9(n-2)}{4(n-1)} \eta_{AB}B_{\mu}^{A}B_{\nu}^{B}=0.\]
Interestingly, when written in terms of the \(B^{A}\) fields, the HP action and its field equations exhibit an \(SO(1,1)\) symmetry! Indeed, the field transformation \(B_{\mu}^{\prime}=\Lambda(x)B_{\mu}\) with
\[\Lambda=\begin{pmatrix}\cosh\theta(x)&\sinh\theta(x)\\ \sinh\theta(x)&\cosh\theta(x)\end{pmatrix}, \tag{4.25}\]
preserves both of them. We remark that the manifestation of this transformation as a group action on the field variables is exclusive to the use of the \(B^{A}\)'s to formulate the HP action.
Since the use of these field re-combinations revealed something new, we find it worth to take a step back and generalize the whole thing to another framework which we dub AF\({}^{\circ}\) or "diminished alternative framework". In the AF\({}^{\circ}\), we formulate our MAG theory in terms of the reduced set \(\{g,\hat{O}^{N},B^{A}\}\) of field variables. The doublet \(B_{\mu}\) should consist of two linear combinations of AF vector fields which are invariant under (4.11). Equivalently, it should be comprised of two linear combinations of \(T_{\mu},Q_{\mu},\hat{Q}_{\mu}\) which are invariant under (4.2), the point being that in the AF\({}^{\circ}\) such transformations should constitute an identity operation on our field variables. The most general combinations invariant under (4.11) are
\[B^{I}_{\mu}=\alpha_{I-1}A^{I}_{\mu},\qquad B^{\underline{\,g}}_{\mu}=\beta_{I-1 }A^{I}_{\mu}, \tag{4.26}\]
with
\[\alpha_{\,\mathcal{F}}=\frac{\alpha_{\,I}}{2}+\frac{(n+2)\alpha_{\,2}}{n-1}, \tag{4.27}\]
and ditto for the coefficients \(\beta_{A}\).
Let
\[\mathcal{B}^{\mu}_{A}=\frac{1}{\sqrt{-\mathsf{g}}}\frac{\delta I}{\delta B^{A} _{\mu}}, \tag{4.28}\]
such that the field equations for the field \(B^{A}\) are \(\mathcal{B}^{\mu}_{A}=0\). We can directly make the identifications
\[\mathcal{A}^{\mu}_{I} = \alpha_{I-1}\mathcal{B}^{\mu}_{I}+\beta_{I-1}\mathcal{B}^{\mu}_{ 2}, \tag{4.29}\]
where one has to remember that the coefficients obey the relation (4.27). Clearly, whenever \(\mathcal{B}^{\mu}_{A}=0\), it follows that \(\mathcal{A}^{\mu}_{I}=0\). However, whenever \(\mathcal{A}^{\mu}_{I}=0\), it follows that \(\mathcal{B}^{\mu}_{A}=0\) only when
\[\alpha_{\,I}\beta_{\,\mathcal{F}}-\alpha_{\,\mathcal{F}}\beta_{\,I}\neq 0. \tag{4.30}\]
Therefore, the field equations in the AF\({}^{\circ}\) imply and are, under assumptions, implied by the field equations in the AF or in the FF (if we follow the equivalence chain).
Lastly, let us see exactly how we ended up with (4.21). In terms of the fields \(B^{A}\), as defined in eq. (4.26), we have that
\[R_{V}=\left[f(\beta_{I}^{2},\beta_{\,\mathcal{F}}^{2})B^{I}_{\mu}B^{I}_{\nu}-2 f(\alpha_{\,I}\beta_{\,I},\alpha_{\,2}\beta_{\,\mathcal{F}})B^{I}_{\mu}B^{ \underline{\,g}}_{\nu}+f(\alpha_{\,I}^{2},\alpha_{\,\mathcal{F}}^{2})B^{ \underline{\,g}}_{\mu}B^{\underline{\,g}}_{\nu}\right]g^{\mu\nu}, \tag{4.31}\]
where
\[f(x,y):=\frac{(n-1)^{2}x-36(n^{2}-4)y}{36(n^{2}+n-2)(\alpha_{\,2}\beta_{\,I}- \alpha_{\,I}\beta_{\,\mathcal{F}})^{2}}. \tag{4.32}\]
Moreover, the total divergence in (4.8) assumes the form
\[\frac{2}{\alpha_{\,\mathcal{F}}\beta_{\,I}-\alpha_{\,I}\beta_{\,\mathcal{F}}} \tilde{\nabla}_{\mu}\left(\alpha_{\,2}B^{\mu}_{\,\mathcal{F}}-\beta_{\,2}B^{ \mu}_{\,I}\right). \tag{4.33}\]
To get the above, we expressed \(\alpha_{\,3}\) in terms of \(\alpha_{\,I},\alpha_{\,\mathcal{F}}\) via eq. (4.27), and ditto for the parameters \(\beta_{A}\). Different choices for the parameters \(\alpha_{A},\beta_{A}\) obviously amount to different changes of field variables.
A convenient choice is one for which \(f(\alpha_{I}\beta_{I},\alpha_{\beta}\beta_{2})=0\), namely
\[\beta_{2}=\frac{(n-1)^{2}\alpha_{I}\beta_{I}}{36(n^{2}-4)\alpha_{2}}, \tag{4.34}\]
provided \(n>2\), which yields
\[R_{V}=\frac{(n-2)(n-1)}{(n-1)^{2}\alpha_{I}^{2}-36(n^{2}-4)\alpha_{2}^{2}} \left[-B_{\mu}^{I}B_{\nu}^{I}+\frac{36\alpha_{2}^{2}(n^{2}-4)}{\beta_{I}^{2}( n-1)^{2}}B_{\mu}^{2}B_{\nu}^{2}\right]g^{\mu\nu}. \tag{4.35}\]
Further imposing that
\[\beta_{I}=\frac{6|\alpha_{2}|\sqrt{n^{2}-4}}{n-1}, \tag{4.36}\]
gives
\[R_{V}=\frac{(n-2)(n-1)}{(n-1)^{2}\alpha_{I}^{2}-36(n^{2}-4)\alpha_{2}^{2}} \eta_{AB}B_{\mu}^{A}B_{\nu}^{B}g^{\mu\nu}. \tag{4.37}\]
Finally, we choose
\[\alpha_{2}=\frac{n-1}{18}\sqrt{\frac{9\alpha_{I}^{2}-4}{n^{2}-4}}, \tag{4.38}\]
for later convenience, which leads to
\[R_{V}=\frac{9(n-2)}{4(n-1)}\eta_{AB}B_{\mu}^{A}B_{\nu}^{B}g^{\mu\nu}=:\mathcal{ R}_{V}. \tag{4.39}\]
Note that all of the above parameter choices are in agreement with (4.30) which becomes
\[\frac{2(n-1)}{27\sqrt{n^{2}-4}}\neq 0. \tag{4.40}\]
In terms of the AF fields, our new field variables, \(B^{A}\), read
\[B_{\mu}^{I} = \alpha_{I}T_{\mu}+\frac{n-1}{18}\sqrt{\frac{9\alpha_{I}^{2}-4}{n ^{2}-4}}\rho_{\mu}+\frac{1}{18}\left(9\alpha_{I}+\sqrt{\frac{(9\alpha_{I}^{2} -4)(n+2)}{n-2}}\right)u_{\mu}, \tag{4.41a}\] \[B_{\mu}^{2} = \frac{\sqrt{9\alpha_{I}^{2}-4}}{3}T_{\mu}+\frac{(n-1)\alpha_{I}} {6\sqrt{n^{2}-4}}\rho_{\mu}+\frac{1}{6}\left(\sqrt{\frac{n+2}{n-2}}\alpha_{I} +\sqrt{9\alpha_{I}^{2}-4}\right)u_{\mu}, \tag{4.41b}\]
where we may further fix \(|\alpha_{I}|=2/3\) so that \(B^{2}\) is purely a combination of traces of the non-metricity tensor. This brings us to (4.21). Henceforth, the word AF\({}^{\circ}\) will always mean that we use the specific doublet (4.21).
### \(So(1,2)\) symmetry, torsion/non-metricity rotations, and the d-AF\({}^{\circ}\)
Now, we restrict ourselves to \(n=4\) space-time dimensions where things get a bit more interesting. Via the dualization (2.4), we have a pseudo-vector, and the term \(\propto H^{2}\) can be moved from \(R_{T}\) to \(R_{V}\). In particular, let us introduce the objects
\[\hat{R}_{T} = -\frac{1}{4}\hat{\pi}^{2}+\frac{1}{2}\hat{q}_{\lambda\nu\mu}\hat{ q}^{\lambda\mu\nu}+\frac{1}{2}\hat{t}_{\lambda\mu\nu}\hat{t}^{\lambda\mu\nu}+ \hat{q}^{\lambda\nu\mu}\hat{t}_{\nu\mu\lambda}, \tag{4.42a}\] \[\hat{\mathcal{R}}_{V} = \frac{3}{2}\eta_{\mathcal{A}\mathcal{B}}B_{\mu}^{\mathcal{A}}B_{ \nu}^{\mathcal{B}}g^{\mu\nu}. \tag{4.42b}\]
where the calligraphic indices take values in \(\{1,\,2,\,3\}\), \(\eta_{\mathcal{A}\mathcal{B}}\) are the components of the three-dimensional Minkowski metric, \(\eta^{(3)}=\text{diag}(-1,1,1)\), and we have formed a triplet
\[B_{\mu}=\left\{B_{\mu}^{I},B_{\mu}^{\,2},S_{\mu}\right\}, \tag{101}\]
with \(B^{A}\) given by (119).
The affected parts of (100) read
\[\mathcal{A}_{2}^{\mu}\equiv-\frac{3(n-2)}{2(n-1)}B_{\,I}^{\mu}=0,\qquad \mathcal{A}_{3}^{\mu}\equiv\frac{1}{4}\sqrt{\frac{n-2}{n+2}}B_{\,Z}^{\mu}=0, \tag{102}\]
and
\[\hat{\mathcal{O}}_{\,I}^{\lambda\mu\nu}\equiv\frac{1}{4}\tilde{\epsilon}^{ \lambda\mu\nu\alpha}S_{\alpha}=0, \tag{103}\]
whereas the metric field equations (100) are rendered into
\[2\hat{E}_{\mu\nu} \equiv \tilde{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\left(\tilde{R}+\hat{R}_ {T}+\hat{\mathcal{R}}_{V}\right)-\frac{3}{4}\hat{\pi}_{\mu\alpha\beta}\hat{ \pi}_{\nu}{}^{\alpha\beta}+ \tag{104}\] \[+\hat{q}_{\alpha\beta\mu}\hat{q}{}^{\alpha\beta}{}_{\nu}+\hat{t} _{\alpha\beta\mu}\hat{t}{}^{\alpha\beta}{}_{\nu}+\frac{1}{2}\left(\hat{q}_{ \mu\alpha\beta}\hat{q}_{\nu}{}^{\alpha\beta}+\hat{t}_{\mu\alpha\beta}\hat{t} _{\nu}{}^{\alpha\beta}\right)+\] \[+\hat{t}_{\alpha\beta(\mu}\hat{q}_{\nu)}{}^{\alpha\beta}-\hat{q}{ }^{\alpha\beta}{}_{(\mu}\hat{t}_{\nu)\alpha\beta}-\hat{t}_{\beta\alpha(\mu} \hat{q}{}^{\alpha\beta}{}_{\nu)}+\frac{3}{2}\eta_{\mathcal{A}\mathcal{B}}B_{ \mu}^{\mathcal{A}}B_{\nu}^{\mathcal{B}}=0.\]
Remarkably, when written in terms of the triplet (101), the four-dimensional HP action and its field equations exhibit a larger symmetry under an \(SO(1,2)\) group action mixing the components \(B_{\mu}^{\mathcal{A}}\).
Of particular interest is the transformation \(B_{\mu}^{\prime}=\Lambda(x)B_{\mu}\) with
\[\Lambda=\begin{pmatrix}1&\\ &\cos\theta(x)&\sin\theta(x)\\ &-\sin\theta(x)&\cos\theta(x)\end{pmatrix}, \tag{105}\]
which represents an \(SO(2)\) rotation in the \(\{B_{\mu}^{\,2},S_{\mu}\}\) (field) subspace. Specifically, the invariance of the action under the discrete transformation with \(\Lambda(\theta=\pi/2)\), constitutes an exceptional example of a symmetry under _torsion/non-metricity rotations._ Indeed, omitting the space-time indices, we have
\[\begin{pmatrix}B^{\,I}\\ B^{\,2}\\ S\end{pmatrix}\rightarrow\begin{pmatrix}B^{\,I}\\ S\\ -B^{\,2}\end{pmatrix}, \tag{106}\]
where \(S_{\mu}\) is pure torsion (pseudo-vector) and \(B_{\mu}^{\,2}\) is pure non-metricity (traces), as defined in (4) and (119), respectively. Do further note that if we consider \(B_{\mu}^{\,2}\) and \(S_{\mu}\) as, respectively, the real and imaginary parts of a complex vector
\[\tau_{\mu}=B_{\mu}^{\,2}+iS_{\mu}, \tag{107}\]
we have that
\[\hat{\mathcal{R}}_{V}=-\frac{3}{2}\left(B_{\mu}^{\,I}B_{\nu}^{\,I}-\tau_{\mu} \tau_{\nu}^{\ast}\right)g^{\mu\nu}, \tag{108}\]
where \(\tau^{*}\) is the complex conjugate of \(\tau\). The previous \(SO(2)\) symmetry now manifests itself as a \(U(1)\) under \(\tau^{\prime}_{\mu}=\mathrm{e}^{-i\theta(x)}\tau_{\mu}\).
As a closing remark, let us mention here that in what follows we will often use the torsion pseudo-vector \(S_{\mu}\) instead of \(H_{\lambda\mu\nu}\) as a field variable in our four-dimensional projective-invariant examples. In other words, we will be using the set \(\{g,\hat{O}^{I},B^{\mathcal{A}}\}\) when invoking the \(\mathrm{AF}^{\circ}\), and some clarifications are in order. Letting an action
\[I=\int\sqrt{-\mathsf{g}}d^{4}x\mathcal{L}[g,\hat{O}^{I},\mathcal{B}^{\mathcal{ A}}] \tag{62}\]
vary, we get
\[\delta I=\int\sqrt{-\mathsf{g}}d^{4}x\left(\check{E}_{\mu\nu}\delta g^{\mu\nu }+\hat{O}^{\lambda\mu\nu}_{I}\delta\hat{O}^{I}_{\lambda\mu\nu}+\mathcal{B}^{ \mu}_{\mathcal{A}}\delta B^{\mathcal{A}}_{\mu}\right)+\mathrm{s.t.}. \tag{63}\]
Therefore, the field equations read
\[\check{E}_{\mu\nu}=0,\qquad\hat{\mathcal{O}}^{\lambda\mu\nu}_{I}=0,\qquad \mathcal{B}^{\mu}_{\mathcal{A}}=0, \tag{64}\]
where \(\check{E}_{\mu\nu}\) is a symmetric tensor. Following the preceded steps, one should be able to directly make the identifications
\[\mathcal{B}^{\mu}_{I} = \frac{3}{2}\mathcal{A}^{\mu}_{2},\qquad\mathcal{B}^{\mu}_{2}=6 \sqrt{3}\mathcal{A}^{\mu}_{3},\qquad\mathcal{B}^{\mu}_{3}=\hat{\mathcal{O}}^{I }_{\alpha\beta\gamma}\check{c}^{\mu\alpha\beta\gamma}, \tag{65a}\] \[\mathcal{A}^{\mu}_{4} = \frac{1}{3}\left(\mathcal{B}^{\mu}_{I}+\frac{1}{\sqrt{3}} \mathcal{B}^{\mu}_{2}\right),\qquad\check{E}_{\mu\nu}=\hat{E}_{\mu\nu}+ \mathcal{B}^{3}_{(\mu}S_{\nu)}-\frac{1}{2}g_{\mu\nu}S_{\alpha}\mathcal{B}^{ \alpha}_{\beta}, \tag{65b}\]
which tell us how the functional derivatives in the two frameworks are related. These do once again suffice to prove the equivalence between the field equations in the AF and in this framework which, for the sake of clarity and in lack of a better name, we call \(\mathrm{d}\)-\(\mathrm{AF}^{\circ}\), with the \(\mathrm{d}\) reminding us that we use the dual of \(H_{\lambda\mu\nu}\).
## 5 Exciting the connection: a series of examples
It has been standard practice in the MAG community to motivate actions from a geometric perspective and with full generality in mind. Although this is in general good practice, it often leads to an intractable set of field equations; in the end one will unavoidably sacrifice generality to get results. In this section we propose an different motivation for MAG models. Using the alternative frameworks we presented, we write meaningful field theories propagating some of the new field variables. These are, in essence, MAG theories propagating certain connection DoF in a tractable and controllable manner.
These MAG theories are inspired by Riemannian theories with additional fields. In fact, they yield an effective7 set of field equations which is very much identical to the corresponding system of equations in the Riemannian case, the crucial difference being that instead of additional fields we use specific modes of the distortion tensor. The obvious
advantage is that for a given symmetry ansatz, we exactly know how to solve the differential equations that arise. Hence, one should not expect metric solutions novel in form; the only novelty is that these known metric backgrounds are now part of a larger solution with torsion and non-metricity. In what follows, we will occasionally omit space-time indices (or internal indices) when they are trivially understood.
### The MAGswell theory
The action for a Maxwell field \(A_{\mu}\) coupled to four-dimensional gravity with a cosmological constant is
\[\tilde{I}_{EM}=\frac{1}{2}\int\sqrt{-\mathsf{g}}d^{4}x\left(\tilde{R}-2\Lambda -\frac{1}{2}F^{2}\right), \tag{38}\]
where \(F_{\mu\nu}=2\partial_{[\mu}A_{\nu]}\) is the field-strength tensor. The above integral is invariant under shifts \(A^{\prime}=A+d\theta\), where \(\theta(x)\) is some scalar potential. Variation with respect to the metric yields the metric field equations
\[\tilde{G}_{\mu\nu}+\Lambda g_{\mu\nu} = F_{\mu}^{\ \alpha}F_{\nu\alpha}-\frac{1}{4}F^{2}g_{\mu\nu}, \tag{39}\]
where \(\tilde{G}_{\mu\nu}=\tilde{R}_{\mu\nu}-\frac{1}{2}\tilde{R}g_{\mu\nu}\) is the Einstein tensor. The Maxwell equations and the Bianchi identity can be written as
\[\partial_{\nu}\left(\sqrt{-g}F^{\nu\mu}\right)=0,\qquad\partial_{\nu}\left( \sqrt{-g}\ast F^{\nu\mu}\right)=0, \tag{40}\]
respectively, where \(\ast F^{\mu\nu}=\frac{1}{2}\bar{\epsilon}^{\mu\nu\rho\lambda}F_{\rho\lambda}\) is the Hodge dual of the field-strength tensor.
Here, we wish to write down a more or less similar theory for a massless vector in MAG, the homophonous _MAGswell field_, \(C_{\mu}\), as we may playfully dub it. We will do so by considering the Ricci scalar as our cornerstone and adding proper terms to it. Since the HP action is invariant under projective transformations we would like to retain this feature in the complete action. This will also allow us to work in the AF\({}^{\circ}\). We emphasize that the MAGswell field should be understood as part of the geometry of space-time, i.e., it is not the familiar gauge connection.
Without further ado, let us consider a fairly general projective-invariant candidate for the four-dimensional MAGswell action which in the d-AF\({}^{\circ}\) assumes the form
\[I_{C}=\int\sqrt{-\mathsf{g}}d^{4}x\mathcal{L}_{C}\equiv\int\sqrt{-\mathsf{g}} d^{4}x\left(\mathcal{L}_{HP}+\mathcal{L}_{ct}+\mathcal{L}_{kin}\right), \tag{41}\]
with
\[\mathcal{L}_{HP} = \frac{1}{2}\left(R-2\Lambda\right), \tag{42a}\] \[\mathcal{L}_{ct} = -\frac{3}{4}g^{\mu\nu}B_{\mu}^{A}B_{\nu}^{B}\eta_{AB},\] (42b) \[\mathcal{L}_{kin} = -\frac{1}{4}F_{(C)}^{2}. \tag{42c}\]
Here, \(F_{\mu\nu}^{(C)}=2\partial_{[\mu}C_{\nu]}\) with \(C_{\mu}\equiv\alpha_{A}B_{\mu}^{A}\) being the composite MAGswell field and \(\alpha_{A}\) dimensionless constants, i.e., real non-zero numbers. The curvature scalar \(R\) stands for
\[\tilde{R}+\hat{R}_{T}+\hat{\mathcal{R}}_{V}+3\tilde{\nabla}_{\mu}B_{I}^{\mu} \tag{43}\]
with the constituents given in (4.42). The form of the action in the AF, or the FF, can be easily obtained by remembering that
\[B_{\mu}^{\,I} = \frac{1}{3}\left(2T_{\mu}+u_{\mu}\right)=\frac{1}{3}\left(\tilde{Q} _{\mu}-Q_{\mu}+2T_{\mu}\right),\] (5.7a) \[B_{\mu}^{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
Indeed, if \(\#_{fgp}(-)\) outputs the number of free gauge parameters in a certain framework, then \(\#_{fgp}(\mathrm{FF})=\#_{fgp}(\mathrm{AF})\) in all cases. The fact that \(\#_{fgp}(\mathrm{AF}^{\circ})=\#_{fgp}(\mathrm{AF})-1\) has to do with the projective-symmetry "charge" being initially absorbed into the field variables of the \(\mathrm{AF}^{\circ}\). Having discussed the symmetries in the different frameworks, we now turn our attention to the field equations.
In the d-\(\mathrm{AF}^{\circ}\), they read
\[\mathring{\mathcal{O}}^{\lambda\mu\nu}_{\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
where \(\tilde{\alpha}\) is a real number, obtaining
\[B^{\,I}=\tilde{\alpha}\langle C\rangle,\qquad B^{\,2}=\frac{1-\tilde{\alpha} \alpha_{\,I}}{\alpha_{\,2}}\langle C\rangle. \tag{5.21}\]
Therefore,
\[\{B^{\,I},B^{\,2}\}=\begin{cases}\{0,\langle C\rangle/\alpha_{\,2}\}&\tilde{ \alpha}=0\\ \{\langle C\rangle/\alpha_{\,I},0\}&\tilde{\alpha}=1/\alpha_{\,I}\\ \{\tilde{\alpha}\langle C\rangle,(1-\tilde{\alpha}\alpha_{\,I})\langle C \rangle/\alpha_{\,2}\}&\tilde{\alpha}\neq 0,1/\alpha_{\,I}.\end{cases} \tag{5.22}\]
Solution in the AF.The next step is to translate the solution into the language of the AF. The field equations tell us that \(\dot{O}^{N}_{\lambda\mu\nu}=0\), and that eq. (5.17a) must hold true. Again, if \(\langle C_{\mu}\rangle+\partial_{\mu}\phi\) is the value of
\[C_{\mu}\equiv\frac{\alpha_{\,2}}{6\sqrt{3}}\rho_{\mu}+\frac{2\alpha_{\,I}}{3} T_{\mu}+\frac{3\alpha_{\,I}+\sqrt{3}\alpha_{\,2}}{9}u_{\mu} \tag{5.23}\]
satisfying (5.17a), then \(A^{I}\) acquires the value \(\langle A^{I}\rangle\) with
\[\langle u_{\mu}\rangle=\frac{18\left(\langle C_{\mu}\rangle+\partial_{\mu} \phi\right)-\sqrt{3}\alpha_{\,2}\langle\rho_{\mu}\rangle-12\alpha_{\,I} \langle T_{\mu}\rangle}{2(3\alpha_{\,I}+\alpha_{\,2}\sqrt{3})}. \tag{5.24}\]
Clearly, the values \(\langle A^{I}\rangle+b^{I-1}\), where the \(b^{I-1}\)'s obey eq. (5.13), are as good. Setting
\[b^{\,I}=\alpha\langle C\rangle-\langle T\rangle,\qquad b^{\,2}=\beta\langle C \rangle-\langle\rho\rangle,\qquad\theta=-\phi, \tag{5.25}\]
where \(\alpha,\beta\) are real numbers, we get
\[T=\alpha\langle C\rangle,\qquad\rho=\beta\langle C\rangle,\qquad u=\frac{18-1 2\alpha\alpha_{\,I}-\beta\alpha_{\,2}\sqrt{3}}{2(3\alpha_{\,I}+\alpha_{\,2} \sqrt{3})}\langle C\rangle. \tag{5.26}\]
We can further identify
\[\tilde{\alpha}=\frac{18+\sqrt{3}(4\alpha-\beta)\alpha_{\,2}}{6(3\alpha_{\,I} +\alpha_{\,2}\sqrt{3})}, \tag{5.27}\]
and we collect the various cases in table 1.
Solution in the FF.The final step is to translate the solution into the language of the familiar Palatini formalism, i.e., to present an affine connection which solves the connection field equations. This reads
\[\Gamma^{\lambda}_{\mu\nu}=\tilde{\Gamma}^{\lambda}_{\mu\nu}+\langle V^{ \lambda}\rangle g_{\mu\nu}-\delta^{\lambda}_{\nu}\frac{\langle C_{\mu}\rangle +\partial_{\mu}\phi-\left(\alpha_{\,I}+\alpha_{\,2}\sqrt{3}\right)\langle V_{ \mu}\rangle}{\alpha_{\,I}-\alpha_{\,2}\sqrt{3}}+\delta^{\lambda}_{\mu}\langle U _{\nu}\rangle, \tag{5.28}\]
where \(\langle C_{\mu}\rangle+\partial_{\mu}\phi\) is the value of
\[C_{\mu}\equiv\frac{3\alpha_{\,I}+2\sqrt{3}\alpha_{\,2}}{9}\tilde{Q}_{\mu}- \frac{6\alpha_{\,I}+\sqrt{3}\alpha_{\,2}}{18}Q_{\mu}+\frac{2\alpha_{\,I}}{3}T_{\mu} \tag{5.29}\]
satisfying (17a). Again, due to the freedom to shift our connection as in (144), and setting
\[a = \frac{24-(4\alpha-\beta)(\alpha_{{}_{I}}-\alpha_{{}_{Z}}\sqrt{3})}{1 2(3\alpha_{{}_{I}}+\alpha_{{}_{Z}}\sqrt{3})}\langle C\rangle-\langle V\rangle, \qquad\theta=-\phi, \tag{180a}\] \[c = \frac{(8\alpha+\beta)\alpha_{{}_{I}}^{2}-4\alpha_{{}_{I}}(3+2 \alpha\alpha_{{}_{Z}}\sqrt{3})-3\alpha_{{}_{Z}}(\beta\alpha_{{}_{Z}}-4\sqrt{3} )}{12(3\alpha_{{}_{I}}^{2}-2\alpha_{{}_{I}}\alpha_{{}_{Z}}\sqrt{3}-3\alpha_{{ }_{Z}}^{2})}\langle C\rangle-\langle U\rangle, \tag{180b}\]
we reach a connection with torsion and non-metricity
\[T^{\lambda}{}_{\mu\nu} = \frac{2\alpha}{3}\delta^{\lambda}_{[\nu}\langle C_{\mu]}\rangle, \tag{181a}\] \[Q_{\lambda\mu\nu} = \frac{\beta}{6}g_{(\lambda\mu}\langle C_{\nu)}\rangle+\frac{18-12 \alpha\alpha_{{}_{I}}-\beta\alpha_{{}_{Z}}\sqrt{3}}{9(3\alpha_{{}_{I}}+\alpha _{{}_{Z}}\sqrt{3})}\left(g_{\lambda(\mu}\langle C_{\nu)}\rangle-g_{\mu\nu} \langle C_{\lambda}\rangle\right). \tag{181b}\]
One can immediately verify that if we decompose the latter under the Lorentz group, we will exactly find (5.26) as the only excited irreducible modes. Therefore, one can again refer to table 1 for the various cases.
An interesting remark is in order. The Lagrangian does undeniably propagate the massless combination \(C_{\mu}\), a spin-1 geometric "boson". This means that part of the post-Riemannian structure gets (self-)excited but it turns out to be impossible to make a gauge-independent statement about specifically which part that is. For example, what appears to be an excitation of only torsional DoF in one gauge, shows up as an excitation of only non-metricity DoF in another. Hence, propagation of the MAGswell field is tantamount to a self-excitation of the connection background with different parts of the latter being excited in the different gauges.
Do also note that the action (134) can be thought of as the massless limit of a massive theory which has an action like (134), but with \(\mathcal{L}_{ct}\) replaced by
\[\mathcal{L}_{mass}=-\frac{1}{2}\left[(\mu^{2}\alpha_{{}_{I}}^{2}-3)B_{\mu}^{{} _{I}}B_{\nu}^{{}_{I}}+2\mu^{2}\alpha_{{}_{I}}\alpha_{{}_{Z}}B_{\mu}^{{}_{I}}B_ {\nu}^{{}_{Z}}+(\mu^{2}\alpha_{{}_{Z}}^{2}+3)B_{\mu}^{{}_{Z}}B_{\nu}^{{}_{Z}} \right]g^{\mu\nu}, \tag{182}\]
\begin{table}
\begin{tabular}{|l l|l|l l l|} \hline \(\alpha\) & \(\beta\) & \(\tilde{\alpha}\) & \(T_{\mu}\) & \(\rho_{\mu}\) & \(u_{\mu}\) \\ \hline \(0\) & \(0\) & \(\frac{3}{3\alpha_{{}_{I}}+\alpha_{{}_{Z}}\sqrt{3}}\) & \(0\) & \(0\) & \(\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{ \check{\check{\check{\check
such that, up to surface terms,
\[I_{C}=\frac{1}{2}\int\sqrt{-\overline{\mathsf{g}}}d^{4}x\left(\tilde{R}-2\Lambda+ \hat{R}_{T}+3S^{2}-\mu^{2}C^{2}-\frac{1}{2}F_{(C)}^{2}\right), \tag{101}\]
always in the d-AF\({}^{\circ}\). Obviously, \(\mathcal{L}_{ct}=\mathcal{L}_{mass}(\mu=0)\). The last two terms in (101) imply that the combination \(C_{\mu}\) behaves as a Proca field with mass \(\mu\). Since the HP action already introduces a mass scale proportional to the Planck mass,8 naturalness criteria suggest that we take \(\mu\) to be of the same order (the composite field \(C_{\mu}\) is part of the space-time geometry, not some external field). The field equations in the d-AF\({}^{\circ}\) are (119) except that \(\tilde{\nabla}_{\nu}F_{(C)}^{\nu\mu}=0\) is replaced by the Proca equation
Footnote 8: Remember that we have set the reduced Planck mass to unity.
\[\tilde{\nabla}_{\nu}F_{(C)}^{\nu\mu}-\mu^{2}C^{\mu}=0. \tag{102}\]
Observe that the massive action and the field equations following from it, do still possess a symmetry under (114) if \(\theta^{A}=0\). This means that the propagated combination \(C_{\mu}\) is a _massive_ vector-boson, and the geometric interpretation of this propagation again falls into the previous scheme, viz, it is subject to the choice of gauge.
Finally, we remark that the MAGswell field is of course by itself not a solid and unique concept. Nevertheless, let us justify why we think that \(C_{\mu}\) is indeed the most general candidate to describe it. First of all, playing the devil's advocate, one could argue that there are more general projective-invariant combinations to take as our \(B^{A}\) fields; we have already shown this in section 4. Indeed, there is simply no physical argument favoring (115) over (114). Sure, the diagonal form of the mass-squared matrix and the emergence of \(SO\) symmetries under transformations in field space are nice features, but they are far from being necessary restrictions. Actually, these features are completely absent here because (i) we have removed the mass terms, and (ii) there are no \(SO\) transformations being a symmetry of (100). However, the real question is if we would gain more insight by considering a more complicated change of field variables. The answer is no, for \(C_{\mu}\) would again be a linear combination of the \(A^{I}\)'s but with different coefficients. Assuming that we would also properly modify \(\mathcal{L}_{ct}\) as to remove algebraic instances of the new \(B^{A}\)'s, it is evident that we would not get qualitatively different results.
Second, one could argue that projective invariance of the action is definitely not mandatory. One could instead add a kinetic term for any of the vector variables in the AF and remove all algebraic instances of this field from the action by introducing a proper counter-term Lagrangian. This theory, which would no longer be invariant under (100) (the AF analogue of what is a projective transformation in the FF), would then propagate the graviton and the specific mode. Fortunately, one can easily prove that the solutions in all these different cases would correspond to the one and only solution in the theory with action (100) in different gauges.
### Non-linear interacting MAG theory, black holes and solitons
To showcase the usefulness of this new approach to MAG for obtaining exact solutions with torsion and non-metricity, let us propose a four-dimensional interacting action with
non-linear dynamics for the MAGswell field \(C_{\mu}\) and the pseudo-vector \(S_{\mu}\), namely
\[I_{NL}=\int\sqrt{-\mathsf{g}}d^{4}x\left[\frac{1}{2}(R-2\Lambda-\hat{\mathcal{R }}_{V})-\gamma_{1}F_{(C)}^{2}-\gamma_{2}F_{(S)}^{2}-\gamma\mathcal{L}_{int} \right], \tag{101}\]
where
\[\mathcal{L}_{int}=\delta_{\nu_{1}\ldots\nu_{4}}^{\mu_{1}\ldots\mu_{4}}F_{\mu_{ 1}\mu_{2}}^{(C)}F_{\mu_{3}\mu_{4}}^{(S)}F_{(C)}^{\nu_{1}\nu_{2}}F_{(S)}^{\nu_{ 3}\nu_{4}}, \tag{102}\]
\(\gamma\) is a positive coupling constant of mass-dimension \(-4\), and \(\gamma_{1},\gamma_{2}\) are positive coupling constants of mass-dimension \(0\). This is the form of the action in the d-AF\({}^{\circ}\). The field strengths can be customarily written using only the partial derivative, i.e., \(F_{(C)}\) as previously defined and \(F_{\mu\nu}^{(S)}=\partial_{[\mu}S_{\nu]}\) with \(S_{\mu}\) given in eq. (4). The term \(R\) stands for \(\tilde{R}+\hat{R}_{T}+\hat{\mathcal{R}}_{V}+\text{t.d.}\), with the constituents defined in eqs. (100). One can always express the action in the FF (or the AF) by recalling eqs. (100). This interacting Lagrangian was proposed in [40] for two distinct potentials in a Riemannian setup. Here, we endow these potentials with a special geometric origin and cast the whole thing as a MAG theory.
In the d-AF\({}^{\circ}\), the field equations read
\[\hat{\mathcal{O}}_{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
in the FF. Following the steps laid down in the previous section, we expect a connection solution with torsion and non-metricity
\[T^{\lambda}{}_{\mu\nu} = \frac{2\alpha}{3}\delta^{\lambda}_{[\nu}\langle C_{\mu]}\rangle+ \langle S_{\alpha}\rangle\tilde{\epsilon}^{\alpha\lambda}{}_{\mu\nu}, \tag{115a}\] \[Q_{\lambda\mu\nu} = \frac{\beta}{6}g_{(\lambda\mu}\langle C_{\nu)}\rangle+\frac{18-12 \alpha\alpha{}_{I}-\beta\alpha_{Z}\sqrt{3}}{9(3\alpha{}_{I}+\alpha_{Z}\sqrt{3 })}\left(g_{\lambda(\mu}\langle C_{\nu)}\rangle-g_{\mu\nu}\langle C_{\lambda} \rangle\right), \tag{115b}\]
respectively, where \(\langle C\rangle\) and \(\langle S\rangle\) satisfy the non-linear differential equations (114b) and (114c). We remind the reader that our gauge freedom is fully exhausted once we fix values for \(\alpha,\beta\) (see table 1).
Now, let us consider the static spherically-symmetric metric ansatz
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Sigma_{2}^{2}, \tag{116}\]
where \(d\Sigma_{2}^{2}=d\chi^{2}+\sin^{2}\chi dy^{2}\) gives the line element of a two-dimensional spherical section with \(\chi,y\) compact. We also make the following ansatze,
\[C_{\mu}=c(r)\delta^{0}_{\mu},\qquad S_{\mu}=p\cos\chi\delta^{3}_{\mu}, \tag{117}\]
which result in
\[F^{(C)}_{\mu\nu}=c^{\prime}\delta^{10}_{\mu\nu},\qquad F^{(S)}_{\mu\nu}=p\sin \chi\delta^{32}_{\mu\nu}. \tag{118}\]
A prime denotes differentiation with respect to \(r\).
Given the above, eq. (114c) is identically satisfied, while eq. (114b) gives
\[r(8\gamma p^{2}+\gamma_{1}r^{4})c^{\prime\prime}+2(\gamma_{1}r^{4}-8\gamma p^ {2})c^{\prime}=0. \tag{119}\]
This yields the first integral
\[c^{\prime}=-\frac{qr^{2}}{\gamma_{1}r^{4}+8\gamma p^{2}}, \tag{120}\]
where \(q\) is an integration constant. Integrating once more, we get
\[c=c_{0}+\frac{q}{\gamma_{1}r}{}_{2}F_{1}\left[\frac{1}{4},1,\frac{5}{4};- \frac{8\gamma p^{2}}{\gamma_{1}r^{4}}\right], \tag{121}\]
where \({}_{2}F_{1}\) is the Gaussian hypergeometric function [41], and \(c_{0}\) is another constant of integration. Therefore our connection solution is such that its torsion and non-metricity read
\[T^{\lambda}{}_{\mu\nu} = -\frac{\alpha}{3}\delta^{\lambda 0}_{\mu\nu}c+p\cos\chi\tilde{ \epsilon}^{3\lambda}{}_{\mu\nu}, \tag{122a}\] \[Q_{\lambda\mu\nu} = c\left[\frac{\beta}{6}g_{(\lambda\mu}\delta^{0}_{\nu)}+\frac{18- 12\alpha\alpha{}_{I}-\beta\alpha_{Z}\sqrt{3}}{9(3\alpha{}_{I}+\alpha_{Z}\sqrt {3})}\left(g_{\lambda(\mu}\delta^{0}_{\nu)}-g_{\mu\nu}\delta^{0}_{\lambda} \right)\right], \tag{122b}\]
with \(c\) given in (121).
Plugging this into (114d), we find that
\[-\frac{2r}{f}\check{E}_{00}=f^{\prime}+\frac{f}{r}-\frac{k}{r}+\Lambda r+ \frac{2\gamma_{2}p^{2}}{r^{3}}+\frac{2q^{2}r}{\gamma_{1}r^{4}+8\gamma p^{2}}=0. \tag{123}\]
Since
\[\tilde{E}_{11}=-f^{-2}\tilde{E}_{00},\qquad\tilde{E}_{33}=\sin^{2}\chi\tilde{E}_{22}, \tag{108}\]
and
\[\tilde{E}_{22}=\frac{r^{2}}{2f}\tilde{E}_{00}-\left(\frac{r^{3}}{2f}\tilde{E}_{0 0}\right)^{\prime}, \tag{109}\]
we only have to find the solution to eq. (105), which reads
\[f=1-\frac{2M}{r}-\frac{\Lambda r^{2}}{3}+\frac{2\gamma_{2}p^{2}}{r^{2}}+\frac{ 2q^{2}}{\gamma_{1}r^{2}}{}_{2}F_{1}\left[\frac{1}{4},1,\frac{5}{4};-\frac{8 \gamma p^{2}}{\gamma_{1}r^{4}}\right]. \tag{110}\]
The symbol \(M\) stands for yet another integration constant, this time associated with the mass. The very interesting metric background (110) has been extensively studied in [40; 42], and there is no need to discuss it here in depth. In our case, nevertheless, the corrections to the Schwarzschild-(A)dS metric is due to a richer space-time geometry and not due to the introduction of additional fields (like a Maxwell field). In this sense, this is a novel result.
Some comments are in order. Observe that by setting \(p=0\), the background (110) assumes the form
\[f=1-\frac{2M}{r}+\frac{8q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}, \tag{111}\]
and, up to choice of the integration constant \(q\), it is indeed the metric solution in the MAGswell theory with action (101) if we make the ansatz (102) for \(C_{\mu}\). Moreover, the torsion and non-metricity of the connection solution, eqs. (104), acquire the form (106), ergo, we recover the full solution in the MAGswell model, as a special case. Another interesting setup is to consider the action (107) with \(\Lambda=0=\gamma_{1}\). In this case, the connection solution will have torsion and non-metricity (104) with
\[c=c_{0}+qr^{3}, \tag{112}\]
whereas the metric function \(f\) will be
\[f=1-\frac{2M}{r}+\frac{2\gamma_{2}p^{2}}{r^{2}}-\frac{\Lambda_{eff}r^{2}}{3}, \tag{113}\]
where \(\Lambda_{eff}>0\) stands for the effective cosmological constant
\[\Lambda_{eff}=16\gamma p^{2}q^{2}. \tag{114}\]
Finally, non-singular solutions were reported in [40] for a specific choice of the mass parameter \(M\) in a strongly-coupled regime. The need to go to such a regime will not be necessary here; we will just set \(\gamma_{2}=0\) and choose our mass parameter as
\[M=M_{*}:=\frac{\pi q^{2}}{4(2\gamma p^{2}\gamma_{1}^{3})^{1/4}}. \tag{115}\]
Then, eq. (110) assumes the expression
\[f=1-\frac{2M_{*}}{r}-\frac{\Lambda r^{2}}{3}+\frac{2q^{2}}{\gamma_{1}r^{2}}{} _{2}F_{1}\left[\frac{1}{4},1,\frac{5}{4};-\frac{8\gamma p^{2}}{\gamma_{1}r^{4 }}\right], \tag{116}\]
and admits the near-origin expansion
\[f\underset{r\to 0}{=}1-\left(\frac{q^{2}}{12\gamma p^{2}}+\frac{\Lambda}{3} \right)r^{2}+\mathcal{O}(r^{3}). \tag{102}\]
If \(\Lambda\geq 0\) or \(-q^{2}/(4\gamma p^{2})<\Lambda<0\), the presence of a de Sitter core with radius
\[l_{dS}=\frac{2p\sqrt{3\gamma}}{\sqrt{q^{2}+4\gamma p^{2}\Lambda}}, \tag{103}\]
is manifest, ensuring regularity of Riemann-curvature invariants at the origin and completeness in the geodesic sense [43]. A further study of the causal structure of the solution reveals [40] that, for certain values (or ranges thereof) of the coupling/integration constants, eq. (101) describes either a gravitational soliton (horizon-free solution with regular origin), or just a standard black hole solution with an extremal limit. To the best of our knowledge, regular black hole solutions with torsion and non-metricity have not been yet reported in the MAG literature.
Since the actual novelty in the full solution is the existence of a non-trivial connection background, we find it worth to include a few lines about the behavior of the latter in various limits. First, let us write torsion and non-metricity in a coordinate-free manner by introducing a vierbein field \(e^{\mu}_{a}\), with indices \(a,b,...=(0),...,(3)\) and inverse \(e^{a}_{\mu}\) satisfying the orthonormality relation \(g_{\mu\nu}=\eta_{ab}e^{a}_{\mu}e^{b}_{\nu}\). In particular, let us choose it to be diagonal, viz.,
\[e^{a}_{\mu}=\text{diag}\left(\sqrt{f},\frac{1}{\sqrt{f}},r,r\sin\chi\right). \tag{104}\]
Then, the only non-vanishing components of \(T^{a}{}_{bc}=e^{a}_{\lambda}e^{\mu}_{b}e^{\nu}_{c}T^{\lambda}{}_{\mu\nu}\) are
\[T^{(0)}{}_{(1)(2)} = T^{(1)}{}_{(0)(2)}=-T^{(2)}{}_{(0)(1)}=\frac{p\cot\chi}{r},\qquad T ^{(i)}{}_{(0)(i)}=\frac{\alpha c}{3\sqrt{f}}, \tag{105}\]
where \(i,j,...\) take values in \(\{1,2,3\}\). It seems that the \((i)(0)(i)\) components will be singular at the horizon radius \(r=r_{+}\). Fortunately, this can be remedied by fixing the integration constants \(c_{0}\) in (100) as
\[c_{0}=-\frac{q}{\gamma_{1}r_{+}}{}_{2}F_{1}\left[\frac{1}{4},1,\frac{5}{4};- \frac{8\gamma p^{2}}{\gamma_{1}r_{+}^{4}}\right], \tag{106}\]
so that \(c\sim(r-r_{+})\) near the horizon surface. Moreover, all components of the torsion tensor exhibit a \(r^{-1}\) fall-off at asymptotic infinity. Next, we have a single pole at the origin \(r=0\) due to the axial part. This pole persists even in the case of the regular metric solution (101). If we assume that a probe particle with micro-structure follows the auto-parallels, then it is a good question to ask whether this particle is going to "feel" the torsion singularity at the origin. Thankfully, the axial part of torsion drops out of the auto-parallel equation [20], and thus, this singular behavior should not really be a cause for concern! Finally, all components of \(Q_{abc}=e^{\lambda}_{a}e^{\mu}_{b}e^{\nu}_{c}Q_{\lambda\mu\nu}\) are proportional to \(c/\sqrt{f}\). For \(f\) as in (104), this ratio vanishes at all previously discussed radii. On the other hand, in the case of the regular metric (101), it acquires a finite value at the origin. In the regular extremal case, it further is finite also at \(r=r_{+}\).
### Cosmological constant powered by torsion
It is an old fact that the minimal coupling of a 3-form field to Einstein gravity without a cosmological constant leads to Einstein's field equations with a cosmological constant purely derived from a gauge principle [44]. Here, we shall disclose a MAG model with no cosmological constant which also leads to pure gravity with a cosmological constant, the latter now powered by axial torsion.
Let us consider the projective-invariant action
\[I_{H}=\frac{1}{2}\int\sqrt{-\mathsf{g}}d^{4}x\left(R+\frac{1}{4}H^{2}-\frac{1}{ 24}F_{(H)}^{2}\right), \tag{100}\]
where \(F_{\lambda\mu\nu\rho}^{(H)}=4\partial_{[\lambda}H_{\mu\nu\rho]}\). The purpose of the second term in the above integral is to cancel out the mass term for \(H_{\lambda\mu\nu}\) present in the AF expression of the Ricci scalar. This ensures that the action (100) is invariant under the translation
\[H^{\prime}{}_{\lambda\mu\nu}=H_{\lambda\mu\nu}+\partial_{[\lambda}B_{\mu\nu]} \tag{101}\]
which corresponds to the transformation
\[\Gamma^{\prime}{}^{\lambda}_{\mu\nu}=\Gamma^{\lambda}_{\mu\nu}-\frac{1}{2}g^{ \lambda\rho}\partial_{[\rho}B_{\mu\nu]} \tag{102}\]
of the affine connection in the FF, with \(B_{\mu\nu}\) being an arbitrary 2-form field.
In the convenient AF\({}^{\circ}\), the field equations read
\[\dot{\mathcal{O}}^{\lambda\mu\nu}{}_{I} = \frac{1}{6}\tilde{\nabla}_{\alpha}F_{(H)}^{\alpha\lambda\mu\nu}=0,\] (103a) \[\dot{\mathcal{O}}^{\lambda\mu\nu}_{\phantom{\lambda\mu\nu}\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
since \(F_{(H)}\) is a top-form in four dimensions. Clearly, equation (108a) implies that \(\chi\) is an integration constant, say equal to \(\chi_{0}\). Consequently, we are left with
\[\tilde{G}_{\mu\nu}+\frac{1}{2}\chi_{0}^{2}g_{\mu\nu}=0, \tag{112}\]
which will determine the metric, and we directly find that
\[\tilde{R}_{\mu\nu}=\frac{\chi_{0}^{2}}{2}g_{\mu\nu}, \tag{113}\]
which is the familiar Riemannian Ricci-curvature condition for Einstein manifolds with positive constant curvature.
As promised, we found a connection solution which features only axial torsion \(H_{\lambda\mu\nu}\). Again, we stress out that this type of torsion has no effect on the auto-parallels, i.e., the latter continue to coincide with the geodesics. We also saw that our field equations do effectively become Einstein's field equations with a positive (effective) cosmological constant, \(\Lambda_{eff}=\chi_{0}^{2}/2\), once we integrate out the connection. The cosmological solution in the absence of matter sources would then be a de Sitter universe with Hubble constant \(H\propto|\chi_{0}|\) where the expansion is now driven by an actual integration constant, powered by torsion, instead of an a priori fixed value.
## 6 Summary and future prospects
We started from the observation that the affine connection is a single field encoding \(n^{3}\)-many off-shell DoF, arguing that, for certain purposes, it might be more efficient to distribute these degrees among more than one fields. We then proceeded with a convenient change of field variables \(\{g,\Gamma\}\to\{g,\dot{O}^{N},A^{I}\}\) going to a framework which we dubbed AF. Besides the metric, the new field variables are the irreducible pieces of the torsion and non-metricity tensors under the Lorentz group. They are thus automatically identified with the fundamental fields \(\{g,\Gamma\}\) in the FF.
We worked out in detail the relations between the functional derivatives in the two frameworks and concluded that, not surprisingly, the field equations in the AF imply and are implied by the field equations in the FF. Hence, the field equations in the AF constitute an equivalent system, and we have the freedom, at any stage, to switch between the different frameworks. To complete the mapping, we further disclosed a correspondence between linear connection transformations in the FF and translations in the AF while we also determined how the \(\dot{O}^{N}\)'s and the \(A^{I}\)'s should transform under a local Weyl re-scaling of the metric.
We then applied the AF to the Hilbert-Palatini action and showed its well-known equivalence to Einstein gravity (up to choice of a gauge) also in the new framework. Observing that a projective transformation of the connection corresponds to simultaneous translations of the \(A^{I}\)'s in the AF, we further argued that the projective symmetry manifests itself as a true gauge symmetry in the new framework, i.e., one of the components of the vector triplet \(A_{\mu}\) is redundant. In particular, this means that any projective-invariant action admits a description in terms of a reduced set of variables \(\{g,\ddot{O}^{N},B^{A}\}\) where the \(B^{A}\)'s are in general
identified with linear combinations of the \(A^{I}\)'s. This led us to develop a useful variant of the AF, which we dubbed diminished AF or AF\({}^{\circ}\) for short.
We saw that there exists a particular choice of combinations \(B^{A}\) which reveals an \(SO(1,1)\) symmetry of the \(n\)-dimensional HP action under a group action on the components of the doublet \(B_{\mu}\). In \(n=4\), the field variables in the AF\({}^{\circ}\) can be re-organized. Using the fact that the dual of the 3-form torsion is a pseudo-vector, the quadruplet \(\hat{O}_{\lambda\mu\nu}\) is reduced to a triplet by handing its first component to the doublet \(B_{\mu}\) which becomes a triplet. This is just a special four-dimensional variant of the AF\({}^{\circ}\), obtained via the change of variables \(\{g,\hat{O}^{N},B^{A}\}\rightarrow\{g,\hat{O}^{I},B^{A}\}\), which we called d-AF\({}^{\circ}\) for the sake of clarity. As it turns out, the HP action proves to be an \(SO(1,2)\)-symmetric action in the d-AF\({}^{\circ}\) where the group action mixes the \(B^{\cal A}\)'s. Actions of the \(SO(2)\) subgroup rotate the elements of a two-dimensional subspace with the discrete version for \(\theta=\pi/2\) interpreted as a rotation of axial torsion to non-metricity, and vice versa.
Observing that any MAG theory in these alternative frameworks can be handled as a Riemannian theory with additional fields, we argued that it is an efficient strategy to use solvable (and suitable) Riemannian theories as "seeds" for solvable MAG theories which propagate the connection in vacuum. As our first example, we drew inspiration from the elegant Einstein-Maxwell theory. We proposed a theory for what we called the MAGswell field, a composite field labeling a projective-invariant linear combination of torsion and non-metricity traces. The naive action should follow from the Maxwell action by replacing \(\tilde{R}\) with \(R\) and the gauge field with the MAGswell field, the latter having nothing to do with a gauge connection. Doing so, one of course notices that what was a \(U(1)\) of the second kind in the Riemannian case does not translate into a symmetry of the MAG theory under locally exact shifts of the MAGswell field. The reason is that the presence of the Ricci scalar makes the field massive. Thus, a counter-term Lagrangian was also included with the sole purpose of removing the mass terms for the constituents of the composite MAGswell field.
We then discussed the symmetries of the MAGswell action in all frameworks. Exactly because the MAGswell field is a composite object, we showed that the action is symmetric under a 2-parameter transformation of the vector variables in the AF\({}^{\circ}\) (or the d-AF\({}^{\circ}\)), which combines a transformation preserving the MAGswell field and one translating it by an exact vector. In the other frameworks, this symmetry shows up as a symmetry under a 3-parameter transformation, a fact attributed to the absorption of the projective-symmetry charge in the diminished AF. We derived the field equations in the d-AF\({}^{\circ}\) and presented the solution in all frameworks, finding a proper expression that captures its form in all gauges. Actually, the reader was provided with table 1 which displays all cases possible, and which proves that the propagation of the MAGswell field, a gauge-independent fact, cannot be tied to a self-excitation of a uniquely determined part of the post-Riemannian structure in a gauge-independent fashion, i.e., different parts of the connection background get excited for different choices of gauge.
After this instructive example, we proceeded with a more complicated theory, this time inspired by quasi-topological electromagnetism [40]. We proposed a Lagrangian with non-linear dynamics for the MAGswell field and the torsion pseudo-vector letting them interact with each other. After briefly discussing the symmetries and deriving the field equations,
we adopted a static and spherically-symmetric metric ansatz, together with compatible ansatze for torsion and non-metricity, in an attempt to recover the black hole solution reported in [40; 42]. The full solution describes a black hole with a non-zero connection background sourcing the post-Schwarzschild contributions to the metric solution. Under a certain tuning of the integration constants, we also showed that this black hole exhibits a regular core and is thus complete in the geodesic sense.
However, assuming that particles with micro-structure follow auto-parallels, we also had to analyze the behavior of the torsion and non-metricity of the solution at all radii of interest. Doing so, we had to fix yet another integration constant to avoid a singular behavior at the horizon radius, but we concluded that there is no remedy for a single pole at the origin due to axial torsion. This pole is inevitable even in the case of the regular black hole. Nevertheless, as we pointed out, the axial piece of torsion drops out from the auto-parallel equation meaning that the probe particle would never be affected by this torsion singularity.
Finally, as our last example, and inspired by the derivation of a cosmological constant from a gauge principle [44], we put forth a simple MAG action for the 3-form torsion. After deriving the field equations we presented a connection solution featuring only axial torsion which powers a positive effective cosmological constant. The cosmological solution in this MAG theory -- in the absence of matter sources -- would be a de Sitter universe with the expansion driven by torsion. We remarked that the effective cosmological constant is an integration constant as opposed to a fixed-value \(\Lambda\) introduced by hand in the action.
The main goal of this work was to communicate the idea that a smart change of field variables can be a really useful strategy when trying to find solvable MAG theories. Indeed, our proposal proves to be a fruitful one, for although we restricted ourselves to showing only three examples, these are suggestive of many more. Writing down simple field theories for the new field variables compared to considering combinations of curvature invariants to make the connection dynamical, is of course a far less general method, albeit a much more targeted and result-oriented one. In the future, we plan to give more examples and solutions which are not necessarily inspired by Riemannian theories. We find it interesting to study kinetic theories for the various tensor modes and also investigate if (and how) Riemannian theories with scalar fields can fit as an inspiration into this description of MAG.
## Appendix A Irreducible decomposition of a rank-3 tensor
The irreducible decomposition of a general rank-3 tensor \(\Delta_{\lambda\mu\nu}\) under the Lorentz group reads
\[\Delta_{\lambda\mu\nu} = \Delta_{[\lambda\mu\nu]}+\mathring{\Delta}_{(\lambda\mu\nu)}+ \mathring{D}_{\lambda[\mu\nu]}+\mathring{D}_{\lambda(\mu\nu)}+\bar{\Delta}_{( \lambda\mu\nu)}+\bar{D}_{\lambda[\mu\nu]}+\bar{D}_{\lambda(\mu\nu)}, \tag{100}\]
where
\[\begin{split}\mathring{D}_{\lambda\mu\nu}\,=&\ \Delta_{\lambda\mu\nu}-\Delta_{( \lambda\mu\nu)}-\Delta_{[\lambda\mu\nu]}-\frac{1}{n-1}g_{\lambda[\mu}\left( \Delta^{\alpha}{}_{|\alpha|\nu]}-\Delta^{\alpha}{}_{\nu]\alpha}\right)-\\ &-\frac{1}{3(n-1)}g_{\lambda(\mu}\left(\Delta^{\alpha}{}_{| \alpha|\nu)}+\Delta^{\alpha}{}_{\nu)\alpha}-2\Delta_{\nu)\alpha}{}^{\alpha} \right)+\\ &+\frac{1}{3(n-1)}g_{\mu\nu}\left(\Delta^{\alpha}{}_{\alpha\lambda }+\Delta^{\alpha}{}_{\lambda\alpha}-2\Delta_{\lambda\alpha}{}^{\alpha}\right),\end{split} \tag{100a}\] \[\mathring{\Delta}_{(\lambda\mu\nu)}\,= \ \Delta_{(\lambda\mu\nu)}-\frac{1}{D+2}g_{(\mu\nu}\left(\Delta_{ \lambda)\alpha}{}^{\alpha}+\Delta^{\alpha}{}_{\lambda)\alpha}+\Delta^{\alpha} {}_{|\alpha|\lambda)}\right),\] (100b) \[\bar{\Delta}_{(\lambda\mu\nu)}\,= \ \frac{1}{n+2}g_{(\mu\nu}\left(\Delta_{\lambda)\alpha}{}^{\alpha}+ \Delta^{\alpha}{}_{\lambda)\alpha}+\Delta^{\alpha}{}_{|\alpha|\lambda)}\right) =\Delta_{(\lambda\mu\nu)}-\mathring{\Delta}_{(\lambda\mu\nu)},\] (100c) \[\bar{D}_{\lambda\mu\nu}\,= \ \frac{1}{n-1}g_{\lambda[\mu}\left(\Delta^{\alpha}{}_{|\alpha| \nu]}-\Delta^{\alpha}{}_{\nu]\alpha}\right)-\frac{1}{3(n-1)}g_{\mu\nu}\left( \Delta^{\alpha}{}_{\alpha\lambda}+\Delta^{\alpha}{}_{\lambda\alpha}-2\Delta_{ \lambda\alpha}{}^{\alpha}\right)+\] \[+\frac{1}{3(n-1)}g_{\lambda(\mu}\left(\Delta^{\alpha}{}_{|\alpha| \nu)}+\Delta^{\alpha}{}_{\nu)\alpha}-2\Delta_{\nu)\alpha}{}^{\alpha}\right)=D _{\lambda\mu\nu}-\mathring{D}_{\lambda\mu\nu}. \tag{100d}\]
## Appendix B Glossary
\begin{table}
\begin{tabular}{|l|l|} \hline \multicolumn{1}{|l|}{Indices} & Values \\ \hline \(\mu,\nu,...\) & 0,1,...,\(n-1\) \\ \(i,j,...\) & 1,2,...,\(n-1\) \\ \(a,b,...\) & (0),(1),...,(\(n-1\)) \\ \(M,N,...\) & _1, 2, 3, 4_ \\ \(I,J,...\) & _2, 3, 4_ \\ \(A,B,...\) & _1, 2_ \\ \(\mathcal{A},\mathcal{B},...\) & _1, 2, 3_ \\ \hline \end{tabular}
\end{table}
Table 2: Indices used in this work and their values.
\begin{table}
\begin{tabular}{|l|l|} \hline \multicolumn{1}{|l|}{Acronym} & Full name \\ \hline MAG & Metric-affine gravity \\ DoF & Degrees of freedom \\ FF & Fundamental framework, \(\{g,\Gamma\}\) \\ AF & Alternative framework, \(\{g,\mathring{O}^{N},A^{I}\}\) \\ AF\({}^{\circ}\) & Diminished alternative framework, \(\{g,\mathring{O}^{N},B^{A}\}\) \\ d-AF\({}^{\circ}\) & -, \(\{g,\mathring{O}^{I},B^{A}\}\) \\ \hline \end{tabular}
\end{table}
Table 3: Acronyms used in this work and their full name.
## Acknowledgments
D.I. work is funded by the Estonian Research Council grant (SJD14). K. P. acknowledges financial support provided by the European Regional Development Fund (ERDF) through the Center of Excellence TK133 "The Dark Side of the Universe" and PRG356 "Gauge gravity: unification, extensions and phenomenology". K.P. also acknowledges participation in the COST Association Action CA18108 "Quantum Gravity Phenomenology in the Multimessenger Approach (QG-MM)". The authors would also like to thank Anastasios C. Petkou for the fruitful discussions and valuable comments during this work.
|
2310.00528 | Turing patterns on a two-component isotropic growing system. Part 3:
Time dependent conditions and linear growth | We propose general conditions for the emergence of Turing patterns in a
domain that changes size through homogeneous growth/shrinkage based on the
qualitative changes of a potential function. For this part of the work, we
consider the most general case where the homogeneous state of the system
depends on time. Our hypotheses for the Turing conditions are corroborated with
numerical simulations of increasing/decreasing domains of the Brusselator
system for the linear growth/shrinking case. The simulations allow us to
understand the characteristics of the pattern, its amplitude, and wave number,
in addition to allowing us to glimpse the role of time as a bifurcation
parameter. | Aldo Ledesma-Durán | 2023-10-01T00:17:52Z | http://arxiv.org/abs/2310.00528v1 | # Turing patterns on a two-component isotropic growing system.
###### Abstract
We propose general conditions for the emergence of Turing patterns in a domain that changes size through homogeneous growth/shrinkage based on the qualitative changes of a potential function. For this part of the work, we consider the most general case where the homogeneous state of the system depends on time. Our hypotheses for the Turing conditions are corroborated with numerical simulations of increasing/decreasing domains of the Brusselator system for the linear growth/shrinking case. The simulations allow us to understand the characteristics of the pattern, its amplitude, and wave number, in addition to allowing us to glimpse the role of time as a bifurcation parameter.
Presentation
The Turing bifurcation in reaction-diffusion systems where the domain changes size is an essential model for understanding patterns in biological systems where, in most cases, the system changes size due to development. We know that the shape of the pattern at a specific time in a growing domain crucially depends on its past history [1; 2]. But this is not an exclusive property of growing domains, but the same dependence on past history also occurs in fixed domains. This phenomenon is probably related to persistence, _i.e._ the ability of a dissipative structure to maintain its current wavenumber [3]. This type of stability in a fixed-size diffusion reaction is known as Eckhaus stability, and its proof requires a nonlinear approximation to the solution of the system near the Turing bifurcation. However, in the case of a domain that changes over time, this analysis is not yet practicable since it has not been conclusively resolved, even from the linear approach, how to find the Turing bifurcation. One of the main problems in this direction is the temporal dependence of homogeneous states.
For this part of the work, we find the Turing bifurcation by considering the changes in the structure of a potential function for perturbations of the Fourier modes. From this potential function, we expect that all trajectories will decay to a stable point in the absence of diffusion, and that some will become (unstable) saddles for some wavenumber when diffusion is turned on, as we have done in the last part of the work, but now for the more general case where the homogeneous state depends on time. This will establish hypotheses of Turing pattern formation that will be tested against specific numerical simulations of the Brusselator RDD system using the finite difference method in a one-dimensional reaction diffusion system with homogeneous linear growth/shrinkage.
### Summary of Parts 1 and 2: Time dependent homogeneous state and potential function
In an isotropically growing system where a reaction-diffusion process occurs, the equations describing the dynamic is
\[\frac{\partial\mathbf{c}}{\partial t}+\frac{l(t)}{l(t)}\mathbf{c}(\xi,t)= \frac{1}{l^{2}(t)}\mathbb{D}\frac{\partial^{2}\mathbf{c}}{\partial\xi^{2}}+ \mathbf{f}(\mathbf{c}). \tag{1}\]
Here \(l(t)\) is the function measuring the expansion/shrinking of the domain, and the relationship between the real and computational domain is \(x=x_{0}+l(t)\xi\) with \(\xi\in[0,1]\), where \(\xi\) the fixed coordinate and \(x\) the actual coordinate. Besides \(\mathbf{c}\) represents the concentrations, \(\mathbb{D}\) the square diffusion matrix, and \(\mathbf{f}(\mathbf{c})\) the vector of chemical reactions.
Eq. (1) can be separated into that for the homogeneous state and for the perturbations, where the latter are assumed to depend on the spatial coordinate (unlike the former) and are comparatively small. The homogeneous state \(\mathbf{c}_{s}(t)\) obeys
\[\frac{\partial\mathbf{c}_{s}}{\partial t}+\frac{l^{\prime}(t)}{l(t)}\mathbf{c} _{s}=\mathbf{f}(\mathbf{c}_{s}), \tag{2}\]
and in general it depends on the time. In Part 1 [4], we show that under appropriate approximations related to 1) the slow variation of the domain, 2) a sufficient distance of the bifurcations and 3) smallness of nonlinear terms, we show that a good approximation for the homogeneous state satisfying \(\mathbf{c}_{s}(0)=\mathbf{c}_{0}\) in (2) is given by
\[\mathbf{c}_{s}(t)=\mathbf{c}_{0}-\mathbb{P}\frac{e^{\mathbf{\Lambda}t}}{l(t)} \left(\int\limits_{0}^{t}l^{\prime}(t^{\prime})e^{-\mathbf{\Lambda}t^{\prime} }dt^{\prime}\right)\mathbb{P}^{-1}\mathbf{c}_{0}. \tag{3}\]
Here, \(\mathbf{\Lambda}\) is the diagonal matrix of eigenvalues of the Jacobian \(\mathbf{J}\equiv\frac{\partial f}{\partial\mathbf{c}}(\mathbf{c}_{0})\), \(\mathbb{P}\) its modal matrix and \(\mathbf{c}_{0}\) the constant fixed point of the reaction where \(\mathbf{f}(c_{0})=\mathbf{0}\).
In constrast, the perturbations \(\mathbf{\zeta}\) of the system (1), to first order obey
\[\frac{\partial\mathbf{\zeta}}{\partial t}+\frac{\dot{l}(t)}{l(t)}\mathbf{\zeta}=\frac {1}{l^{2}(t)}\mathbb{D}\frac{\partial^{2}\mathbf{\zeta}}{\partial\xi^{2}}+\frac{ \partial\mathbf{f}}{\partial\mathbf{c}}(\mathbf{c}_{s})\mathbf{\zeta}. \tag{4}\]
For a two component system, the involved matrices are
\[\mathbf{\hat{J}}\equiv\frac{\partial\mathbf{f}}{\partial\mathbf{c}}(\mathbf{c }_{s})=\left(\begin{array}{cc}\hat{j}_{11}&\hat{j}_{12}\\ \hat{j}_{21}&\hat{j}_{22}\end{array}\right)\text{ and }\mathbb{D}=\left( \begin{array}{cc}d_{u}&0\\ 0&d_{v}\end{array}\right). \tag{5}\]
Therefore, the evaluation of the last term in (4) generally depends explicitly on the time-dependent homogeneous state through the factor \(\hat{\mathfrak{J}}(t)\).
In Part 2 of this series, we study the Turing conditions under the constant \(\mathbf{c}_{s}\approx\) approximation, which is appropriate, for example, for exponential growth/shrinkage [5]. In this case, after taking the Fourier series in the computational domain, for each wavenumber \(\kappa\), the Fourier modes obey
\[\frac{\partial\boldsymbol{\zeta}_{\kappa}}{\partial t}=\left[\hat{\mathfrak{J} }-\left(\frac{\kappa}{l(t)}\right)^{2}\mathds{D}-\frac{\dot{l}(t)}{l(t)} \mathds{I}\right]\boldsymbol{\zeta}_{\kappa}, \tag{6}\]
Here \(k(t)\equiv\kappa/l(t)\) is the wave number in the actual domain, and \(g(t)\equiv\dot{l}(t)/l(t)\) represents the percentage of size increased/decreased per unit of time. We will call the matrix in parentheses \(A(\kappa,t)\).
By first rewriting these equations in (6) as a pair of second-order equations, we can also write them in potential function form such that \(dV_{\kappa}/dt=F(u_{\kappa},u^{\prime}_{\kappa})\). The qualitative changes of this function allow us to establish the stability properties of the trajectories of each Fourier mode. This allows us to show that the system in the absence of diffusion is stable if
\[\Delta_{A}(0,t)+g^{\prime}(t)\geq 0\,\,\tau_{A}(0,t)\leq 0\ \text{and}\ \frac{d}{dt}\left[\Delta_{A}(0,t)+g^{\prime}(t)\right]\leq 0. \tag{7}\]
These conditions guarantee that \(V_{0}\) (the potential function associated with the mode \(\kappa=0\)) is an elliptical paraboloid centered at the origin, toward which all trajectories are directed. Instability with diffusion requires that for some wavenumber \(\kappa_{m}\), \(V_{m}\geq 0\) and that \(V_{m}\) be a saddle, which requires
\[\Delta_{A}(\kappa_{m},t)+2d_{u}k_{m}(t)k^{\prime}_{m}(t)+g^{\prime}(t)<0. \tag{8}\]
The same condition applies also by also using \(d_{v}\) instead of \(d_{u}\). Here and from now on, \(\tau\) and \(\Delta\) refer to the trace and determination of the matrix in the subscript.
In terms of the original matrices of the RDD system, these conditions are sumarized in the second column of Table (1). Here \(d_{i}\) refers interchangeably to either \(d_{u}\) or \(d_{v}\). These conditions were applied for example for exponential growth, \(l(t)=l(0)e^{rt}\) and their predictions were corroborated against numerical simulations of the Brusselator giving excellent results in the prediction of the Turing space and the number of wave when \(|r|<0.1\), and a good prediction of Turing space asymmetries with respect to the Turing and Hopf bifurcations for growth/shrinkage processes [5].
## II Turing conditions for growing domain with time dependent homogeneous state
We now consider the more general case where \(\mathbf{c}_{s}\) depends on time and therefore also \(\frac{\partial\mathbf{f}(\mathbf{c}_{s})}{\partial\mathbf{c}}=\hat{\mathfrak{ J}}(t)\). After taking the Fourier transform in the computational domain, for each wavenumber \(\kappa\), eq. (1) becomes
\[\frac{\partial\boldsymbol{\zeta}_{\kappa}}{\partial t}=\left[\hat{\mathfrak{J} }(t)-\left(\frac{\kappa}{l(t)}\right)^{2}\mathds{D}-\frac{\dot{l}(t)}{l(t)} \mathds{I}\right]\boldsymbol{\zeta}_{\kappa}, \tag{9}\]
and the matrix in parenthesis is now
\[A(\kappa,t)=\hat{\mathfrak{J}}(t)-k^{2}(t)\mathds{D}-g(t)\mathds{I}. \tag{10}\]
If we define \(\boldsymbol{\zeta}_{\kappa}=(u_{\kappa},v_{\kappa})\), the system in component form is
\[u^{\prime}_{\kappa}(t)=-\left(\frac{\kappa}{l(t)}\right)^{2}d_{1 }u_{\kappa}+\hat{j}_{11}(t)u+\hat{j}_{12}(t)v_{\kappa}-g(t)u_{k},\] \[v^{\prime}_{\kappa}(t)=-\left(\frac{\kappa}{l(t)}\right)^{2}d_ {2}v_{\kappa}+\hat{j}_{21}(t)u_{\kappa}+\hat{j}_{22}(t)v_{\kappa}-g(t)v_{k}.\]
The second order equation for the first component is
\[u_{\kappa}^{\prime\prime}(t)-\left[\tau_{A}+\frac{d}{dt}\log[\hat{j}_{12}]\right] u_{\kappa}^{\prime}+\left[\Delta_{A}+2d_{u}k(t)k^{\prime}(t)+g^{\prime}(t)-\hat{j}_{11}^{ \prime}+A_{11}\frac{d}{dt}\log[\hat{j}_{12}]\right]u_{\kappa}=0, \tag{11}\]
and one similar for \(v_{k}\) by changing \(d_{u}\to d_{v}\), \(\hat{j}_{12}\rightarrow\hat{j}_{21}\), \(\hat{j}_{11}^{\prime}\rightarrow\hat{j}_{22}^{\prime}\) and \(A_{11}\to A_{22}\). Multiplying by \(u_{\kappa}^{\prime}\) and rearranging, we have
\[\frac{dV_{\kappa}}{dt}=\left[\tau_{A}(\kappa)+\frac{d}{dt}\log[\hat{j}_{12}] \right]u_{\kappa}^{\prime 2}+\frac{u_{\kappa}^{2}}{2}\frac{d}{dt}\left[\Delta_{A}( \kappa)+2d_{u}k(t)k^{\prime}(t)+g^{\prime}(t)-\hat{j}_{11}^{\prime}+A_{11}( \kappa)\frac{d}{dt}\log[\hat{j}_{12}]\right]. \tag{12}\]
where the potential function is
\[V_{\kappa}=\frac{u_{\kappa}^{\prime 2}}{2}+\left[\Delta_{A}(\kappa)+2d_{u}k(t)k^ {\prime}(t)+g^{\prime}(t)-\hat{j}_{11}^{\prime}+A_{11}(\kappa)\frac{d}{dt} \log[\hat{j}_{12}]\right]\frac{u_{\kappa}^{2}}{2}. \tag{13}\]
The stability conditions in the absence of diffusion (\(\kappa=0\)) require that \(V_{0}\geq 0\) and \(\hat{V}_{0}\leq 0\). This implies that
\[\Delta_{A}(0)+g^{\prime}(t)-\hat{j}_{11}^{\prime}-A_{11}(0)\frac{d }{dt}\log[\hat{j}_{12}] \geq 0, \tag{14}\] \[\tau_{A}(0)+\frac{d}{dt}\log[\hat{j}_{12}] \leq 0,\] (15) \[\frac{d}{dt}\left[\Delta_{A}(0)+g^{\prime}(t)-\hat{j}_{11}^{\prime }-A_{11}(0)\frac{d}{dt}\log[\hat{j}_{12}]\right] \leq 0. \tag{16}\]
These conditions guarantee that \(V_{0}\) is an elliptical paraboloid centered at the origin toward which all trajectories are directed.
Instability with diffusion requires that for some \(\kappa_{m}\), \(V_{m}\geq 0\) and that \(V_{m}\) be a saddle. Therefore
\[\Delta_{A}(\kappa_{m})+2d_{u}k_{m}(t)k_{m}^{\prime}(t)+g^{\prime}(t)-\hat{j}_ {11}-A_{11}(\kappa_{m})\frac{d}{dt}\log[\hat{j}_{12}]<0 \tag{17}\]
The condition for \(V_{m}\) to be a potential is that all trajectories descend, \(\dot{V}_{m}\leq 0\).
Following the same procedure as in Part 2 of this series, the conditions for the formation of Turing patterns derived in this section in terms of the original matrices are given in Table 1. In the left column we summarize the conditions for a system with a constant homogeneous state [5], and in the right column we add the correction due to the change in time of the homogeneous state found in this work. As in Part 2, we have added the labels \(S\), \(I\), and \(D\), which denote stability, instability, and domain conditions, respectively.
## III A study case: linear growth/shrinking of the Brusselator
Let us consider as an illustrative example the case where the growth/shrinkage is linear \(l(t)=l(0)(1+rt)\), where the growth rate is \(g(t)=r/(1+rt)\). As we show in Part 1 of this work, the homogeneous state of linear growth (\(r>0\)) and shrinkage (\(r<0\)) changes in time and, in the first case, slowly tends to the point concentration fixed \(\mathbf{c}_{0}\), while in the second it slowly moves away from such value [4].
The Brusselator is given by
\[\mathbf{f}(\mathbf{c})=(A-Bc_{u}-c_{u}+c_{u}^{2}c_{v},Bc_{u}-c_{u}^{2}c_{v})^{ T}. \tag{18}\]
The fixed point of the isolated reaction is in \(\mathbf{c}_{0}=(A,B/A)^{T}\), and the Jacobian and diffusion matrix of the fixed domain problem in (5) (see Ref. [3]) are
\[\mathbf{J}=\left(\begin{array}{cc}-1+B&A^{2}\\ -B&-A^{2}\end{array}\right)\text{ and }\text{D}=\left(\begin{array}{cc} \sigma&0\\ 0&1\end{array}\right). \tag{19}\]
The linear approximation for the value of the homogeneous state if \(|r|\) is relatively small is given by Eq. (3), and the results are too long to put on one page. However, it is a useful approximation when explicit closed conditions are needed for the emergence of patterns. However, to avoid the approximation here and focus only on the hipotheses for the Turing conditions, for this work, we directly use the steady state obtained from the direct numerical solution of eq. (3). Furthermore, from now on, to focus only on the effect of distance growth to the bifurcation, we will set \(A=1\) and \(\sigma=0.1\), which gives a critical value of the wavenumber and bifurcation parameters as \(k_{c}=\sqrt{A/\sqrt{\sigma}}\) and \(B_{T}=(1+A\sqrt{\sigma})^{2}\), respectively.
In Fig 1, we plot the value of the concentrations of the homogeneous state resulting from numerically solving the equation (3) for the value of \(B=1.75\) between the times \(t=0\) and \(t_{max}\). We have used as initial condition \(\mathbf{c}_{s}(0)=\mathbf{c}_{0}\). The final time for each value of \(r\) is chosen as the time required for the domain to reach ten times its original size (growth \(r>0\)), or decrease ten times its original size (shrinkage, \(r>0\) ). In this Fig 1, we corroborate that in the case of growth, the value of homogeneous concentrations tends to its fixed point value, and in the case of shrinkage, it moves away from it as time progresses [4]. Note also that there are rapid variations in the central area of this homogeneous concentration graph. These changes in turn can lead to rapid changes in the Turing region.
Now, knowing the homogeneous state, we can evaluate the Turing conditions deduced by us and summarized in the right column of Tab 1. These conditions depend on time. To take this dependence into account, in Fig. 2.a, we have plotted three different circles, whose different diameters reflect the time in which the Turing conditions apply. Thus we evaluate the conditions at times \(t_{max}/3\) (small circle), \(t_{max}/2\) (medium circle) and \(t_{max}\) (large circle), respectively.
Figure 1: Homogeneous state for both concentrations as a function of the parameter \(r\) and time. We set the value \(B=1.75\). The maximum time for simulation is chosen as the time needed to change the original size ten times.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \# & Constant HS & Correction for Time Dependent HS & – \\ \hline \hline S1) & \(\Delta_{\mathbf{j}}-\tau_{\mathbf{j}}g(t)+g^{2}(t)+g^{\prime}(t)\) & \(+\frac{(j_{11}(t)-g(t))j_{12}^{2}(t)}{j_{11}(t)}-\hat{j}_{11}^{\prime}(t)\) & \(>0\) \\ \hline S2) & \(\tau_{\mathbf{j}}-2g(t)\) & \(+\frac{\hat{j}_{12}^{2}(t)}{j_{12}(t)}\) & \(<0\) \\ \hline D3) & \(g^{\prime}(t)[2g(t)-\tau_{\mathbf{j}}]+g^{\prime\prime}(t)\) & \(+\Delta_{\mathbf{j}}^{\prime}-\tau_{\mathbf{j}}^{\prime}g(t)-j_{11}^{\prime \prime}(t)-\frac{\partial}{\partial t}\left[\frac{(g(t)-j_{11}(t))j_{2}^{2}(t) }{j_{12}(t)}\right]\) & \(\leq 0\) \\ \hline \hline I4) & \((2d_{t}-\tau_{\mathbf{D}})g(t)+\sigma_{\mathbf{D}\mathbf{j}}\) & \(+\frac{d_{a}\tau_{\mathbf{D}}}{d_{a}}\log[\hat{j}_{12}]\) & \(>0\) \\ \hline I5) & \(\sigma_{\mathbf{D}\mathbf{j}}^{2}-4\Delta_{\mathbf{D}}\Delta_{\mathbf{j}}+2g(t )[(2d_{j}-\tau_{\mathbf{D}})\sigma_{\mathbf{D}\mathbf{j}}+2\tau_{\mathbf{j}} \Delta_{\mathbf{p}}]\) & \(+\frac{j_{12}^{2}(t)(2(4\tau_{\mathbf{j}}\mathbf{D}g(t)+d_{a}\tau_{0}-2 \Delta_{\mathbf{D}}j_{11}(t)))}{j_{12}(t)}\) & \\ & \(+g^{2}(t)[4d_{i}^{2}-4d_{i}\tau_{\mathbf{D}}-4\Delta_{\mathbf{D}}+\tau_{ \mathbf{D}}^{2}]-4\Delta_{\mathbf{D}}g^{\prime}(t)\) & \(+4\Delta_{\mathbf{D}}\hat{j}_{11}^{2}(t)+\frac{d_{a}^{2}j_{12}^{2}(t)}{j_{12}( t)^{2}}\) & \(\geq 0\) \\ \hline \hline \(k_{m}^{2}\) & \(\min_{i,j\neq i}\left\{\frac{(2d_{i}-\tau_{\mathbf{D}})g(t)+\sigma_{\mathbf{D} \mathbf{j}}}{2\Delta_{\mathbf{D}}}\right\}\) & \(\min_{i,j\neq i}\left\{\frac{(2d_{i}-\tau_{\mathbf{D}})g(t)+\sigma_{\mathbf{D} \mathbf{j}}+d_{i}\frac{d_{i}}{d}\log[j_{i,j}]}{2\Delta_{\mathbf{D}}}\right\}\) & – \\ \hline \end{tabular}
\end{table}
Table 1: Turing conditions for a two-component system with isotropic growth. The middle column summarizes the conditions for a constant homogeneous state in Part 2 of this series. The right column presents the corrections for the time-dependent homogeneous state. \(\tau\) and \(\Delta\) refer to the trace and determinant of the matrix in the subscript, either \(\sigma\), the diagonal diffusion matrix \(\mathsf{D}\), or the Jacobian evaluated in the homogeneous state \(\hat{\mathbf{J}}\), depending on time.
In this way, three concentric circles reflect that a pattern is predicted during almost the entire process, and two or one circle would mean a Turing pattern that appears/disappears over time.
In Fig. 2.b, we show the results of our numerical simulations performed in Comsol Multiphysics of the RDD system at (1) for the Brusselator system at (18) using linear growth. The simulations are performed in a fixed computational domain with 100 equidistant vertices with a simulation time \(t_{max}\) calculated as the time it takes for the system to grow/shrink ten times the original size, depending on whether it is growing/shrinking, _i.e._, \(t_{max}=9/r\) or \(-9/10r\), respectively. The initial domain size is calculated using as reference the bifurcation wavenumber as \(l(0)=2n\pi/k_{c}\), with \(n\) equal to 3 or 19 for \(r>0\) and \(r<0\), respectively. We have used periodic boundary conditions and random disturbances of 10% of the value of the initial concentration of \(\mathbf{\hat{c}}_{0}\).
In Fig. 2.b we show with the letters H, T, M and P the numerical solutions corresponding to homogeneous, Turing, mixed-mode and only periodic solutions in time, respectively. As we explained in Part 2 of this series, homogeneous solutions are characterized by a low amplitude and a tendency to conserve wavenumber in the actual domain; Turing patterns have a medium amplitude and their spatial oscillations occur around a more or less fixed concentration; in the case of mixed mode spatial patterns it differs from Turing patterns in that they oscillate around a limit cycle and, finally, periodic solutions consist of limit cycles at each point without any predominant wave number in the domain.
As can be seen in figure 2, our theory allows us to predict that Turing patterns occur more broadly for domains that are shrinking and, in contrast, for growth they retain the same trend in the values of parameters than those where patterns occur in a fixed domain (\(r=0\)). Therefore, our scheme allows finding Turing patterns in growing domains even when the steady state changes over time.
Also in Fig. 3 we observe that the zone over the Turing region presents both mixed-mode solutions and temporally periodic solutions without a spatial pattern. The characteristics of all these solutions can be better observed in the spatiotemporal maps presented in Fig. 3. These maps replicate the points in Fig. 2.b and show the qualitative changes between the different types of solutions. It should be noted that these spatial maps are not on a single spatial or color scale and are presented only to illustrate qualitative differences.
The strictly temporal character of the conditions for the appearance of Turing patterns can be exemplified, for example, in the region close to \(r\approx 0.075\) and \(B\approx 2\) where, as illustrated in Fig. 3, A Turing pattern is not initially predicted, until later times. This can be corroborated numerically in Fig. 4, where we have plotted the behavior of the wavenumber and the amplitude of the pattern for three different parameters. As observed in the last case, the amplitude of the Turing pattern is zero and only occurs until a later time. This suggests the role that time has in the Turing conditions as a possible bifurcation parameter and which will be studied later in this series of works.
Finally, in Fig. 5, we show some average characteristics of the numerical solutions found. In Fig. 5.a, we find that the wavenumber in the actual domain, as in the exponential case, depends mainly on the growth parameter \(r\). Therefore, for growing, the wavenumbers are smaller than for the shrinking case. This manifests the tendency of evolving-domain systems to have wave numbers at the bottom/upper part of the instability range for growth/shrinkage, respectively. Therefore, it seems that this feature is a characteristic of dilution itself rather than a specific type of growth. On the other hand, in Fig. In fig. 5.b, we show the amplitude averaged over time. This figure corroborates that the
Figure 2: Turing conditions for linear growth/shrinkage. Left.- Predictions of our scheme for three different moments represented by the size of the green circle. Right) Results of our numerical simulations with H,T,M and P representing homogeneous, Turing, mixed-mode and time-periodic numerical solutions, respectively.
region predicted for the Turing structures does indeed have a profile similar to that of Fig. 2.a predicted by us. It also shows that the periodic solutions in the upper right part of 2.b arise due to the impossibility of maintaining a non-zero amplitude for values of \(r\gtrsim 0.05\).
## IV Discussion and Conclusions
In this work we have generalized the idea presented in Part 2 of this series of understanding the conditions for the formation of Turing patterns from the qualitative changes of a potential function of the linearized RDD problem. This has allowed us to hypothesize possible Turing conditions for domains that grow/decrease isotropically throughout their domain, and whose homogeneous state may depend on time.
These hypotheses were tested against the Brusselator-type RDD system with linear growth giving good comparisons between our predictions and the numerical simulations. These results, in addition to being evidence for our predictions, allows us to conclude that in linear growth the Turing region widens for shrinkage and changes slowly in the case of growth compared to a fixed domain system.
We also corroborate that the Turing conditions, for the case in which the homogeneous state depends on time also can depend on time, and therefore it may be the case that a pattern appears or disappears as the domain evolves.
Figure 4: Time-averaged wavenumber and amplitude for three different values of \(r\) given in the inset. \(B=2.0\) was set. Note the sudden appearance of pattern in the latter case.
Figure 3: Spatiotemporal maps of the numerical solutions presented in Fig. 2.a
As in the case of exponential, we corroborate that for linear growth/shrinkage, the wave number on average is smaller/larger than for the case of a fixed domain, respectively, demonstrating that this behavior of the wave number is more of an inherent property of a diluted system that changes in size, rather than the particular type of growth. We also find that the average amplitude of the patterns in the case of linear growth is similar to those obtained with exponential growth studied before.
We conclude that it is still necessary to compare the results of this work with the predictions made in previous works on Turing conditions for increasing domains, for example [6; 7]. However, we believe that the theory presented in this series offers a panoramic vision that allows us to understand the formation of spatial patterns in a general way.
|
2310.18367 | Unsupervised Learning of Molecular Embeddings for Enhanced Clustering
and Emergent Properties for Chemical Compounds | The detailed analysis of molecular structures and properties holds great
potential for drug development discovery through machine learning. Developing
an emergent property in the model to understand molecules would broaden the
horizons for development with a new computational tool. We introduce various
methods to detect and cluster chemical compounds based on their SMILES data.
Our first method, analyzing the graphical structures of chemical compounds
using embedding data, employs vector search to meet our threshold value. The
results yielded pronounced, concentrated clusters, and the method produced
favorable results in querying and understanding the compounds. We also used
natural language description embeddings stored in a vector database with
GPT3.5, which outperforms the base model. Thus, we introduce a similarity
search and clustering algorithm to aid in searching for and interacting with
molecules, enhancing efficiency in chemical exploration and enabling future
development of emergent properties in molecular property prediction models. | Jaiveer Gill, Ratul Chakraborty, Reetham Gubba, Amy Liu, Shrey Jain, Chirag Iyer, Obaid Khwaja, Saurav Kumar | 2023-10-25T18:00:24Z | http://arxiv.org/abs/2310.18367v1 | Unsupervised Learning of Molecular Embeddings for Enhanced Clustering and Emergent Properties for Chemical Compounds
###### Abstract
Molecular structure and property analysis plays a pivotal role in drug development, with significant potential for advancement through machine learning. The development of an emergent property model to decipher molecular intricacies offers a novel computational approach. In this study, we introduce methods for the detection and clustering of chemical compounds based on SMILES data. We designed and developed a similarity search algorithm that uses the Tanimoto coefficient and molecular fingerprinting to analyze graphical chemical structures. Additionally, we enhance existing LLMs using natural language description embeddings stored in a vector database. Our efficient similarity search and clustering algorithm resulted in distinct clusters exceeding a predefined threshold. This approach has the potential to pave the way for transformer-based drug design, offering researchers deeper insights into molecular properties through autoencoders and similarity search.
Contributing authors: [email protected]; [email protected];
[email protected]; [email protected];
[email protected]; [email protected];
[email protected]; [email protected];
## 1 Introduction
Despite a considerable amount of research towards synthesizing specific molecules, the fields of biochemistry and pharmacology still lack a machine learning approach that enables understanding of underlying properties in molecules. Such an approach would benefit various fields by allowing intelligent querying for molecules based on their properties and maintaining sustainability in developing novel solutions in future drug
discovery, and can rapidly advance how future researchers interact with molecular information. Natural language processing and embeddings can be used on both raw text data as well as graphical molecular data. This makes the creation of medical-grade drugs and chemicals not only easier but also more transparent.
Natural language queries are characterized by inputs that are entered as spoken or written language, but lack special and punctuation characters such as the '+' or '!'. These are processed by a large language model, using embeddings and large language models A variety of embedding techniques are used to transform the input text into a series of numbers for processing.
Current molecule searching and drug discovery methods do not give researchers the outright ability to analyze molecules using a reasoning methodology or to infer the properties of novel drugs. Instead, these current functions are limited in their capacity and not intuitive, relying on data that are not readily comprehensible, such as SMILES (Simplified Molecular Input Line Entry System), for marking specific queries, which are very complex and long strings of data unable to be consistently and thoroughly analyzed by most. As a result, essential similarities and properties for drug discovery are hidden within the complex formatting in both the graphical representation of these molecules as well as the natural language descriptions. Our aim through these methods is to develop models capable of developing emergent properties that enable machines to understand molecular properties and relations through either graphical data or descriptions, giving researchers easy access to certain properties and tools, and increasing understanding of molecules and making drug discovery a much faster process as a result.
Furthermore, in Rives et al, the authors use a transformer neural network to process the structures and functions of 86 billion amino acids and later use embeddings to represent protein sequences as points in higher dimensional spaces. Using a self-supervised model to apply protein data as unlabeled amino acid sequences allowed this transformer model to provide a characterization of the proteins and amino acids-the model could have representations that contain information about the biological properties of amino acids and proteins. This method created a deep language contextual model to advance the predictive and generative capabilities of artificial intelligence in biology. Our project builds upon this method, exploring its generative and predictive applications in biochemistry with chemical property prediction.
In Bran et al, the authors discuss "Augmenting large-language models with chemistry tools," and introduce a novel chemistry-based NLP model called ChemChrow. The authors in this text outline their ChemChrow model, which is an LLM-based chemistry agent meant to answer questions about chemistry-based queries using information from web scraped data as needed regarding the query. Their approach created a model based on GPT to aid in drug discovery and to help less knowledgeable researchers delve into the world of chemistry. Our project explores using this method to generate descriptions for each given molecule in our dataset to make the data richer to train our model against. In future research, we would also delve into using ChemChrow as a benchmark to evaluate our model's understanding of properties in molecules.
In Jumper et al, the paper "Highly Accurate Protein Structure Prediction with AlphaFold" published in Nature presents a transformative advancement in the field of molecular biology. The AlphaFold algorithm, developed by researchers at DeepMind, utilizes a deep-learning approach to predict protein structures with unprecedented accuracy. By leveraging a variant of the attention mechanism within transformer models, AlphaFold demonstrates remarkable proficiency in predicting three-dimensional protein structures. This innovation holds immense significance within our broader application of detecting properties in molecules. As the accurate prediction of protein structures is a pivotal step in understanding molecular behavior and function, the insights and methodology derived from AlphaFold's success could potentially serve as a cornerstone for enhancing our ability to develop emergent properties to understand various molecules.
## 2 Results
Our study introduced important methods and steps toward the advancement of research in molecular property prediction and the goal of developing emergent properties in models. Our similarity search algorithm, which employs vector search, was able to accurately identify and cluster chemical compounds based on their graphical input data. This algorithm can be used in the future in parallel with other transformer architectures as a molecular property prediction tool, potentially using a textual input or output to enrich the model. This would employ transformer architecture that can understand fundamental properties in molecules while still being able to understand textual relations. Our vector database of NLP embeddings or fine-tuned models can aid this process to potentially revolutionize the field of drug development, overall allowing researchers to more easily understand the underlying properties of molecules and develop novel solutions in future drug discovery.
One of the key impacts of our study is its potential to significantly improve the efficiency and accuracy of chemical exploration. By leveraging our similarity search algorithm and natural language description embeddings stored in a vector database with GPT-3.5, researchers can more easily search for molecules based on natural language queries and structural similarity. This can help to maintain sustainability in developing novel solutions for future drug discovery, and can rapidly advance how future researchers interact with molecular information.
Our algorithm for similarity search for molecules has the potential to be a powerful tool for scientists to research and develop new drugs and chemicals. By accurately identifying and clustering chemical compounds based on their SMILES data, our algorithm can help researchers to more easily understand the underlying properties of molecules and develop novel solutions in future drug discovery. By giving transformers an understandings of how structural similarities work, we also move closer to a transformer that understands the impact of structural differences in molecules. This can lead to the development of more effective and efficient drugs and chemicals, ultimately benefiting society as a whole and paving the way for future developments in drug discovery.
## 3 Natural Language Data Collection
While extensive chemical compound information is available online through the Pub-Chem dataset, it lacks descriptions suitable for robust training data for a large language model due to variations in length and information richness. To address this, we leveraged the PUG-View API, a REST-style web service provided by PubChem, to aggregate descriptions for each molecule in the dataset.
This effort resulted in the creation of a dataset comprising 328,000 compound descriptions, including SMILES data, chemical formulas, molecular weights, and other pertinent information for each compound. Although the dataset drawn solely from compounds with descriptions provided a reasonable size for research purposes, the average length of these descriptions was a mere 3.09 sentences, leaving room for improvement in fine-tuning.
Our initial attempt at enhancing the dataset involved developing a web scraper to extract relevant information about target molecules from reputable scientific journals to enrich the descriptions. Instead, we employed alternative methods to enhance the dataset and maximize potential accuracy scores for the final model. One such method involved concatenating the 'description' data with all other data categories in the dataset, such as molecular weight, molecular formula, and polar area. This resulted in an increase in the average length of each description by incorporating relevant information, while maintaining the richness of the descriptions. Additionally, the inclusion of molecular formula, polar area, and molecular weight contributed to enhancing the variability and uniqueness of each description in the dataset.
## 4 Graph Neural Network
Further enhancement of our data took place by adding blood-brain barrier permeability as a data point in our natural language description.
In the context of molecular representation, the GNN takes in a SMILES string and turns it into a graph where the nodes represent atoms and the edges represent bonds. Fig 1 shows the SMILES string of a caffeine molecule turned into an adjacency matrix. Every atom (node) has a feature vector, which represents the attributes of the atom. The GNN iteratively updates the feature vectors of nodes by aggregating information from their neighbors, known as message passing.
We used a GNN to enhance the dataset by increasing descriptions and as a benchmark for blood-brain barrier property prediction performance in the LLM, which will be discussed later. The blood-brain barrier (BBB) is a highly selective and semi-permeable boundary that separates the circulating blood from the brain and its
\begin{table}
\begin{tabular}{l c} \hline \hline
**Prompt** & **Response** \\ \hline Ten heaviest gases? & No info on ten heaviest gases. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Original Dataset Response
surrounding extracellular fluid, functioning as a protective barrier from potentially harmful substances while allowing essential nutrients and molecules to pass through. Thus, the controlled permeability of the BBB plays a pivotal role in maintaining brain health, and the permeability property of certain molecules is vital information in developing drugs meant to target the central nervous system.
To predict Blood Brain Barrier Permeability, we utilized the message-passing neural network (MPNN) model detailed by Keras. The MPNN is trained on the benchmark dataset developed by MoleculeNet, and it contains 2050 molecules that each come with a name, label, and SMILES string. The model reported a 96.28 percent AUC (Area Under ROC curve) after training and a 90.26 percent validation AUC after testing. We took the PubChem dataset and ran the SMILES string column through the MPNN to get permeability predictions for all 328k molecules. Because the MPPNN outputs a float value between 0 and 1, we converted these values into natural language descriptions by running the output data frame through an algorithm that assesses the value of the float and returns the permeability as a string sentence. These sentences were then concatenated to the molecules' respective descriptions already in the data frame, and this added more data to our dataset. Fig 1 visualizes a representation of the caffeine molecule as an adjacency matrix to graphically characterize the data for the GNN, a format easier for computation.
## 5 Fine Tuning
Using LLM fine-tuning, we were able to analyze the ability of language-based models to characterize molecules. We discuss our methods involving fine-tuning LLMs,
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline
**No.** & **Gas Name** & **Weight Type** & **Weight Value** \\ \hline
1 & Radon (Rn) & Atomic weight & 222.0 \\
2 & Uranium Hexafluoride (UF6) & Molecular weight & 352.0 \\
3 & Tungsten Hexafluoride (WF6) & Molecular weight & 297.8 \\
4 & Sulfur Hexafluoride (SF6) & Molecular weight & 146.1 \\
5 & Radium (Ra) & Atomic weight & 226.0 \\
6 & Plutonium Hexafluoride (PuF6) & Molecular weight & 329.0 \\
7 & Osmium (OsO4) & Tetroxide & Molecular weight & 254.2 \\
8 & Oganesson (Og) & Atomic weight & 294.0 Heaviest element) \\
9 & Nitro Oxide (N2O) & Molecular weight & 44.0 \\
10 & Xenon (Xe) & Atomic weight & 131.3 \\ \hline \end{tabular}
\end{table}
Table 2: Response to the prompt with the enhanced dataset
namely, LLaMA 2 and OpenAI. Our study aims to use textual data as the foundational training material for the finely tuned model. This augmentation of data plays a pivotal role in fine-tuning, as relevant data is very important to a successfully fine-tuned model. We curated a chemical dataset of 50,000 molecular descriptions, selected based on "richness" or quality of description for the LLM. We used the Hugging Face Transformers library along with the LLaMA 2 model and OpenAI's 'curie' model. The goal of this approach was to experiment with whether purely text-based large language models could form emergent properties regarding molecular relationships and structures.
### Fine-Tuning Parameters
For LLaMA 2, we leveraged the SFTTrainer component from the Text-to-Text Transfer Learning (TRL) library. We also used Parameter Efficient Fine-Tuning (PEFT) and Low Rank Adaptation (LoRA) in the fine-tuning process to lower the computational burden. This approach achieves performance comparable to full fine-tuning, while also significantly reducing GPU memory requirements. The application of PEFT and LoRA enhances the computational efficiency of LLaMA fine-tuning, making multiple iterations of fine-tuning a feasible avenue for optimal results. Curie's API-based fine-tuning method led to much less direct involvement with hyperparameters due to it being streamlined by OpenAI.
### Fine-Tuning Methods
We utilized several different methods of fine-tuning, all using LLaMA 2 7B and GPT-3 'curie' applied on many unique datasets. The first iteration involved tuning LLaMA
Figure 1: Example graph representation of molecular structure (caffeine)
2 on the data generated earlier using PubChem descriptions and properties. The entire dataset was not fine-tuned as most molecules and functional groups relevant to the model's understanding of chemistry lie in the first portion of the dataset.
\begin{tabular}{|c|c|} \hline
**Rank** & **"Liquid"** \\ \hline
1 & Mercury (Hg) \\ \hline
2 & Bromine (Br) \\ \hline
3 & Iodine (I) \\ \hline
4 & Chlorine (Cl) \\ \hline
5 & Sulfur (S) \\ \hline
6 & Phosphorus (P) \\ \hline
7 & Arsenic (As) \\ \hline
8 & Antimony (Sb) \\ \hline
9 & Bismuth (Bi) \\ \hline
10 & Tellurium (Te) \\ \hline \end{tabular}
search method for molecules. Furthermore, curie's increased number of parameters paired with further techniques can improve these results.
### Molecular LLM
The more important capability of fine-tuning would be creating an LLM capable of understanding the fundamental properties of what makes up a molecule, or a 'Molecular LLM.' This would result in it being capable of understanding the fundamental graphical structure and inter-relationships of a given molecule. To do this the model would not take any textual description as input, which in general goes against the purpose of an LLM. This would challenge large language models like LLaMA and curie to understand not textual patterns in the data but patterns in the molecular structure of the molecules. The first method acted simply as a base test and involved simply inputting the SMILES string into the LLM as the prompt for the LLM with the goal being the model predicting the Blood Brain Barrier Permeability of the molecule and competing with the GNN discussed earlier/ The goal was for the textually based transformer to recognize patterns in the molecules and bonds present in SMILES data and find a relationship between that and the target BBBP. However, because of the complex nature of the SMILES strings being very intricate and oftentimes long, as expected the LLMs (both curie and LLaMA) were unable to grasp the molecular intricacies and patterns in the SMILES data.
### NLP SMILES input
We converted the SMILES data to graphical data via the RDKit library and represented the output graphs in natural language form. The bond data and atomic data were extracted from the graphical representation and converted into a natural language prompt, displaying each of the bonds in the molecule and their respective atoms and
valency contributions. The following table shows an example SMILES NLP representation given the SMILES string input 'CC[N+](C)(C)CC1=CC=CC=C1Br', or the compound known as 'Bretylium,' an antiarrhythmic agent and norepinephrine release inhibitor. This was inputted into Curie, and initially, it yielded a result of 50 percent, which is equivalent to guessing and an accuracy that was not statistically relevant. However, this was using output data of the entire molecular description, including categories like molecular formula, molecular weight, polar area, etc. Although the model performed rather well on those categories, it was more focused on those and unable to grasp relationships between the input data and the Blood-Brain Barrier Penetration, a more complex property. The model was retrained on the same input data with the output data being strictly limited to BBBP. The model was subsequently run on a benchmark dataset of 2050 molecules referenced earlier in the GNN section. It resulted in 71 percent accuracy, which was a notable improvement from the previous iteration. This proved that the LLM understood some parts of the molecular structure of a given molecule and its effects on BBBP. However, an issue in this prediction data is it being heavily skewed toward one column of the data causing a level of bias toward predicting a positive result. This is not ideal because the goal is an even split in outputs.
### Adjacency Matrix
Another attempt was to input an adjacency matrix as the prompt for curie with the completion being BBBP. An adjacency matrix is a mathematical representation used primarily in graph theory to describe the relationships between nodes in a graph. Adjacency matrices are used in various graphical applications, in this case regarding molecules. They provide a convenient and structured way to represent the relationships within a graph, making it easier to analyze and manipulate graph-based data. In this context adjacency matrices were used to convert the graphical representation of molecules with atoms and bonds being nodes and edges into the adjacency matrix.
This was inputted as a string into the LLM to see if it could understand this direct graphical data. The output result was far from ideal, and it proved that directly graph-based data represented in forms such as the adjacency matrix are unlikely to be directly decipherable by a large language model like curie or LLaMA. An adjacency matrix would simply be a matrix of generally smaller integers representing the given molecule. Instead, future attempts at creating this type of "molecular LLM" would include using a network based on graph transformers to calculate relationships between graphical data more efficiently than in regular LLMs, but still relate that to textual inputs or outputs. This ability to relate molecular and textual data would be greatly beneficial in the field of drug discovery.
### 'Curie' Fine-Tune
GPT 3's 'curie' model was fine-tuned on the PubChem descriptions from the final concatenated dataset including molecular formula, molecular weight, polar area, hydrogen bond donors, and blood-brain barrier permeability. Because curie contains nearly twice the number of parameters as LLaMA 2 7B, the fine-tuned model was expected to be deeper and more able to pick up intricacies in the training data. Though the results didn't yet compare with the GPT-3 model using a vector database of embeddings as context (a method discussed more thoroughly later), it proved its basic understanding of molecular properties and their relation to each other in similar compound groups. With more refined data in the future, the model can be improved in the future. Using the current data it is not as accurate but with a scaled approach and more heavily curated data it can become a great method for understanding molecular properties and their relations.
## 6 Embeddings with NLP data
This section builds on fine-tuning with a more practical approach to natural language data. We embedded the textual data into high-dimensional vectors to create an ability to compare semantic descriptions of molecules and cluster them based on those similarities. These embeddings serve a dual purpose, enabling both data visualization and acting as input to encoder-decoder structures in large language models (LLMs) like GPT and Llama. To achieve this, various embedding models were employed, including three BERT-based models and one from OpenAI.
The first model utilized was Base BERT uncased, along with Arzington BERT and Chemical BERT. Multiple embedding models allowed for the encoding of the data in different ways, capturing various aspects of its semantics. As shown in Figure 6, an example visualization of these embeddings using the BERT model, with dimensions reduced through the t-SNE algorithm (t-Distributed Stochastic Neighbor Embedding), can be observed. t-SNE is particularly valuable for visualizing high-dimensional data, such as word embeddings generated by models like BERT. BERT embeddings typically reside in a high-dimensional space (e.g., 768 dimensions), making it challenging to directly visualize and interpret relationships between words or sentences. t-SNE
projects these embeddings into a lower-dimensional space, providing a 2D visual representation of semantic relationships and clusters while preserving the underlying data and relationships from the higher dimensions.
These lower-dimensional representations visually depict the semantic meanings of the descriptions relative to each other. Moreover, basic queries can be compared using cosine similarity across the entire high-dimensional vector database, enabling efficient data sorting and querying. Fig 2 shows the t-SNE embeddings visually represented.
We stored and vectorized the embedding data as "context" for an LLM, specifically GPT-3.5-turbo. The BERT embeddings served as context for the model. The BERT embeddings were saved to a vector database using DeepLake and Langchain libraries. The encoded data, combined with user prompts, resulted in human language output generated by the model, incorporating contextual information from the compound descriptions in PubChem. This approach represents a form of prompt engineering using a vast dataset of embeddings as context, demonstrating significant improvements over simple search algorithms through the vector database and the base GPT-3.5 model.
The LLM utilizing description embeddings also exhibited the ability to generalize to information not explicitly stated in the description dataset, providing additional data. This suggests that the LLM can intelligently search through the chemical dataset via natural language prompts. Running the same LLM with a more strictly curated dataset (only the larger 100,000 descriptions from the original 328,000) yielded similar results, although not statistically significant enough to warrant exclusive use of the second method. However, it did reduce the size of embeddings, which in turn reduced computational power requirements and API costs proportionally.
Figure 2: t-SNE Visualization of BERT Embeddings
This method produced improved results compared to fine-tuning in being able to understand molecular properties and relationships, and it laid the foundation for comparisons with the vector search method later in the paper.
### Benchmark Data for LLM
We created a benchmark dataset to measure the search capabilities of methods such as embedding vector search, embeddings with GPT, and a fine-tuned Llama 2 model. An ideal benchmark would consist of questions posed to the model, encompassing various properties of a given molecule. The model would then be evaluated based on the accuracy and fluency of its responses. However, the model must not possess prior knowledge of the description or question to prevent it from recognizing phrases and "cheating" to obtain answers. To overcome this challenge, descriptions of molecules were considered for scraping from Wikipedia to use as input for the model. However, simply inputting this data into the model would be problematic, as LLMs like GPT and Llama have substantial knowledge of Wikipedia from their training data, potentially recognizing phrases. To address this, summaries of the data were generated using a model, and these summaries were applied to each Wikipedia description to obtain a general summary without copying phrases directly.
This process involved using 1,000 descriptions sourced from PubChem, along with 1,000 descriptions present in PubChem but not included in the training dataset. The resulting benchmark dataset comprises 2,000 benchmark molecules, with 1,000 drawn from the embeddings and 1,000 excluded. This dataset served as the benchmark for evaluating the performance of the LLMs, primarily comparing the results of the GPT-3.5 base model and the implementation with GPT-3.5 drawing context from the vector database of embeddings defined earlier. Cosine similarity was employed as the metric for comparing model-generated outputs with desired outputs in the benchmark dataset.
### Results for LLM Model
Using the benchmark defined previously, an evaluation of the GPT base model and the GPT model utilizing the BERT embeddings vector database was conducted. Both models underwent evaluation on the entire benchmark dataset and were queried with a specified molecule to generate a response containing the description of that molecule. Additionally, they were presented with the same prompt via prompt engineering, querying about the fundamental properties of the molecule, such as density, solubility, and appearance. This approach differed from the previous method, which queried for a specific molecule with its description provided as input, as it was deemed too variable, with multiple possible "correct" answers for a single description query, potentially leading to unwanted ambiguity in accuracy calculations. The results, evaluated using semantic similarity, are depicted in Fig 4.
## 7 Vector Search
### Overview
The vector search algorithm diverges from previous methods and delves into clustering and finding relationships in non-textual data in molecules. It finds similarities between a given query molecule and the molecules in the SMILES dataset provided by PubChem. It achieves this by first clustering fingerprints of the data based on numerical measures of molecular similarity, and then returning the cluster closest to a given molecule when evaluating based on the Mahalanobis distance metric.
Due to computational restrictions, the algorithm was only run on the first 1000 molecules from the SMILES dataset for the hyperparameter tuning and testing portions of our project. We ran the algorithm on the first 1200 molecules for clustering and cluster visualizations as pictured in our results section, as well as a visualization for the first 2000 molecules.
### Embeddings with SMILES data
This section dives into ignoring semantic relationships between the words in each description; it does not take any textual descriptions as input. Instead, it is just given a graphical representation of SMILES data to learn emergent properties that define certain characteristics. Embeddings Using SimCLR The (SimCLR) architecture is a powerful deep-learning framework designed for generating meaningful representations from raw data. We utilized SimCLR to create embeddings of RDKit Morgan fingerprints derived from SMILES data, which encode molecular structures. SimCLR employs a Siamese network structure, where two identical subnetworks share the same
Figure 3: GPT Embeddings Context results
weights. It learns by maximizing the similarity between positive pairs (samples from the same data point) while minimizing the similarity between negative pairs (samples from different data points). We use the cosine similarity function to do this in our case. This process encourages the network to learn high-level features that capture the intrinsic structure of the input data, making it well-suited for generating informative embeddings of molecular fingerprints. Fig 7 shows a 2D visualization of the SimCLR embeddings using the t-SNE method described earlier.
These embeddings can be valuable for tasks such as molecular similarity analysis, compound screening, and drug discovery, as they encode the underlying chemical properties and relationships between molecules in a continuous vector space. Using the SimCLR architecture, we extracted the embeddings of the SMILES data for each compound within the PubChem dataset. We used a pre-trained base encoder, Resnet, as the backbone of our architecture. Another crucial aspect of the SimCLR architecture is the custom contrastive loss function, which aims to maximize the agreement between positive pairs while minimizing the agreement between negative pairs. This process tells the encoder to map similar molecules closer together in the embedding space and push dissimilar molecules apart.
### Fingerprinting and Clustering
We use MACCS (Molecular ACCess Systems) fingerprinting, a commonly used molecular fingerprint found in the RDkit python library, in order to find similarities between given molecules. The inputs to the MACCS fingerprinting function are given by SMILES strings of given molecules. We utilize a Tanimoto similarity calculation with this metric to find similarity between two candidate molecules, which is then translated into a distance between them. This is then entered as a parameter into a matrix of given distances between pairs of molecules.
We utilize t-SNE as a dimensionality reducer, where each row represents a given query molecule, and each column describes the Tanimoto similarity between that
Figure 4: SimCLR Embeddings Visualization
molecule and the others within the dataset as a feature of that molecule. We chose t-SNE because it preserves local structures within data, especially relationships between points that lie close together, making it fit for clustering tasks such as this. Our visualization, clustering, and query function use standardized (z-score normalized) forms of the data returned from the t-SNE algorithm to ensure that each feature contributes equally to similarity calculations and that outliers or large coordinates do not interfere with clustering. As an added bonus, standardization also ensures visualizations can be done with ease.
We employ affinity propagation clustering to sort our data into groups on the basis of molecular similarity. We chose affinity propagation over other clustering algorithms due to its ability to sort without a given number of clusters as a parameter, as is the case with k-means and other clustering algorithms, thus preserving inherent patterns present in the data when visualizing and finding the closest cluster. Fig 5 visualizes the clusters of our molecules.
Figure 5: Vector Search clustering
### Query Function
Our query function utilizes the Mahalanobis distance metric to find the closest cluster to a given query point. The Mahalanobis distance is a generalized form of the Euclidean distance metric and finds the distance between a value and a distribution of points. The Mahalanobis distance also takes into account the relationship between the parameters when looking at multivariate data, and takes into account the variability within the data to return the number of standard deviations away from the mean at which a given data point is located.
We chose the Mahalanobis distance metric for these particular properties. The distances produced by the compute_tanimoto_distances function within our code are the basis of molecules related to the query and do not take into account the fact that each of the parameter molecules may be related to each other as well. The usage of the Mahalanobis distance metric accounted for this factor and returned a statistically significant result of the similarity between a given query point and the clusters.
### Tanimoto Accuracy Calculation
We measure the accuracy of our clustering model by implementing Tanimoto accuracy calculation between the molecules of each given cluster. Using the query function, we input precise values to get specific clusters as the closest cluster, and then implement a Tanimoto accuracy calculation upon them, using MACCS fingerprinting. In addition, we calculate accuracy via summary statistics, as pictured in the results section.
### Time Complexity for Vector Search
The visualization which showcases all the clusters and where they are centered took about 8-10 minutes to retrieve. The data preprocessing, including the one-hot encoding, and computing distances, can be computationally intensive especially when working with a large dataset of 1000-2000 long SMILE strings. Along with this, the t-SNE used for dimensionality reduction can be time-consuming with large datasets due to the pairwise computations and optimization it involves to find the best low-dimensional representation.
The affinity propagation clustering algorithm applied to the reduced-dimensional data is computationally expensive, especially since the algorithm needs to iterate many times to converge. The visualization, although may not be computationally intensive by itself, can add to the overall runtime.
The visualizations represent the query points and the summary statistics took about 1 minute to retrieve. The time to retrieve this is relatively faster due to the smaller size of the input data. This is visualizing the Tanimoto coefficients for a subset of molecules from the closest cluster, which is expected to be much smaller than the original dataset. Additionally, generating a bar chart is less computationally expensive compared to dimensionality reduction and clustering. Lastly, these visualizations require no iterative algorithms or complex calculations, which reduces its runtime.
### Vector Search Results
Our method describes how clustering algorithms, when modified with chemical fingerprinting, can describe similarity between molecules to a certain extent. Though this does not describe emergent properties of understanding molecular design and properties by a model, it analyzes molecular similarity from another angle, particularly the usefulness of clustering algorithms when querying based on similarity. The algorithm uses SMILES data to perform similarity search to get an idea of what molecular similarity might look like based on related molecules. This is an alternative method to search for molecular properties; rather than using prompt-based queries, we use Tanimoto similarities as distances to cluster molecules, while leaving open the possibility for custom embeddings of the data to improve upon our accuracy. Given a numerical representation of a molecule within our framework, the vector search model is then able to return the cluster of molecules that describes the molecules most similar to it, an essential component of understanding the underlying properties that make molecules similar.
Numerically, our findings satisfactorily met the key threshold value of 0.6. For most of the representative clusters below, our model met the threshold. The results were more highly pronounced with concentrated clusters, achieving the highest accuracy with the cluster centered at [-2.0,1.0].
For a majority of non-representative clusters, we met the threshold of 0.55. Below we have pictured summary statistics and representative clusters for a variety of point queries to our model.
|
2310.17054 | BOOST: Harnessing Black-Box Control to Boost Commonsense in LMs'
Generation | Large language models (LLMs) such as GPT-3 have demonstrated a strong
capability to generate coherent and contextually relevant text. However, amidst
their successes, a crucial issue persists: their generated outputs still lack
commonsense at times. Moreover, fine-tuning the entire LLM towards more
commonsensical outputs is computationally expensive if not infeasible. In this
paper, we present a computation-efficient framework that steers a frozen
Pre-Trained Language Model (PTLM) towards more commonsensical generation (i.e.,
producing a plausible output that incorporates a list of concepts in a
meaningful way). Specifically, we first construct a reference-free evaluator
that assigns a sentence with a commonsensical score by grounding the sentence
to a dynamic commonsense knowledge base from four different relational aspects.
We then use the scorer as the oracle for commonsense knowledge, and extend the
controllable generation method called NADO to train an auxiliary head that
guides a fixed PTLM to better satisfy the oracle. We test our framework on a
series of GPT-2-, Flan-T5-, and Alpaca-based language models (LMs) on two
constrained concept-to-sentence benchmarks. Human evaluation results
demonstrate that our method consistently leads to the most commonsensical
outputs. | Yufei Tian, Felix Zhang, Nanyun Peng | 2023-10-25T23:32:12Z | http://arxiv.org/abs/2310.17054v1 | # Boost: Harnessing Black-Box Control to Boost Commonsense in LMs' Generation
###### Abstract
Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text. However, amidst their successes, a crucial issue persists: their generated outputs still lack commonsense at times. However, fine-tuning the entire LLM towards more commonsensical outputs is computationally expensive if not infeasible. In this paper, we present a computation-efficient framework that steers a frozen Pre-Trained Language Model (PTLM) towards more commonsensical generation (i.e., producing a meaningful and plausible output that incorporates a list of concepts). Specifically, we first construct a reference-free evaluator that assigns a sentence with a commonsensical score by grounding the sentence to a dynamic commonsense knowledge base from four different relational aspects. We then use the scorer as the oracle for commonsense knowledge, and extend the controllable generation method called NADO to train an auxiliary head that guides a fixed PTLM to better satisfy the oracle. We test our framework on a series of GPT-2-, FLAN-T5- and Alpaca-based language models (LMs) on two constrained concept-to-sentence benchmarks. Human evaluation results demonstrate that our method consistently leads to the most commonsensical outputs.1
Footnote 1: Source code available at [https://github.com/PlusLabNLP/B005T_EMNLP23](https://github.com/PlusLabNLP/B005T_EMNLP23)
## 1 Introduction
Recent years have witnessed remarkable progress in massively Pre-Trained Language Models such as GPT-3 Brown et al. (2020), Llama Touvron et al. (2023) and instruction following models such as Flan-T5 Chung et al. (2022), ChatGPT OpenAI (2022), and Alpaca Taori et al. (2023). However, one significant drawback is the lack of commonsense knowledge in their generated texts. There have been criticisms around their commonsense importance Marcus (2020); Elazar et al. (2021); Madowald et al. (2023), and a discrepancy in what LLMs generate in the wild versus in question answering Chen et al. (2023).
In this paper, we explore the task of generative commonsense reasoning: a constrained text generation task aiming to generate a plausible sentence given a list of concepts as input. As depicted in Figure 1, language models should generate a sentence that incorporates 'open, hand, oyster, glove' in a meaningful way that aligns with our commonsense. We
Figure 1: LMs such as GPT-2 finetuned, Alpaca-7b fewshot, and GPT-3 Davinci-003 fail to incorporate the concepts in a commonsensical way. We highlight the insensible phrases in purple. (c) illustrates that they are also vulnerable to perturbations of the input prompt as simple as the swap of two concept positions. Our system which uses an auxiliary model to steer a **frozen** PTLM generates the most commonsensical outputs.
unreliable and fail to generate commonsensical outputs_ when the input concepts get complicated. In another case depicted in Figure 1(c), when we swap the position of two input concepts 'customer' and 'employee', LLMs such as Davinci-003 are vulnerable to the change and generate _'employee watched a customer prepare food'_ despite being instructed to not consider the concept appearance order, which is far from plausible.
Various knowledge-augmented systems have been previously proposed to incorporate external knowledge into the model Liu et al. (2021); He et al. (2022) for more plausible generation outputs. However, they all require updating model weights at the scale of hundreds millions of parameters such as BART Lewis et al. (2020). As PTLMs continue to evolve and scale up to hundreds of billions of parameters in size, finetuning the entire LM becomes computationally prohibited for many parties in academia and the industry.
In this work, we propose Boost, a framework to boost the commonsense of PLTMs' generation in a plug-and-play manner (Figure 2), which is inspired by the recent development of controllable generation to use a small auxiliary model to control a PTLM by training on its _self-generated_ samples Meng et al. (2022). Specifically, to better integrate commonsense knowledge, we first build a scorer that evaluates how commonsensical a sentence is. The commonsense scorer, called \(\mathcal{O}\)-Scorer, extracts tuples of commonsense-related concepts (e.g., <customers, prepare their food>) from a sentence, and scores the extracted tuples by grounding the tuples to a dynamic commonsense knowledge base (CSKB) Bosselut et al. (2019); Ghazarian et al. (2023). Next, we use the signal from the \(\mathcal{O}\)-Scorer to train an auxiliary model that steers the PTLM toward more commonsensical outputs. Note that our training process is generalizable and only requires access to the output probability of the PTLMs, which is also efficient due to the smaller size of the auxiliary model.
We test our method on gpt-2, Alpaca, and Flan-T5 on two datasets: 1) CommonGen Lin et al. (2020) that focuses on daily concepts (e.g., <open, hand, oyster, glove>) and 2) CSK-PN Chen et al. (2023) that contains concepts linked with negated commonsense relations (e.g. <wear sum-glasses, at night>).
Our contributions are two-fold. First, we propose a reference-free evaluator to assess how commonsensical a sentence is, which achieves on-par performance with referenced-based metrics such as BERTScore Zhang et al. (2019) in terms of correlation with human judgment. Second, we extend a controllable generation approach to improve commonsense for black-box PTLM. Experimental results show that our method consistently results in the most commonsensical outputs.
## 2 Methodology
### Overview
Figure 2 provides an overview of our approach, Boost. During training, Boost first generate numerous samples (\(\mathbf{y}^{1},...,\mathbf{y}^{N}\)) from the PTLM conditioned on the input constraint \(\mathbf{x}\) (e.g., 'lasso horse cow'). We then construct an oracle to give commonsense scores on all of these self-sampled generations. Next, for each \(\mathbf{y}^{i}\) of length \(T_{i}\), we train the auxiliary model called NADO which essentially learns to predict the expected cs score of the complete sequence \(\mathbf{y}^{i}\) given \(\mathbf{x}\) and an incomplete sequence \(\mathbf{y}^{i}_{<t}\) (\(t\in[1,2,...,T_{i}]\)). The flow at inference time is illustrated in dashed lines: both the PTLM and NADO take \(\mathbf{x}\) and the generated sequence (prefix) \(\mathbf{y}_{<L}\) as input, from which we
Figure 2: The process of Boost to steer a frozen PTLM with an additional neural model and oracle commonsense scorer. The solid lines indicate the training process, while the dashed lines indicate inference. In practice, we combine our commonsense scorer with lexical checking rules, and use the joint signal to train the auxiliary model.
obtain the final output distribution \(q(\mathbf{y}|\mathbf{x})\).
The rest of this section is organized as follows. In SS2.2, we first introduce details to construct the commonsense scorer. Then, in SS2.3, we provide the theory and practices to train the auxiliary model on PTLM's self-generated data towards the oracle.
### Constructing Commonsense Scorer
We use commonsense relation tuples as the intermediate representation of a sentence. Specifically, we get rid of human annotation and leverage on the results of few-shot LLMs. We then check whether these extracted tuples are sensible. Specifically, we assign each parsed tuple with a compatibility score based on its maximum similarity with the numerous valid accepted answers generated by COMET, a dynamic commonsense knowledge base (CSKB). Scores for all tuples in a target sentence are then aggregated to obtain the sentence-level commonsense score. Figure 3 provides an illustration of our oracle scorer.
#### 2.2.1 Commonsense-Relation Extraction
Tuple FormatWe leverage the format of ConceptNet (Speer et al., 2017), a widely used knowledge graph connecting concepts or events with commonsense relations to represent general world knowledge. Specifically, each tuple \(\mathcal{T}\) contains a head concept/event \(h\) (e.g., _driller_) and a tail concept/event \(t\) (e.g., _drill a hole_), which are connected through a commonsensical relation \(r\) (e.g., _is Used For_). We consider four crucial relation types that dominantly exist: _is UsedFor, is Capable Of, is At Location_, and _is Part Of_.
Tuple ExtractionWe present a labor and cost efficient way to extract all tuples from a target sentence, including both commonsensical and nonsensical tuples. LLMs such as GPT-3 and ChatGPT (Brown et al., 2020; Ouyang et al., 2022) have demonstrated remarkable ability of few-shot in-context learning for semantic parsing tasks (Dong and Lapata, 2016; Dunn et al., 2022). Motivated by such progress, instead of asking human workers to annotate a training set of sentences, we leverage OpenAI's GPT3.5-Turbo model to parse the relevant tuples. We hand-crafted 9 examples for our few-shot prompt such that the LLM can accurately extract both sensical tuples (e.g., a girl _is Capable Of_ blowing candles) and nonsensical tuples (e.g., horse _is Capable Of_ riding bikes) from the input sentence. The complete instruction and prompt can be found in Appendix A.
However, in practice, using GPT-3.5-Turbo to parse _all_ sentences needed to train our auxiliary model is costly and unreliable when dependent on the unpredictable traffic of OpenAI's API. To obtain an extractor that can parse \(\sim\) a million sentences at a reasonable cost, we finetune a T5 large model (Raffel et al., 2020) on 6,000 GPT-3.5 annotated sentences for the same task. We show the performance of both tuple extractors in SS3.2.
#### 2.2.2 Generative Commonsense Scoring
After extracting relation tuples from a sentence, we need to assess how commonsensical they are. To this end, we follow the compatibility test proposed by Ghazarian et al. (2023) and leverage COMET (Bosselut et al., 2019), a pre-trained generative commonsense transformer that can predict sensible tails given the heads and relations as input. Compared to other fixed and predefined knowledge bases, COMET is dynamic and much more flexible when dealing with original and unseen inputs.
Formally, given a tuple \(\mathcal{T}_{i}=(h_{i},r_{i},t_{i})\) and a dynamic CSKB denoted by \(\mathcal{C}_{dy}\), we query \(\mathcal{C}_{dy}\) with the head \(h\) and relation \(r\) to obtain a diverse list of conditionally generated tails with beam decoding: \(\{t_{j}^{*}\}_{j=1}^{k}=\mathcal{C}_{dy}(h_{i},r_{i},beam=k)\). The commonsense score for \(\mathcal{T}\) is computed by
\[\text{COMPAT}(\mathcal{T}_{i}|\mathcal{C}_{dy})=\max_{1\leq i\leq k}\cos( \text{emb}(t_{i}),\text{emb}(t_{j}^{*})), \tag{1}\]
where \(emb(\cdot)\) is the vector representation from a sentence embedding model (Reimers and Gurevych, 2019). Finally, we need to aggregate the compatibility scores computed from different
Figure 3: An example of our oracle commonsense scorer. We first extract tuples from a target sentence, and assign each extracted tuple with a commonsensical score using COMET (Bosselut et al., 2019), a dynamic commonsense knowledge base. The sentence-level score is then obtained by aggregating tuple-level scores.
triplets extracted from a single sentence. The sentence-level commonsense score is denoted as the \(\mathcal{O}\)-score. One rationale is that a single nonsensical tuple can result in a nonsensical sentence, while the other is that one mistake will be mitigated by other reasonable tuples. We hence take the 1) minimum and 2) average compatibility scores, and study their correlation with human judgement in SS3.3 and Table 2.
### Commonsense-Guided Generation
In this subsection, we describe how we use our derived commonsense oracle to steer the PTL) toward more commonsensical outputs through a neurally-decomposed head (NADO). In SS2.3.1, we summarize the theoretical solution of Meng et al. (2022) to decompose the sequence-level oracle into token-level guidance with a _frozen_ PTLM, such that when generating the \(i\)-th token, the auxiliary neural network modifies the original output logits predicted by the PTLM. Then, in SS2.3.2, we leverage this method to generate more commonsensical outputs. Note that our model only trains the _additional_ NADO head which has much smaller size than the PTLM and does not require access to the parameters inside the PTLM.
#### 2.3.1 Token-Level Guidance with NADO
NotationSuppose we have a sub-optimal PTLM \(p(\mathbf{y}_{t=T^{\prime}}|\mathbf{x},\mathbf{y}_{t<T^{\prime}})\), our goal is to obtain an optimal auto-regressive model \(q\) from \(p\) such that \(q\) generates outputs better satisfying the oracle scorer \(\mathcal{O}\) (for example, \(q\)'s generated outputs achieve higher \(\mathcal{O}\)-scores than \(p\)). We now define a **predictive function**\(R^{\mathcal{O}}(\mathbf{x},\mathbf{y}_{t<T^{\prime}})\) that predicts the _expected \(\mathcal{O}\)-scores_ of the complete sequence \(\mathbf{y}\) given input \(\mathbf{x}\) and the currently generated tokens \(\mathbf{y}_{t<T^{\prime}}\).
\[R^{\mathcal{O}}\left(\mathbf{x},\mathbf{y}_{t<T^{\prime}}\right) =\mathrm{Exp}_{\mathbf{y}\sim p(\mathbf{y}|\mathbf{x})}\left[ \mathcal{O}(\mathbf{x},\mathbf{y})\mid\mathbf{y}_{<T^{\prime}}\right] \tag{2}\] \[=\sum_{\mathbf{y}\in\mathcal{Y}}p\left(\mathbf{y}\mid\mathbf{x}, \mathbf{y}_{t<T^{\prime}}\right)\mathcal{O}(\mathbf{x},\mathbf{y}) \tag{3}\]
SolutionThe unique closed formed solution of the optimal \(q\) is (namely, generates most commonsically according to \(\mathcal{O}\)):
\[q^{*}\left(y_{=T^{\prime}}\mid\mathbf{x},\mathbf{y}_{<T^{\prime}}\right)= \frac{R^{\mathcal{O}}\left(\mathbf{x},\mathbf{y}_{\leq T^{\prime}}\right)}{R^ {\mathcal{O}}\left(\mathbf{x},\mathbf{y}_{\leq T^{\prime}-1}\right)}p\left(y_ {=T^{\prime}}\mid\mathbf{x},\mathbf{y}_{<T^{\prime}}\right) \tag{4}\]
Please refer to Meng et al. (2022) for details of the proof. From Eq.4 we see that when both \(\mathbf{x}\) and \(\mathbf{y}_{t<T^{\prime}}\) are fixed, the optimal auto-regressive model is factorized into \(R^{\mathcal{O}}\) and \(p\) at step \(T^{\prime}\):
\[q^{*}\left(y_{T^{\prime}}\mid\mathbf{x},\mathbf{y}_{<T^{\prime}}\right)\propto R ^{\mathcal{O}}\left(\mathbf{x},\mathbf{y}_{\leq T^{\prime}}\right)\cdot p \left(y_{T^{\prime}}\mid\mathbf{x},\mathbf{y}_{<T^{\prime}}\right) \tag{5}\]
ApproximationAs we cannot enumerate \(\mathcal{Y}\) that contains infinite number of sequences, the well-defined \(R^{\mathcal{O}}\) is intractable. A neural model called NADO is hence introduced to approximate \(R^{\mathcal{O}}\), by training on numerous samples \(\mathcal{Y}\) generated by \(p\).
#### 2.3.2 NADO-Guided Generation
Given a pre-trained language model \(p\) such as the GPT-2 and Alpaca model, we first ask \(p\) to generate numerous samples to obtain an approximation of \(\mathcal{Y}\) with various inputs concepts \(\mathbf{x}\in\mathcal{X}\). We then use the oracle \(\mathcal{O}\) to assign each sample a score, which is used to train the NADO model.
TrainingDuring training, the NADO model takes \(\mathbf{x},\mathbf{y}\) as input, and learns to predict from \(R^{\mathcal{O}}(\mathbf{x},\mathbf{y}_{t=0})\) till \(R^{\mathcal{O}}(\mathbf{x},\mathbf{y}_{t\leq T})\). Here, \(T\) is the complete sequence length and the sentence-level value \(\mathcal{O}(\mathbf{x},\mathbf{y})\) is used as the labels for all steps, from \(t=0\) till \(t=T\). We emphasize that in order for \(\mathcal{O}\) to learn \(R^{\mathcal{O}}\) successfully, all \((\mathbf{x},\mathbf{y})\) pairs must be self-sampled by the base model \(p\) instead of come from the CommonGen training data.
We use cross entropy loss as the objective function. Given a particular input \(\mathbf{x}\), the cross entropy loss is
\[\mathcal{L}_{CE}(\mathbf{x}) =\sum_{\mathbf{y}\in\mathcal{Y}}p(\mathbf{y}\mid\mathbf{x})L_{CE }\left(\mathbf{x},\mathbf{y},R^{\mathcal{O}}\right) \tag{6}\] \[=\sum_{i=0}^{T}CE\left(R^{\mathcal{O}}\left(\mathbf{x},\mathbf{y} _{\leq i}\right),\mathcal{O}(\mathbf{x},\mathbf{y}_{\leq i})\right)\]
In practice, we also add a regularization term to the loss. In order to satisfy the definition that \(\sum_{y_{i}}R^{\mathcal{O}}\left(\mathbf{x},\mathbf{y}_{\leq i}\right)p\left(y _{i}\mid\mathbf{x},\mathbf{y}_{<i}\right)=R^{\mathcal{O}}\left(\mathbf{x}, \mathbf{y}_{\leq i-1}\right)\), our regularization loss is measured by the KL divergence of the following:
\[\mathcal{L}_{reg}(\mathbf{x})=KL\left(\sum_{y_{i}}R^{\mathcal{O}}\left( \mathbf{x},\mathbf{y}_{\leq i}\right)\cdot p\left(y_{i}\mid\mathbf{x}, \mathbf{y}_{<i}\right),\right. \tag{7}\]
\[R^{\mathcal{O}}\left(\mathbf{x},\mathbf{y}_{\leq i-1}\right)) \tag{8}\]
Then, the final training loss is \(\mathcal{L}_{CE}(\mathbf{x})+\lambda\mathcal{L}_{reg}(\mathbf{x})\), where \(\lambda\) is a hyper-parameter to balance these two
terms. In practice, we use grid search and choose the best \(\lambda\) from [0.1, 0.5, 1.0].
InferenceAt inference time, there are two forward passes as shown in Eq.5 and Figure 2. The decoding efficiency roughly remains unchanged because the NADO head has much smaller size than the base PTLM.
## 3 Experimental Results for the Oracle
In this section, we show the results of the commonsense scorer described in SS2.2. The experiments and results of commonsense-guided generation (SS2.3) can be found in SS4 and SS5.
### Tuple Extraction Data
We use the GPT-3.5-Turbo model provided by OpenAI to extract the tuples of 6,000 sentences (with a total cost of $12.4), based on which we train the T5-large based tuple extractor. Since our goal is to parse all possible commonsense tuples whether they are sensical or not, we need both sensical and less reasonable sentences. To this end, we randomly select 3,000 sentences from the CommonGen Lin et al. (2020) train split (we consider them as more sensical) and another 3,000 sampled from a basic gpt-2 model (we consider them as less coherent and sensical).
### Tuple Extractor Results
Following the rationale in SS3.1, we study the benefit brought by augmenting the training data with tuples extracted from less coherent and sensical sentences. Specifically, we compare the following three settings: 1) base: trained on the 3,000 sensical sentences; 2) aug: trained on 1,500 sensical sentences and 1,500 less sensical sentences; 3) all: trained on all 6,000 sentences. We test the model performance on a held-out set of 350 sentences that is mix of both types. To obtain the gold labels on the test set, we start with the few-shot GPT-3.5's annotation. After that, two human annotators iteratively checked and fixed any error they see.
For each relation type, we report the average f1-score in Table 1. Here, if the lemmatized tokens in a generated triplet has over 50% overlap with those in the ground-truth triplet, we consider it as correct. Otherwise, we consider it as wrong. Comparing T5-Large aug with T5-Large base in Table 1, we see improvements across all four relation types. Besides, increasing the train data size also boosts the extractor's performance. We also notice that our extractors perform worse on _UsedFor_ and _CapableOf_ than on _AtLocation_ and _PartOf_, which is partially due to the errors of the training signal (i.e., labels are inaccurately annotated by GPT-3.5).
### Oracle Commonsense Scorer Results
To compute the machine-generated compatibility score in Eq.1, we set beam size \(k=128\). Meanwhile, we instruct human annotators to evaluate the target sentences on how commonsensical they are. Each sentence is annotated by 3 workers with a scale of 1 (least) to 4 (best). We also ask every annotator to specify which part of the target sentence is nonsensical. We find out that explicitly asking the workers to pay detailed attention and point out the erroneous parts helps to increase the inter annotator agreement (IAA, measured by Spearman's correlation) from 0.56 to 0.67. The final sentence-level commonsense score annotated by humans is the average of 3 individual ratings.
Table 2 shows the correlations between human ratings and automatic scores. For our proposed \(\mathcal{O}\)-Score, we report the correlations of taking the minimum (min) and average (mean) of all tuple-level compatibility scores. Taking the average consistently result in higher correlation, reflecting that one mistake of a nonsensical tuple can be mitigated by other sensical ones. Therefore, we use the mean score to train the auxiliary model. We also compare with reference-based metrics such as METEOR Banerjee and Lavie (2005) and BERTScoreZhang et al. (2019). Since there are, on average, 4 references per candidate in the Com
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Relation Type} & _At Lo-_ & _Used_ & _Capa-_ & _Part_ & All \\ & _cation_ & _For_ & _ble Of_ & _Of_ & All \\ \hline T5-Large base & 71.0 & 67.1 & 65.6 & 76.8 & 70.1 \\ T5-Large aug & 72.4 & 67.9 & 68.9 & 79.2 & 72.1 \\ T5-Large all & 73.4 & 70.5 & 69.4 & 79.6 & **73.2** \\ \hline Few-Shot GPT-3.5 & 83.5 & 71.1 & 78.7 & 82.2 & 78.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The performance of different tuple extractors, measured by F1-score. The last row indicates the upper bound that our T5 models can achieve.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{2}{l}{**Reference-Free: min\_mean**} & \multicolumn{2}{l}{**Reference-Based**} \\ \hline T5 \(\mathcal{O}\)-Score & 0.27610.284 & METEOR-all & 0.214 \\ GPT-3.5 \(\mathcal{O}\)-Score & 0.28110.299 & BERTScore-one & 0.280 \\ Gold \(\mathcal{O}\)-Score & **0.346**10.**365 & BERTScore-all & 0.302 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Spearman correlation between human commonsense ratings and six automatic metrics: our \(\mathcal{O}\)-Score with tuples extracted by T5, GPT-3.5-Turbo, and the gold tuples, plus METEOR and BERTScore.
monGen dataset, we select the first reference to compute BERTScore-one, and all available references to compute BERTScore-all. We show that our reference-free scorer performs on par with the best reference-based metric, BERTScore-all, and outperforms the same when use gold tuples extracted by human.
## 4 Experiments about Guided Generation
### Data
Training DataAs is illustrated in Figure 2, we train our auxiliary model on the PTLM's self-sampled data. For each set of input concept, we use top-p sampling (p=0.95) with temperature T=0.7 to generate \(N\) samples. In theory, the larger the \(N\), the more accurate approximation \(R^{\mathcal{O}}\) can learn. In practice, due to limitations in computational resources, we set \(N\) to 48 when the base model \(p\) is gpt-2, and 10 for Alpaca. In total, we have 1.5M training instances self-sampled by gpt-2 and 0.3M training instances self-sampled by Alpaca.
Test DataWe test on two different datasets. The first is the CommonGen dev split (Speer et al., 2017) which contains 993 lists of keywords focusing on daily concepts (e.g., _<open, hand, oyster, glove>_). Each list of keywords is paired with more than one human written references. Our second test data is distilled from CSK-PN(Chen et al., 2023), which sources challenging triples from ConceptNet (Speer et al., 2017) and tags them with positive/negative relation labels. We randomly select 993 triples with negative relations from CSK-PN(e.g. _<wear sunglasses, at night>_). There is no human reference for the second set. To reduce the effect of data leakage in GPT-3 and Alpaca, we randomly shuffled the keywords within each entry.2
Footnote 2: We know that GPT-3 text-daviči-003 is trained on recent dataset up to 2021 which likely contains the ConceptNet (Speer et al., 2017) and CommonGen (Lin et al., 2020) data.
### Experimental Setup
Choice of Base ModelsAlthough our framework does not require fine-tune PTLMs, it does require access to the PTLM's _output distribution_. Hence, we cannot apply our method to some popular but close-sourced LLMs such as ChatGPT. We choose Alpaca, Flan-T5, and gpt2 instead. In addition, because the pre-trained gpt2 has no instruction following abilities, we have to train it to learn the task of 'generating a commonsensical sentence given these input concepts'. Specifically, we finetune it on the CommonGen training data for 1 epoch, well before the finetuning converges. We call this process _warm up_, as the goal is mainly to get the smaller base model onboard with our task format. For instruction-following models such as Alpaca, we still add this warm up process for a fair comparison. In total, we apply our commonsense-guided generation method to 5 different base models: gpt-2-large with warm up, zero-shot Alpaca-7b, few-shot Alpaca-7b, Alpaca-7b with warm up, and zero-shot Flan-T5-large.
Auxiliary ModelsThe auxiliary \(R^{\mathcal{O}}\) models are 4-layer transformer decoders with the same dimension and number of heads as the base models. 3 They are \(1/9\), \(1/8\), and \(1/12\) the size of gpt-2-large, Alpaca-7b, and Flan-T5-large. We train the auxiliary models for 10 epochs with a learning rate of \(1e-5\) on a single NVIDIA A100 80GB GPU. In comparison, it is not possible to finetune Alpaca-7b using only one 80GB GPU without any memory-saving technique such as LoRA (Hu et al., 2021).
Footnote 3: For Flan-T5, the encoder-decoder model, the auxiliary head is a 4-layer T5 decoder that takes in the input constraints as hidden steps from a fixed, pretrained encoder.
### Compared Systems
A*esque Decoding (Lu et al., 2022)A Neuro-logic decoding algorithm that injects constraints into a neurologic process with a look ahead heuristic, which results in more plausible outputs.
Gelato (Zhang et al., 2023)A tractable probabilistic model (TPM) to impose constraints in language models such as gpt2-large. It achieves state-of-the-art (SOTA) performance on constraint satisfaction. Because it is non-trivial to train new TPMs on Alpaca-based models, we use the authors' original TPM which is trained on the gpt2-large model that is finetuned on CommonGen.
Lex (Meng et al., 2022)The vanilla NADO method trained only with lexical constraints as the sequence-level Boolean oracle. Namely, the scorer returns 1 if all lexical constraints are satisfied, and 0 otherwise.
Boost(Ours)Our method that uses the commonsense oracle to steer the auxiliary NADO model. We compare two variations: **1)**Boost CS: using only the commonsense oracle introduced in SS2.2, **2)**Boost Joint: multiplying the lexical checking Boolean function (the same used in **Lex**) with the commonsense oracle score.
**GPT3/ChatGPT** We instruct OpenAI's 3.5-turbo and text-davinci-003 to generate a plausible sentence given the constraints, stating that the keywords do not necessarily have to remain in the same order. Note that these models are likely to be trained on our test data already.
For all compared systems, we decode with top_k (\(k=30\)) sampling with a temperature \(T=0.7\).
### Evaluation Setup
Evaluation MetricsWe use the keyword coverage ratio (after lemmatization) and the \(\mathcal{O}\) score as automatic metrics to assess the quality of generated texts. For the CommonGen benchmark which contains human written sentences, we also report the n-gram overlap (BLEU-4). Considering that our systems are trained towards higher \(\mathcal{O}\) score, we also conduct human annotation for unbiased evaluation. Specifically, we instruct the MTurkers to evaluate 1) how commonsensical each sentence is from a 1-4 Likert scale, and 2) how much they like the sentence overall (_e.g.,_ being interesting and informative). An example questionnaire with the full instructions can be found in Appendix C. We pay the MTurkers $18 per hour, and the annotation process is the same as mentioned in SS3.3.
Inter-Group and Intra-Group Comparison.Our human evaluation is relative, meaning that the human evaluators are asked to compare the quality of different machine-generated outputs given the same input constraint. Since we have five base models and each entails a group of systems to compare with, we first conduct human evaluation within each group. Then, we select representative systems for inter-group comparison.
## 5 Result and Analysis
### Intra-Group Results
We compile the results on the CommonGen and CSK-PN benchmark in Table 3. We find out that,
\begin{table}
\begin{tabular}{l|c c c|c c|c c|c c} \hline \hline
**Test Data** & \multicolumn{4}{c|}{**CommonGen** (Lin et al., 2020)} & \multicolumn{4}{c}{**CSK-PN** (Chen et al., 2023)} \\ \hline & \multicolumn{3}{c|}{**Automatic**} & \multicolumn{3}{c|}{**Human**} & \multicolumn{3}{c|}{**Automatic**} & \multicolumn{3}{c}{**Human**} \\
**Evaluation Metric** & \(\mathcal{O}\)**Score** & **Coverage** & **BLEU4** & **CS** & **Overall** & \(\mathcal{O}\)**Score** & **Coverage** & **CS** & **Overall** \\ \hline \multicolumn{10}{|l|}{_Setting:_gpt2 warm up_} \\ A*esque (Lu et al., 2022) & 0.469 & 97.2\% & 28.1 & 2.37 & 2.72 & 0.489 & 63.0\% & 3.14 & 3.09 \\ GeLaTo (Zhang et al., 2023) & 0.592 & 99.3\% & 33.0 & 2.45 & 2.78 & / & / & / & / \\ Base Model & 0.514 & 90.7\% & 23.2 & 2.31 & 2.80 & 0.53 & 83.9\% & 3.10 & 3.04 \\ Lex (Meng et al., 2022) & 0.538 & 96.1\% & 29.8 & 2.38 & 2.80 & 0.544 & 92.1\% & 3.14 & 3.06 \\ Boost CS & 0.615 & 90.9\% & 23.6 & **2.64** & **3.12** & 0.595 & 89.2\% & **3.33** & 3.13 \\ Boost Joint & 0.597 & 96.1\% & 30.1 & 2.54 & 3.01 & 0.587 & 92.0\% & **3.28** & **3.18** \\ \hline \multicolumn{10}{|l|}{_Setting:_Plan-T5 _zero-shot_} \\ Base Model & 0.571 & 84.6\% & 17.5 & 2.86 & 2.80 & 0.555 & 80.7\% & 2.78 & 2.71 \\ Lex & 0.577 & 93.7\% & 26.0 & 3.04 & 2.92 & 0.569 & 89.6\% & 2.97 & 2.88 \\ Boost CS & 0.619 & 91.3\% & 21.6 & **3.14** & 3.05 & 0.613 & 88.9\% & 3.07 & **3.03** \\ Boost Joint & 0.606 & 93.1\% & 25.6 & 3.12 & **3.06** & 0.601 & 89.6\% & **3.08** & 3.00 \\ \hline \multicolumn{10}{|l|}{_Setting:_Alpaca _warm up_} \\ Base Model & 0.563 & 91.5\% & 20.9 & 3.02 & 2.84 & 0.523 & 93.9\% & 3.07 & 3.06 \\ Lex & 0.584 & 95.9\% & 30.5 & 3.12 & 3.00 & 0.535 & 95.0\% & 3.14 & 3.10 \\ Boost CS & 0.611 & 93.6\% & 28.9 & **3.36** & **3.11** & 0.558 & 94.4\% & 3.21 & 3.19 \\ Boost Joint & 0.592 & 95.7\% & 30.3 & 3.32 & **3.11** & 0.543 & 94.8\% & **3.23** & **3.21** \\ \hline \multicolumn{10}{|l|}{_Setting:_Alpaca _zero-shot_} \\ Base Model & 0.509 & 90.4\% & 21.3 & 2.98 & 3.07 & 0.536 & 93.2\% & 3.26 & 3.09 \\ Lex & 0.566 & 95.3\% & 30.1 & 3.03 & 3.05 & 0.547 & 95.5\% & 3.21 & 3.11 \\ Boost CS & 0.603 & 92.1\% & 24.4 & **3.36** & 3.17 & 0.565 & 93.9\% & **3.51** & **3.28** \\ Boost Joint & 0.588 & 95.0\% & 29.7 & 3.32 & **3.23** & 0.559 & 95.4\% & 3.40 & 3.19 \\ \hline \multicolumn{10}{|l|}{_Setting:_Alpaca _few-shot_} \\ Base Model & 0.552 & 92.2\% & 22.5 & 3.19 & 3.03 & 0.546 & 92.4\% & 3.27 & 2.89 \\ Lex & 0.581 & 95.7\% & 30.8 & 3.26 & 3.03 & 0.551 & 95.3\% & 3.26 & 2.92 \\ Boost CS & 0.608 & 94.6\% & 28.4 & **3.38** & **3.18** & 0.584 & 94.8\% & **3.40** & 3.10 \\ Boost Joint & 0.591 & 95.7\% & 30.1 & 3.36 & **3.18** & 0.572 & 95.2\% & 3.37 & **3.14** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Intra-Group evaluation results on two benchmarks: CommonGen (with reference) and CSK-PN (without reference). Here, we _define a group_ as multiple systems under the same setting (_i.e.,_ base model) _and_ on the same dataset. We use boldface to denote the best scores within each group, and underlines to denote the second best. Our model Boost consistently achieves the most commonsensical ratings as annotated by humans. The gap between Boost and the corresponding Base Model is statistically significant (p<0.05) measured by Student’s t-test. Note that the human ratings across groups are _not_ directly comparable as they are conducted in separate batches.
BLEU-4 has a high correlation with the keyword coverage ratio (\(r=0.914\) measured by Pearson Correlation), but has close to zero correlation with human judgment on commonsense (\(r=-0.08\)) and overall preference (\(r=0.04\)). We therefore hypothesize that BLEU-4, coverage ratio, and other metrics measuring the superficial lexical overlap with ground truth, cannot identify meaningful and commonsensical outputs at least in our setting.
Moreover, in all eight groups of human evaluation, Boost successfully improves the commonsense level and overall preference. Comparing Flan-T5 with gpt2, we see that our approach is more effective on instruction-tuned models than similarly-sized decoder only models. In addition, although Boost Joint achieves slightly lower commonsense ratings than Boost CS, the later is a lot worse in the keyword coverage, indicating that Boost CS has a higher risk to generate reasonable sentences without satisfying the input constraints. Hence, in the constrained generation setting, we still consider Boost Joint as the best model.
### Inter-Group Results
The inter-group evaluation results are shown in Table 4. Our model Boost outperforms all baselines, including Davinci-003. We leave the comparison with ChatGPT in SS6 as a separate discussion.
Surprisingly, although _human written references_ are still the most commonsensical, they _are less preferred_ by our annotators compared with Alpaca/Boost generations. Upon further inspection, we find out that the gold references in CommonGen are relatively short and flat (_e.g., "The car drove through the snow."_), which may also explain why Alpaca warmed up on CommonGen are less preferred than the few-shot setting where high-quality in-context examples are carefully selected.
### Case Study
We show three example generations by our systems and the baselines in Table 5 to further understand the advantage of Boost. In the first example, the baselines connect different constraints logically, but in a less plausible way (e.g., all concepts are bonded to the same object). Our system on the other hand describes a scene where _people_ play games while _dogs_ walk around. In the second and third example, we all know that the Statue of Liberty is not alive and a telephone is inedible. Instead of directly adding negations, we observe Boost tends to provide more contexts to make its output reasonable. In contrast, other baselines wrongly acknowledge that the Statue of Liberty can be alive or the ant can eat a telephone.
ChatGPT in terms of commonsense, but it excels in overall preference. On the CSK-PN eval set where the gap between our model and ChatGPT is larger, we randomly select 100 pairs of outputs and conduct pairwise comparison on both commonsense and overall preference. Results can be found in Table 6. Specifically, each pair is first randomly shuffled and then annotated by at least two annotators. If the two annotators disagree, a third annotator is introduced for the final judge. They can also provide an optionally justification for their choice, which can earn them a small bonus.
Analysis of human's feedback reveal that ChatGPT tends to generate a sentence with highly common scenarios (_e.g.,_ "_It is not advisable to wear sunglasses at night as it can impede your vision and increase the risk of accidents."_), making the raters less interested. On the other hand, our model tends to provide more creative context (_e.g.,_ "_Some-one wears sunglasses at night to avoid the bright lights of the approaching car."_), earning human annotators' overall preference without sacrificing the commonsense too much. As one annotator commented, _"I am fed up with those sentence with the so-called better commonsense because they are unimpressive"_. Such tendency of ChatGPT results in a higher commonsense rating yet noticeably lower overall preference. In short, we highlight that ChatGPT has not entirely solved the task.
The (so far) impossible fair comparison.Last, we would like to list two points regarding why evaluating ChatGPT and our model may not be a fair comparison: _(1) Test Data Contamination_: ChatGPT, which is trained on data up to 2021, likely have been trained on both datasets we tested on, including the test set. _(2) Size and Trick Differences_: Different from Boost, ChatGPT is more than a plain language model and benefits largely from RLHF and many engineering tricks unknown to the public. It is also much larger than our largest PTLM, which is alpaca-7b. Nonetheless, our approach is technically complementary with ChatGPT's language model, too. Unfortunately, due to API limitations, direct verification remains infeasible as we do not have access to its output logits.
## 7 Related Works
Controllable Generation with Frozen PTLMsThere are two major lines: modifying the decoding algorithm and guiding PTLMs with auxiliary models. Recently, Lu et al. (2021, 2022) propose neurologic decoding with a look ahead heuristic, and Qin et al. (2022) propose energy-based constrained decoding. One drawback of this line that the inference is slow due to the large search space. In the other line, Dathathri et al. (2021); Yang and Klein (2021) guide the generation process with an auxiliary model in a plug-and-play fashion by leveraging statistical principles such as the Bayesian rule. Meng et al. (2022) propose to solve the distributional discrepancy of training data and PTLM's generated tokens by training with data directly sampled from the base model. However, mistakes in commonsense are neglected when previous works formulate the whole task as a lexical constrained generation game.
Commonsense MetricsZhou et al. (2022) measures the commonsense of dialogue turns by hard and soft matching the relations across each turn to ConceptNet. ACCENT (Ghazarian et al., 2023) propose an unsupervised metric to measure the event commonsense of dialogue responses via the ATOMIC knowledge graph (Hwang et al., 2020). Our commonsense oracle is inspired by ACCENT but we primarily focused on factoid commonsense in a constrained generation setting. A concurrent work of ours is Vera (Liu et al., 2023), a supervised model that learns to estimate the plausibility of statements. On the other hand, our metric is unsupervised and neuro-symbolic, thus more interpretable.
## 8 Conclusion
We present Boost, a framework to boost the commonsense in PLTMs' generation by training an auxiliary model with a commonsense scorer as the oracle. Our \(\mathcal{O}\)-Scorer is task-agnostic and reference-free, meaning that it is generalizable to many downstream tasks such as dialogue and open-ended text generation. For such application, one may need to replace the vanilla PTLMs with task-specific models and then train the NADO head. The \(\mathcal{O}\)-Scorer can also be combined with task-specific guidance.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Winning System** & **Boost** CS & **Same** & **ChatGPT** \\ \hline CS & 30\% & 17\% & 53\% \\ Overall & 47\% & 25\% & 28\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: Which system has better commonsense (CS) and overall human preference? Pair-wise comparison between Boost and ChatGPT shows that our model earns more overall pick while the ChatGPT have higher commonsense.
## Acknowledgement
We thank PlusLab members from UCLA and the anonymous EMNLP reviewers for their constructive feedback and suggestions that helped to improve the paper.
## Limitations
We discuss the limitations of our work. First, our tuple extractor covers only four relation types and can miss many other important relation types such as causal, temporal order, etc. These later relation types are more sophisticated such that LLMs are strong as gpt-3.5-turbo will fail at [14, 15, 16, 17]. Second, we find out that the cosine similarities of sentence embeddings used in Eq. 1 to compute the compatibility scores sometimes do not align with human judgement. The errors incurred during the generative scoring process is then propagated into the training process of NADO, which negatively affect the output's quality. Last, although the auxiliary models have much smaller size than the PTLMs, the number of samples needed to train \(R^{\mathcal{O}}\) is still large in order to guarantee a good approximation of the closed form solution derived in Eq. 4.
## Ethics Statement
It is known that the generated results by PTLMs could capture the bias reflected in the training data [16, 15]. Our model Boost is build upon PTLMs including T5 [11], GPT-2 [10], and Alpaca [15], which may potentially generate offensive content for certain groups or individuals. We suggest to carefully examine the potential biases before deploying the models to real-world applications.
|
2302.00739 | Inference of Partial Colexifications from Multilingual Wordlists | The past years have seen a drastic rise in studies devoted to the
investigation of colexification patterns in individual languages families in
particular and the languages of the world in specific. Specifically
computational studies have profited from the fact that colexification as a
scientific construct is easy to operationalize, enabling scholars to infer
colexification patterns for large collections of cross-linguistic data. Studies
devoted to partial colexifications -- colexification patterns that do not
involve entire words, but rather various parts of words--, however, have been
rarely conducted so far. This is not surprising, since partial colexifications
are less easy to deal with in computational approaches and may easily suffer
from all kinds of noise resulting from false positive matches. In order to
address this problem, this study proposes new approaches to the handling of
partial colexifications by (1) proposing new models with which partial
colexification patterns can be represented, (2) developing new efficient
methods and workflows which help to infer various types of partial
colexification patterns from multilingual wordlists, and (3) illustrating how
inferred patterns of partial colexifications can be computationally analyzed
and interactively visualized. | Johann-Mattis List | 2023-02-01T20:22:20Z | http://arxiv.org/abs/2302.00739v1 | # Inference of Partial Colexifications from Multilingual Wordlists
###### Abstract
The past years have seen a drastic rise in studies devoted to the investigation of colexification patterns in individual languages families in particular and the languages of the world in specific. Specifically computational studies have profited from the fact that colexification as a scientific construct is easy to operationalize, enabling scholars to infer colexification patterns for large collections of cross-linguistic data. Studies devoted to partial colexifications - colexification patterns that do not involve entire words, but rather various parts of words-, however, have been rarely conducted so far. This is not surprising, since partial colexifications are less easy to deal with in computational approaches and may easily suffer from all kinds of noise resulting from false positive matches. In order to address this problem, this study proposes new approaches to the handling of partial colexifications by (1) proposing new models with which partial colexification patterns can be represented, (2) developing new efficient methods and workflows which help to infer various types of partial colexification patterns from multilingual wordlists, and (3) illustrating how inferred patterns of partial colexifications can be computationally analyzed and interactively visualized.
**Keywords:** partial colexification, loose colexification, colexification networks, computational comparative linguistics
## 1 Introduction
The past years have seen a drastic rise in studies devoted to the investigation of colexification patterns in individual languages families and the languages of the world. The concept of _colexification_ has proven specifically useful for computational and quantitative approaches in lexical typology. The term _colexification_ was originally proposed by Francois (2008) as a cover term for all cases where multiple senses
are expressed by one word form, no matter whether the multitude of senses results from polysemy or homophony. Colexifications can be easily computed from large collections of lexical data, specifically from multilingual wordlists, in which a certain number of concepts is translated into several target languages (see List 2014, 22-24). Through the aggregation of several multilingual wordlists, it is straightforward to assemble large amounts of cross-linguistic colexification data, as witnessed by the growth in recent versions of the Database of Cross-Linguistic Colexifications (CLICS, List et al. 2018; Rzymski et al. 2020, [https://clics.clld.org](https://clics.clld.org)), as well as by the increase in studies which exploit colexification data assembled from different sources (Di Natale et al., 2021; Bao et al., 2021). Quantitative studies on colexification patterns have also shown that it is straightforward to extract those colexifications that are most likely to result from polysemy by searching for colexifications recurring across several language families - as opposed to frequent colexifications inside one and the same language family, wich might reflect wide-spread cases of homophony (List et al., 2013). This means in turn that large colexification networks can be treated as polysemy networks that give us direct insights into certain aspects of lexical semantics (Youn et al., 2016; Jackson et al., 2019; Harvill et al., 2022).
Up to today, however, most studies dealing with colexifications focus on colexifications of _entire words_. Colexifications involving only certain parts of the words in a given language - _partial colexifications_, also called _loose colexifications_(Francois, 2008) - have rarely been investigated (see Urban 2011 for an exception) and rarely been computed automatically from larger collections of cross-linguistic data (see List et al. 2022 for initial attempts).
Two major factors seem to account for the problems involving studies with partial or loose colexifications. On the one hand, it is less straightforward to model partial colexifications in networks, since the relations between words that share common parts may at times be asymmetric, with one word being entirely repeated in the other word. Not only are different network types needed to model partial colexification networks, it is also much less straightforward to interpret them. On the other hand, it is difficult to _infer_ partial colexifications networks from large collections of cross-linguistic data, since partial commonalities between words easily arise by chance or reflect grammatical distinctions (noun classes, gender marking, part of speech). As a result, a method that naively searches for similarities between words in the same language variety in a large corpus typically provides very densely connected noisy networks in which one barely finds any signal that would be interesting from a semantic or cognitive perspective. Thus, while it is easy to handle noise due to homophony in the case of full colexification networks by using strict thresholds for the occurrence of particular colexifications in combination with normalized weights, it is difficult to use the same criteria when creating partial colexification networks.
This study attempts to address at least some of these problems by proposing new models with which certain kinds of partial colexification patterns can be represented in networks, and by developing new efficient methods and workflows that help to infer different types of partial colexification patterns from multilingual wordlists. Having inferred these patterns, the study further shows how they can be visualized and analyzed.
## 2 Materials and Equipment
### Multilingual Wordlists
The starting point of our new workflow for the inference of partial colexifications are multilingual wordlists. A wordlists is hereby understood as a collection of word forms which are arranged by their meaning. Unlike a dictionary, in which the word form (the _headword_) constitutes the primary linguistic unit by which data are ordered, a wordlist orders words by their meaning. While a dictionary starts from the form, following a semasiological or form-based perspective, a wordlist starts from the meaning, following an onomasiological, or concept-based perspective. As a result, a multilingual wordlist allows us to compare how certain _concepts_ (which are thought to be generally comparable across languages, even if this may be problematic in practice) are translated into certain languages.
The compilation and aggregation of multilingual wordlists has made a remarkable progress during the last decade and the number of digitally available wordlist collections is constantly increasing. On the one hand, large unified multilingual wordlist collections have been proposed in the past years (Key and Comrie, 2016; Dellert et al., 2020; Haspelmath and Tadmor, 2009), on the other hand, standards for cross-linguistic data formats have been constantly improved (Forkel et al., 2018) and applied to many smaller or growing data collections (Ferraz Gerardi et al., 2021) and for the purpose of _retro-standardization_(Geisler et al., 2021).
### Cross-Linguistic Data Formats
For the exploration of partial colexification patterns across multiple languages, a modified version of the well-known _Intercontinental Dictionary Series_ (IDS) was prepared (Key and Comrie, 2016). While the original version mixes phonetic transcriptions with language-specific phonological transcriptions and orthographic entries, the entries in the modified version were semi-automatically converted to the International Phonetic Alphabet in the variant proposed by the Cross-Linguistic Transcription Systems (CLTS) reference catalog ([https://clts.clld.org](https://clts.clld.org), List et al. 2023, see Anderson et al. 2018). The conversion was done by applying the Lexibank workflow of creating standardized wordlists in Cross-Linguistic Data Formats (List et al., 2022). In this workflow, originally non-standardized datasets are semi-automatically standardized by applying a mix of software tools (based on CLDFBench, Forkel and List 2020) and manual annotation in order to convert the data into the formats recommended by the Cross-Linguistic Data Formats initiative (Forkel et al., 2018).
The updated version of the IDS provides wordlists for 329 language varieties for up to 1310 concepts. The standardized phonetic transcriptions consist of a total of 558 distinct sounds (types) which occur 2 902 306 times in the data (tokens), with an average phoneme inventory size of 50.76 sounds per variety. Although - strictly speaking - partial colexifications could in theory also be identified from orthographic data, being able to work with a larger multilingual wordlist available in phonetic transcriptions has two major advantages, even if the transcriptions may contain certain errors. On the
one hand, it is easier to evaluate the findings, if transcriptions are harmonized for one dataset, on the other hand, knowing that sounds are represented in segmented form makes it easier to select the thresholds by which partial colexifications are preliminarily accepted or discarded. The revised version of the Intercontinental Dictionary Series is currently curated on GitHub, where it can be accessed at [https://github.com/intercontinental-dictionary-series/ids-segmented](https://github.com/intercontinental-dictionary-series/ids-segmented). The version used in this study is v0.2 ([https://github.com/intercontinental-dictionary-series/idssegmented/tree/0.1](https://github.com/intercontinental-dictionary-series/idssegmented/tree/0.1)).
For developmental purposes and in order to test certain technical aspects of the new methods proposed here, a smaller wordlist by Allen (2007) was used. This list - also converted to CLDF (see [https://github.com/lexibank/allenbai](https://github.com/lexibank/allenbai)) - offers data for 9 Bai dialect varieties in standardized phonetic transcriptions.
## 3 Methods
Full colexifications across languages can be handled in an efficient way that has shown to provide very interesting insights into semantic relations. Partial (or loose) colexifications, however, suffer from noise, resulting from the fact that partial similarities between words in the same language may result from a large number of factors (coincidence, grammatical markers) that do not reflect specific semantic relations between the words in question. As a result, the well-established workflows for the inference of full colexification networks cannot be used to infer partical colexification networks. In order to handle this problem, I propose a three-stage approach that starts from the _modeling_ of partial colexifications - which helps to reduce the search space and provides a consistent _representation_ of distinct types of partial colexifications in networks -, offers efficient methods for the _inference_ of specific partial colexification types, and finally allows us to _analyse_ different kinds of partial colexification networks in various ways. In this context, modeling, inference, and analysis reflect a general approach to scientific problem solving in the historical sciences that was inspired by its application in evolutionary biology (Dehmer et al., 2011).
### Modeling Partial Colexifications Across Languages
#### 3.1.1 Major Types of Partial Colexification
When modeling words as sequences of sounds, we can define major sequence relations in a formal way. Since sequences play a crucial role in many scientific fields - ranging from computer science (Gusfield, 1997) via bioinformatics (Durbin et al., 2002[1998]) to physics (Kruskal and Liberman, 1999[1983]) - basic relations between sequences have been independently identified and discussed long ago. In the following, we will distinguish the term _partial colexification_ from the term _loose colexification_ (the latter originally termed by Francois 2008). According to this distinction, partial colexifications are restricted to concatenative morphology, while loose colexifications would also allow for non-concatenative (paradigmatic) morphology. The narrower notion of partial colexifications has the advantage that we can use existing models and insights from earlier studies on sequence and string relations and adopt them to the notion
of partial colexifications.
When comparing three fictitious sequences ABC, XYABCD, and ZABCEF, it is easy to see that the first sequence ABC recurs in both the second and the third one. In computer science and bioinformatics, ABC is called a _common substring_ of XYABCD and ZABCEF. Since there is no longer substring than ABC, it is furthermore the _longest common substring_ between both sequences. Regarding the specific relation between XYABCD and ZABCEF, we can say that they share a common substring of length 3. The sequence ABC also shares substrings of length 3 with the two other sequences ZABCEF and XYABCD. In addition, however, we can see that the sequence ABC is _identical_ with the common substring, and we can say that ABC is a part of XYABCD and ZABCEF. While the former relation between sequences (sharing a substring of a given length) is commutable, the latter relation isn't: saying that one sequence A is part of another sequence B is not the same as saying that sequence B is part of sequence A.
Given their importance for a wide range of scientific and industrial applications, many efficient algorithms for the computation of common substrings and the identification of part-of relations (or parthood relations, see Hoehndorf et al. 2009) in sequences have been proposed (see the overview in Gusfield 1997, 89-121). Common substrings and part-of relations are two fundamental relations between sequences that cover the notion of partial colexification in parts. The exact relation between common substring and part-of relations on the one hand and the notion of partial colexification on the other hand depends on the way in which we define the latter. If we insist that the material shared between words from the same language should reflect _lexical morphemes_ - that is, smallest form-meaning pairs in a given language that bear a lexical meaning - we can say that any instance of a partial colexification between two word forms in a given language also corresponds to a common substring relation between the sound sequences representing the two forms. If two sound sequences exhibit a common substring relation, however, this does not automatically mean that we are dealing with a partial colexification in this narrower sense, since word forms can have common substrings for other reasons. The common substring can reflect purely grammatical morphemes (various forms of affixes, compare verbs in German, sharing all the infinitive ending, like _lawf-en_ "run", _geh-en_ "walk", etc.), or the similarity can be accidental (compare German _Herbst_ "autumn" sharing a common substring _st_ with _Wurst_ "sausage").
While it seems that scholars implicitly use the term _loose colexification_ in a way that restricts the relation to shared lexical morphemes across words in the same language (Francois, 2008), it is important to note that such a narrow sense is in fact not needed. Colexification was deliberately chosen as a term to describe the relation of words with different senses sharing the same pronunciation. As a result, terms like _partial colexification_ or _loose colexification_ should also be used in a neutral, overarching form, while semantically more interesting relations should be inferred from partial colexification patterns in a second step.
#### 3.1.2 Affix and Overlap Colexifications
When trying to develop methods that search for meaningful partial colexifications, that is, partial colexifications which reflect similar processes of lexical motivation (Koch, 2001) underlying the formation of words in a given language, it is useful to work with a narrower notion of shared similarity than the one reflected in the common substring and part-of relations between sequences introduced above. Thus, instead of searching whether a word form A expressing a concept X is part of a word form B expressing a concept Y, we can ask if A is a _prefix_ or a _suffix_ of B, thus, if A is identical with the beginning or the end of B. Similarly, we can ask if A and B _overlap_, that is, if they share a common substring which is either a prefix or a suffix of both sequences.
The search for linguistically relevant affix and overlap colexifications can be further restricted by setting thresholds for the length of the substring which the sequences coelify, and by setting thresholds for the length of the remaining parts of the sequences which do _not_ coelify. Here, the fact that our multilingual wordlist is now available in the form of fully segmented, standardized phonetic transcriptions, comes in handy, since it allows us to set up thresholds for a certain number of _sounds_ rather than a certain number of _symbols_ which often reflect individual sounds only in combination.
#### 3.1.3 Representing Partial Colexifications in Networks
Similar to substring colexifications and part-of colexifications, the fundamental difference between affix colexification and overlap colexification is that the former entails a _directional relation_ (one sequence is a part of the other sequence, in this specific case, appearing in the beginning or the end), while the direction of the latter (two sequences share a common part) cannot be directly determined. When representing affix colexifications in networks, we can account for the directionality of the relation by using directed network models.
In contrast to the well-known undirected weighted network models used for the representation of full colexification networks (List et al., 2013) which can be easily visualized, both interactively (Mayer et al., 2014) and statically (using edge thickness to account for differences in the weights for the links connecting individual concepts, see List et al. 2018), weighted _directed_ networks draw a link from one concept A to another concept B only in those cases where an affix colexification from A to B can be attested. As an example, consider German the words German _Finger_ "finger" and _Fingernagel_ "fingermail". Here, the former word form is an affix of the latter word form (since the latter word starts with the former word), and we can therefore draw a link from the concept "FINGER" to the concept "FINGERNAIL" in an affix colexification network, in which both words in German are attested. When visualizing these networks, we may have links pointing in both directions (since there might well be languages in which the word for "fingerail" appears as an affix of the word for "finger", although we do not expect to find many examples), and we can use arrows at the tips of the links indicating the direction of individual links in our network.
Overlap colexifications can be handled in the same way in which one would handle full colexifications.
The difference is that one should internally store the individual suffixes that make up for the overlap connection. Thus, while it is enough to store one word form for a colexification in a full colexification network (since words colexifying two or more concepts are by definition identical), it is important - for the sake of transparency - to indicate the actual suffix that recurs across two words in an overlap colexification network. Figure 1 contrasts the three types of colexification networks along with some simplified examples.
### Efficient Inference of Full and Partial Colexification Networks
A naive implementation of a simple search for partial colexifications (be they affix or overlap colexifications) would take all word forms from one language and then compare each word against each other word in the sample, storing observed commonalities. While this procedure is easy to understand and certainly yields the desired results, it is far away from being efficient. As a result, specifically when dealing with large cross-linguistic data collections, it is advisable to use efficient search strategies.
For the computational of full colexifications, an efficient search strategy consists in the use of _associative arrays_ as a major data structure, which consists of a key that allows to access a given value. In the Python implementation used by the CLICS database (List et al., 2018; Rzymski et al., 2020), the keys consist of the individual word forms for a given language, while the values are a list of the concepts that the form
Figure 1: Overview of 3 major colexification type discussed in this study. (1) provides an example for a full colexification in Yaqui (data from CLICS\({}^{\text{3}}\) Rzymski et al. 2020), (2) shows an example for the directional representation of affix colexifications with an example from Gulin Chinese (data from Lü Lü Lili et al. 2007), and (3) shows an example for overlap colexification in Füzhöu Chinese (data from Lü Lü Lili et al. 2007).
links to. In order to infer colexifications for a given language, the method iterates over all words for a given language in a wordlist and subsequently adds them to the associative array, storing the concept that the word form expresses in the list that serves as the value. If a given word form has already been added to the array, the associated list is expanded by adding the new concept in question. In a second stage, the method iterates over all keys in the associative array and adds all pairs of diverging concepts in the list to the growing network of colexifications across several languages. Detailed descriptions of this procedure can be found in a tutorial accompanying Jackson et al. (2022) and in List (2022). Figure 2 shows the structure resulting from applying this method to a small wordlist of three German words.
In our approach to partial colexifications, we proceed in a similar fashion, by iterating over the wordlist of each individual language twice. In order to find affix colexifications, however, the associative arrays are filled with affixes of varying size, and the list serving as the value is then filled with tuples of the corresponding full word form and its concept. The affixes are computed by iterating from the left and the right of the sound sequence representing the word form. Affix sizes are limited by two thresholds. The first threshold (default set to 2) limits the minimum size of the affix to 3 sounds. The second threshold makes sure that the size of the remaining word part is larger than a certain minimum (default set to 2). In combination, both thresholds guarantee that the affix we infer has a reasonably large size, and that the full word form to which we link it is also large enough to increase the chances that we detect compounding structures rather than cases of inflection. With these thresholds, we can detect all potential _affix candidates_ for a given word in a first run and store them in our associative array. In a second step, we then iterate over all original words in the data, sorted by length, starting with the longest word. For each of the word forms, we then check if it occurs in the array of affix candidates. If this is the case, this means that the word appears as the affix of one of word forms linked as a value to this array, and we can
Figure 2: Efficient search for full colexifications using associative arrays. Data are represented in JSON format for a wordlist consisting of three German word forms _Erde_ “EARTH”, _Erde_ “WORLD”, and _Welt_ “WORLD”. The top-left box shows the initial format of the data (a wordlist consisting of two columns, one storing the concept and one storing the word form in IPA. The top-right box shows the resulting associative array, in which the forms serve as a key and concepts expressing this form are added to the same array as a value. The bottom-right box shows the resulting colexification inferred from this example.
add them to our network, by adding a link from the word recurring as affix to the word containing the affix.
For the computation of overlap colexifications, we pursue the same strategy as for affix colexifications in the first stage, by populating an associative array with affix candidates for the word forms in our wordlist. Due to the increase in noise when searching for overlap colexifications the default thresholds for the length of the affix are set to 4 and the threshold for the length of the remaining part are set to 3. In the second stage, we iterate over the array with affix candidates itself, which has been sorted by the length of the affixes serving as keys in reverse order (starting from the longest affix found in the data for a given language). For each affix, we then compare all word pairs in which this affix recurs and check that neither of the two forms appears as a suffix or a prefix of the other form. If these conditions are met and the forms are also not identical (which would correspond to a full colexification), we store the forms as overlap colexifications along with the affix by which the forms overlap.
As can be easily seen from the descriptions, the complexity of the three methods for the inference of full colexifications, affix colexifications, and overlap colexifications differs. The search for full colexifications
Figure 3: Efficient search for affix colexifications, illustrated for a wordlist of three German words _Hand_ “HAND”, _Schuh_ “SHOE”, and _Handschuh_ “GLOVE” (lit. “hand-shoe”). Starting from the original wordlist in the top-left box, each word form is represented by all possible prefixes and suffixes that match the two-threshold criterion (see text) in the associative array in the top-right box. When iterating over the word forms in the original concept list, we find that two words, _Hand_ and _Schuh_ are stored in the array and we can therefore infer an affix relation between the two words and the word _Handschuh_, represented in the form of a directed graph in the box at the bottom of the figure.
requires the least amount of computation time, followed by the search for affix colexifications, and by the search for overlap colexifications.
With the methods for the inference of full and partial colexifications in individual languages above, we can construct full and partial colexifications networks by applying the search strategies to multiple languages and iteratively growing a colexification network, in which we add edges when new edges are inferred for a particular language, or increase edge weights when edges have been already attested during the iteration. The networks computed in this form are all annotated in various ways. For the nodes, we store the number of word forms that can be found in the data, the number of language families in which these words are attested, and the actual word forms in each language. For the links between the nodes, we store the number of concrete word forms which exhibit the colexification relation, the number of language families, in which these colexifications can be found, and the actual word forms (including the colexifying parts for partial colexifications) in which the colexifications occur. For affix colexifications, we infer a directed network, while the network for full and overlap colexifications is undirected.
### Analyzing Partial Colexification Networks
In order to understand major differences between full colexification networks and the two new types networks introduced here, one can compare their _degree distributions_. The degree of a node in a network is the number of its edges (Newman, 2010, 133-135). The weighted degree of a node in a network is the sum of the edge weights of its edges. While we have only one type of degree for undirected networks, we have two possible degrees for network with directed edges, the _in-degree_ and the _out-degree_, with the former representing the number (or the sum of the edge weights) of incoming edges of a given node, and the latter representing the number (or the sum of the edge weights) of outgoing edges of a given node in the network. In order to compare the degree distributions of two networks constructed from the same set of nodes, we can compute the Spearman rank correlation (Spearman, 1904), which tell us to which degree those nodes that show a very high degree in one network also show a high degree in the other network, and to which degree nodes with low degrees in one network also tend to show low degrees in the other one.
In addition to the comparison of degree distributions, it is also useful to visualize the networks and to zoom in to interesting parts that illustrate where major differences can be found. This can be done quite conveniently now with the help of software packages for network visualization, such as Gephi (Bastian et al., 2009) or Cytoscape (Shannon et al., 2003; Smoot et al., 2011). For the visualizations reported here, Cytoscape is used.
### Implementation
The methods reported here are implemented in Python and shared in the form of Python library that can be used as a plugin to the CL Toolkit package ([https://pypi.org/project/cltoolkit](https://pypi.org/project/cltoolkit), List and Forkel 2021). CL Toolkit was designed to allow to access CLDF Wordlists that conform to the standards
proposed by the Lexibank repository (List et al., 2022) conveniently from Python scripts or from the Python interactive console. For the handling of graphs, the NetworkX package was used (Hagberg, 2009), and for the inference of communities, the Igraph package was used (Csardi and Nepusz, 2006). The computation of rank correlations was done with SciPy (Virtanen et al., 2020). The supplementary offers access to all data and code necessary to replicate the results reported here.
## 4 Results
### Computation Time of Efficient Colexification Inference
In order to test whether the newly proposed method for the inference of affix colexifications is indeed more efficient than a conceptually much simpler comparison of all word against all words in a word list, a small experiment was designed in which the CLDF dataset of Bai dialects derived from Allen (2007) was analyzed several times and computation times were calculated. The results of this test indicate that the new method is indeed much more efficient in terms of computation time than the naive iteration. In various experiments on different Linux machines, computation time differences show that the naive all-to-all word comparison needs more than five times as much time than the new efficient approach, while both produce exactly identical results. While computation time may be less important when working with small datasets of only about a dozen of languages, it can become a bottleneck when working with large datasets such as the Intercontinental Dictionary Series. For this reason, the efficient solution proposed here, is proving very useful. This does not mean, however, that the solution is perfect, and it may well be the case that there are more efficient solutions available (e.g., using _suffix trees_, see Gusfield 1997, 122-180) that could be implemented in the future.
### Comparing Degree Distributions
Having computed colexification networks for full colexifications, affix colexifications, and overlap colexifications, the Spearman rank correlation was computed for the weighted degree distributions of all three colexification types, splitting affix colexifications into two types of degree distributions, the in-degree and the out-degree. The results of this comparison are given in Table 1. As can be seen from this table, two moderate correlations can be observed for the total of six pairings. The degree distribution of the full colexification network correlates moderately with the out-degree distribution of the affix colexification network (\(r=0.50\), \(p<0.0001\)), and the degree distribution of the overlap colexification network correlates moderately with the in-degree distribution of the affix colexification network (\(r=0.42\), \(p<0.0001\)).
Interpreting these results may not seem straightforward at the first sight. The correlation between the weighted degree of concepts in full colexification networks and the out-degree of concepts in affix colexification networks points to a tendency according to which concepts that are often fully colexified with other concepts _also_ seem to be frequently _reused_ as compounds or affixes in complex words. While this finding may seem to be quite reasonable or even obvious, it was so far not possible to confirm it in
cross-linguistic studies. Partial colexification networks thus point us to an important property of concepts that tend to colexify frequently across the languages in the world: their propensity to be reused in word formation processes to form new words. This property, which I propose to call _lexical root productivity_ (the term is inspired by a discussion with Alexandre Francois, see List 2019a and List 2019b), plays a key role in lexical motivation, the process underlying the formation of new word forms in the languages of the world (Koch, 2001).
The correlation between the weighted degree distribution of overlap colexifications with the out-degree distribution of affix colexifications has an even more straighforward explanation. Concepts that exhibit many overlap colexifications across a larger sample of languages are concepts that are often expressed with the help of compounds or morphologically complex words. The same holds for those concepts that have many incoming edges in an affix colexification network. As a result, the correlation between the two degree distributions is not very surprising. It shows, however, that both the weighted in-degree of affix colexification networks and the weighted degree of overlap colexification networks can be used as a proxy to measure the _compoundhood of concepts_ (a term inspired by Martin Haspelmath, p. c.), that is, the tendency of concepts to be expressed by compound words or morphologically complex words.
### Inspecting Colexifications through Subgraphs
While the investigation of the degree distributions already gives us a nice impression about the commonalities and differences between different kinds of colexification networks, a closer investigation of smaller parts of the graphs can help us to see these differences much more clearly. In order to provide a fruitful sample, the Infomap algorithm (Rosvall and Bergstrom, 2008) was used to compute communities from the full colexification network. In a second step, 23 concepts which show different properties with respect to their full and partial colexifications, were selected and the corresponding subgraphs for full, affix, and overlap colexification networks were extracted and visualized with the help of Cytoscape (Shannon et al., 2003; Smoot et al., 2011).
\begin{table}
\begin{tabular}{l l l l}
**Colexification Type A** & **Colexification Type B** & **Nodes** & **R** & **P-Value** \\ \hline \hline Full Colexification & Affix Colexification (In-Degree) & 1308 & 0.0960 & \(<\) 0.0001 \\ Full Colexification & Affix Colexification (Out-Degree) & 1308 & 0.5034 & \(<\) 0.0001 \\ Full Colexification & Overlap Colexification & 1307 & 0.1179 & \(<\) 0.0001 \\ Affix Colexification (In-Degree) & Affix Colexification (Out-Degree) & 1308 & -0.0830 & \(<\) 0.0001 \\ Affix Colexification (In-Degree) & Overlap Colexification & 1307 & 0.4212 & \(<\) 0.0001 \\ Affix Colexification (Out-Degree) & Overlap Colexification & 1307 & -0.0488 & 0.0104 \\ \end{tabular}
\end{table}
Table 1: Comparing the Spearman rank correlations for the four different kinds of degree distributions. As can be seen, we can observe significant moderate correlations for Full Colexifications as compared to the Out-Degree of Affix Colexifications (0.5) and for the In-Degree of Affix Colexifications compared to Overlap Colexifications (0.42). For the other pairings, no significant correlations can be observed.
Figure 4: Comparing full (A), overlap (B), and affix (C) colexifications for subgraphs of the IDS dataset. Line width indicates the weight of the colexifications, colors other than light gray indicate communities inferred for the full colexification network, and link directions in the affix colexification network (B) are displayed with the help of arrows. Concept labels are taken from the Concepticon project. Concept labels with an asterisk were modified to to enhance the visualization.
As can be seen from the visualizations shown in Figure 4, the three networks show a remarkable difference in their individual structures, although they all involve the same concepts. Thus, while the concept EYE has only one spurious link in the full colexification network to SPRING (A), it is completely isolated in the overlap colexification network (B), while appearing as a rather central concept with a high out-degree in the affix colexification network (C). When inspecting connected components in all three networks, we find huge differences between the concepts that are fully connected with each other, while it is easy to spot semantic or morphological connections that give rise to these patterns. Thus, we find a cluster of BLIND, TEAR, EYELASH, EYEEROW, EYELID, and BLINK in the overlap colexification network that clearly seems to result from the fact that the words expressing these concepts all contain a morpheme for EYE. The central position of EYE in the affix colexification network confirms this role, and we find similar structures for WATER as another central concept in the affix colexification network. A systematic comparison of these different kinds of colexification networks allows us to identify semantic _key players_ that play an important role in contributing morphological material to the construction of the lexicon of many of the world's languages.
## 5 Discussion
This study has presented new ideas regarding the inference of partial colexification networks from multilingual wordlists. It has introduced new models that can be used to handle partial colexification patterns and proposed new efficient methods and workflows for the inference of partial colexification networks. Two new ways to handle partial colexification patterns in networks were introduced, namely _affix colexifications_ and _overlap colexifications_. Using these new types of colexification patterns to infer affix and overlap colexification networks from large multilingual wordlist revealed some interesting properties of both network types. While overlap colexification networks allow us to measure the _compoundhood_ of individual concepts across the world's languages, affix colexification networks could be used as an initial proxy to measure _lexical root productivity_ across languages. Apart from being interesting for people working in the field of lexical typology, we assume that these new types of colexification networks can be very useful for many additional scientific fields in the future, including most notably computer science (and approaches to computational semantics) and psychology.
## Funding
This research was supported by the Max Planck Society Research Grant _CALC3_ (JML, [https://digling.org/calc/](https://digling.org/calc/)), the ERC Consolidator Grant _ProduSemy_ (JML, Grant No. 101044282, see [https://cordis.europa.eu/project/id/101044282](https://cordis.europa.eu/project/id/101044282)).
## Acknowledgments
I thank Robert Forkel for helpful comments on parts of the code base and the creation of the segmented version of the Intercontinental Dictionary Series.
## Supplementary Materials
The data and code used in this study are curated on GitHub, where they can be accessed at [https://github.com/lingpy/pacs/releases/tag/v0.1](https://github.com/lingpy/pacs/releases/tag/v0.1) (Version 0.1). Upon publication, data and code will also be archived with Zenodo.
## Data Availability Statement
Data and code accompanying the study are made freely available. The data and code used in this study are curated on GitHub, where they can be accessed at [https://github.com/lingpy/pacs/releases/tag/v0.1](https://github.com/lingpy/pacs/releases/tag/v0.1) (Version 0.1). Upon publication, data and code will also be archived with Zenodo.
|
2308.15409 | An Incremental SVD Method for Non-Fickian Flows in Porous Media:
Addressing Storage and Computational Challenges | It is well known that the numerical solution of the Non-Fickian flows at the
current stage depends on all previous time instances. Consequently, the storage
requirement increases linearly, while the computational complexity grows
quadratically with the number of time steps. This presents a significant
challenge for numerical simulations. While numerous existing methods address
this issue, our proposed approach stems from a data science perspective and
maintains uniformity. Our method relies solely on the rank of the solution
data, dissociating itself from dependency on any specific partial differential
equation (PDE). In this paper, we make the assumption that the solution data
exhibits approximate low rank. Here, we present a memory-free algorithm, based
on the incremental SVD technique, that exhibits only linear growth in
computational complexity as the number of time steps increases. We prove that
the error between the solutions generated by the conventional algorithm and our
innovative approach lies within the scope of machine error. Numerical
experiments are showcased to affirm the accuracy and efficiency gains in terms
of both memory usage and computational expenses. | Gang Chen, Yangwen Zhang, Dujin Zuo | 2023-08-29T16:09:48Z | http://arxiv.org/abs/2308.15409v3 | An Incremental SVD Method for Non-Fickian Flows in Porous Media: Addressing Storage and Computational Challenges
###### Abstract
It is well known that the numerical solution of the Non-Fickian flows at the current stage depends on all previous time instances. Consequently, the storage requirement increases linearly, while the computational complexity grows quadratically with the number of time steps. This presents a significant challenge for numerical simulations, and to the best of our knowledge, it remains an unresolved issue. In this paper, we make the assumption that the solution data exhibits approximate low rank. Here, we present a memory-free algorithm, based on the incremental SVD technique, that exhibits only linear growth in computational complexity as the number of time steps increases. We prove that the error between the solutions generated by the conventional algorithm and our innovative approach lies within the scope of machine error. Numerical experiments are showcased to affirm the accuracy and efficiency gains in terms of both memory usage and computational expenses.
## 1 Introduction
The non-Fickian flow of fluid in porous media [6] is complicated by the history effect which characterizes various mixing length growth of the flow and can be modeled by an integro-differential equation: Find \(u=u(x,t)\) such that
\[u_{t}+\mathcal{A}u+\int_{0}^{t}K(t-s)\mathcal{B}u(s)ds =f(x,t),\quad\text{in }\Omega\times(0,T]\,, \tag{1.1a}\] \[u =0,\quad\quad\quad\quad\text{on }\partial\Omega\times(0,T]\,,\] (1.1b) \[u(x,0) =u_{0}(x),\quad\text{in }\Omega, \tag{1.1c}\]
where \(\Omega\in\mathbb{R}^{d}(d=1,2,3)\) is a bounded convex polygonal domain with Lipschitz boundary \(\partial\Omega\), \(u_{0}\) is a given function defined on \(\Omega\), \(K(t)\) is a nonnegative memory kernel, \(f(x,t)\) is a known function, \(\mathcal{A}\) is a symmetric positive definite second-order elliptic operator and is of the form:
\[\mathcal{A}=-\sum_{i,j=1}^{d}\frac{\partial}{\partial x_{i}}(a_{ij}(x)\frac{ \partial}{\partial x_{j}})+a(x)I,a(x)\geq 0,\]
\[a_{ij}(x)=a_{ji}(x),i,j=1,\cdots,d,\quad a_{1}\sum_{i=1}^{d}\xi_{i}^{2}\geq \sum_{i,j=1}^{d}a_{ij}\xi_{i}\xi_{j}\geq a_{0}\sum_{i=1}^{d}\xi_{i}^{2},\quad a _{1}\geq a_{0}>0.\]
The operator \(\mathcal{B}\) is any second-order linear operator and takes the following form:
\[\mathcal{B}=-\sum_{i,j=1}^{d}\frac{\partial}{\partial x_{i}}(b_{ij}(x)\frac{ \partial}{\partial x_{j}})+\sum_{i=1}^{d}b_{i}(x)\frac{\partial}{\partial x_{i} }+b(x)I.\]
Numerous numerical approaches have been put forth to address the problem (1.1). Among the array of computational techniques available, finite difference for time discretization and Galerkin finite element for spatial discretization have gained significant prominence. Time discretization methods encompass strategies rooted in backward Euler, Crank-Nicolson, as well as their hybrid variants. As for spatial discretization, a range of methodologies are employed, including conventional finite element methods [1, 3, 16, 17, 19, 29, 21], mixed finite element methods [6, 13, 14, 26], finite volume method [25] and discontinuous Galerkin methods [22]. For further information, refer to [2, 9, 15, 20, 27, 28] and the citations therein.
One challenge faced by these numerical schemes is the need to store all preceding time numerical solutions when computing the solution at the next step. The storage requirement increases linearly while the computational complexity grows quadratically with the number of time steps, see more details in Section 2. This poses a substantial challenge for numerical simulations. To address this issue, we adopt the incremental singular value decomposition (SVD) algorithm to compress the data while simultaneously solving the equation (1.1).
The incremental SVD, initially introduced by Brand [4], provides an efficient approach to compute the SVD of a low rank matrix. This method begins by initializing the incremental SVD using a small dataset, subsequently updating it as new data becomes accessible. This technique finds diverse applications, encompassing tasks such as recommender systems [10], proper orthogonal decomposition [7, 8], dynamic mode decomposition [12] and visual tracking [23]. The algorithm necessitates the computation of thousands or even millions of orthogonal matrices, which are subsequently multiplied together. Nonetheless, these multiplications have the potential to destroy the orthogonality. Consequently, many reorthogonalizations are imperative in practical implementation. Brand addressed this matter in [5], noting, "It is an open question how often this is necessary to guarantee a certain overall level of numerical precision; it does not change the overall complexity." A subsequent work [30] provided a response to this query, suggesting a method to mitigate the need for extensive orthogonal matrix computations and thereby eliminating the necessity for re-orthogonalizations. Moreover, they demonstrated that this modification does not adversely impact the algorithm's outcomes.
Our approach is to simultaneously solve the integro-differential equation (1.1) and compress the solution data using the incremental SVD method. The incremental SVD algorithm can be easily used in conjunction with a time stepping code for simulating equation (1.1). This approach enables storing solution data in several smaller matrices, alleviating the need for a huge dense matrix as often seen in traditional methods. Consequently, by presuming that the solution data demonstrates an approximate low-rank characteristic, we are able to address the issue of data storage in solving the integro-differential equation (1.1).
The subsequent sections of this paper are thoughtfully structured as follows. In Section 2, we provide an overview of the standard finite element method employed for spatial discretization, coupled with the back Euler method utilized for time discretization in order to tackle the integro-differential equation (1.1). We demonstrate that the storage requirement increases linearly, while the computational complexity grows quadratically with the number of time steps \(n\) in this standard approach. Moving on to Section 3, we delve into a review of the improved incremental SVD algorithm introduced in [30]. Within Section 4, we present a novel algorithm for resolving the integro-differential equation (1.1), leveraging the improved incremental SVD approach. We es
tablish that our novel algorithm maintains memory efficiency under the premise of low-rank data, and it demonstrates exclusively linear growth in computational complexity with the progression of time steps. In Section 5, we present a rigorous error analysis for our approach, demonstrating that the convergence rates are equivalent to traditional methods. The numerical experiments in Section 6 further confirm the efficiency of our new algorithm in terms of both memory usage and computational performance. Finally, we discuss potential future work in the conclusion.
## 2 The finite element method for integro-differential euqations
In this section, our objective is to present the finite element method for integro-differential equations.
Let \(\mathcal{T}_{h}\) represent a collection of regular simplices \(K\) that partition the domain \(\Omega\), and \(\mathcal{P}_{k}(K)\) (\(k\geq 1\)) denote the polynomial space defined on the element \(K\) with a maximum degree of \(k\). Utilizing the triangulation \(\mathcal{T}_{h}\), we can define the continuous piecewise finite element space \(V_{h}\) as follows:
\[V_{h}=\left\{v_{h}\in H_{0}^{1}(\Omega):v_{h}|_{K}\in\mathcal{P}_{k}(K),\forall K \in\mathcal{T}_{h}\right\}.\]
The semidiscrete Galerkin scheme for the integro-differential equation (1.1) can be expressed as follows: seek \(u_{h}(t)\in V_{h}\) such that
\[(u_{h,t},v_{h})+\mathscr{A}(u_{h},v_{h})+\int_{0}^{t}K(t-s) \mathscr{B}(u_{h}(s),v_{h})ds=(f,v_{h}),\quad\forall v_{h}\in V_{h}, \tag{2.1a}\] \[u_{h}(0)=u_{h}^{0}\in V_{h}, \tag{2.1b}\]
where \(u_{h}^{0}\) corresponds to the projection of \(u_{0}\) onto the space \(V_{h}\), \(u_{h,t}\) denotes the time derivative of \(u_{h}\), while \(\mathscr{A}(\cdot,\cdot)\), \(\mathscr{B}(\cdot,\cdot)\) represent the bilinear forms associated with the operator \(\mathcal{A}\) and \(\mathcal{B}\), defined on \(H_{0}^{1}(\Omega)\times H_{0}^{1}(\Omega)\), and takes the following forms:
\[\mathscr{A}(u,v)=\sum_{i,j=1}^{d}\left(a_{ij}(x)\frac{\partial u }{\partial x_{i}},\frac{\partial v}{\partial x_{j}}\right)+(a(x)u,v),\] \[\mathscr{B}(u,v)=\sum_{i,j=1}^{d}\left(b_{ij}(x)\frac{\partial u }{\partial x_{i}},\frac{\partial v}{\partial x_{j}}\right)+\sum_{i=1}^{d}(b_ {i}(x)\frac{\partial u}{\partial x_{i}},v)+(b(x)u,v).\]
Here, \((\cdot,\cdot)\) represents the inner product in \(L^{2}(\Omega)\).
In the discretization of the time domain, we employ the backward Euler method. To achieve this, we select a time step size \(\Delta t\) and uniformly partition the time domain \([0,T]\) into \(n\) steps, where \(T=n\Delta t\). We denote \(t_{i}=i\Delta t\) and utilize numerical quadrature as follows:
\[\sum_{j=0}^{i}\omega_{i+1,j}g(t_{j})\approx\int_{0}^{t_{i+1}}K(t_{i+1}-s)g(s)ds,\]
where \(\omega_{i+1,j}=\Delta t\cdot K(t_{i+1-j})\) for \(j=0,1,\ldots,i\).
Then, the fully discrete scheme is expressed as follows: given \(u_{h}^{0}\in V_{h}\), we aim to find \(u_{h}^{i+1}\in V_{h}\) for all \(i=0,1,\ldots,n-1\), satisfying
\[\left(\frac{u_{h}^{i+1}-u_{h}^{i}}{\Delta t},v_{h}\right)+\mathscr{A}(u_{h}^ {i+1},v_{h})+\sum_{j=0}^{i}\omega_{i+1,j}\mathscr{B}(u_{h}^{j},v_{h})=(f^{i+1 },v_{h}),\forall v_{h}\in V_{h}, \tag{2.2}\]
which is equivalent to
\[(u_{h}^{i+1},v_{h})+\Delta t\mathscr{A}(u_{h}^{i+1},v_{h})=(u_{h}^{i},v_{h})- \Delta t\sum_{j=0}^{i}\omega_{i+1,j}\mathscr{B}(u_{h}^{j},v_{h})+\Delta t(f^{i+1 },v_{h}), \tag{2.3}\]
for all \(v_{h}\in V_{h}\) and \(f^{i+1}\) denotes the value of the function \(f\) at time \(t_{i+1}\).
Next, we assume that \(V_{h}=\operatorname{span}\left\{\phi_{1},\cdots,\phi_{m}\right\}\), and we define the matrices \(M\), \(A\), \(B\) and vector \(b_{i+1}\) as follows:
\[\begin{split}& M_{ij}=(\phi_{i},\phi_{j}),\quad A_{ij}=\sum_{k, \ell=1}^{d}\left(a_{k\ell}(x)\frac{\partial\phi_{i}}{\partial x_{k}},\frac{ \partial\phi_{j}}{\partial x_{\ell}}\right)+(a(x)\phi_{i},\phi_{j}),\quad(b_ {i+1})_{j}=(f^{i+1},\phi_{j}),\\ & B_{ij}=\sum_{k,\ell=1}^{d}\left(b_{k\ell}(x)\frac{\partial \phi_{i}}{\partial x_{k}},\frac{\partial\phi_{j}}{\partial x_{\ell}}\right)+ \sum_{k=1}^{d}(b_{k}(x)\frac{\partial\phi_{i}}{\partial x_{k}},\phi_{j})+(b(x )\phi_{i},\phi_{j}),\end{split} \tag{2.4}\]
where \(X_{ij}\) represents the element of row \(i\) and column \(j\) of matrix \(X\), and \((\alpha)_{j}\) denotes the \(j\)-th component of the vector \(\alpha\). Let \(u_{i+1}\) be the coefficients of \(u_{h}^{i+1}\) at time \(t_{i+1}\), given by:
\[u_{h}^{i+1}=\sum_{j=1}^{m}(u_{i+1})_{j}\phi_{j}. \tag{2.5}\]
Substituting (2.4) and (2.5) into (2.3), we obtain the following algebraic system:
\[(M+\Delta tA)u_{i+1}=Mu_{i}-\Delta tB\sum_{j=0}^{i}\omega_{i+1,j}u_{j}+\Delta tb _{i+1}. \tag{2.6}\]
By solving the above algebraic system, we can obtain the solution to (2.3). The algorithm is presented in Algorithm 1.
```
0:\(\Delta t\), \(M\in\mathbb{R}^{m\times m}\), \(A\in\mathbb{R}^{m\times m}\),\(B\in\mathbb{R}^{m\times m}\), \(u_{0}\)
1: Set \(\widetilde{A}=M+\Delta tA\), \(U=\texttt{zeros}(m,n+1)\), \(U(:,1)=u_{0}\)
2:for\(i=0\) to \(n-1\)do
3: Compute the weights \(\omega_{i+1,j}(j=0,1,\cdots,i)\) and get the load vector \(b_{i+1}\)
4:\(\widetilde{b}_{i+1}=MU(:,i+1)-\Delta tB\sum_{j=0}^{i}\omega_{i+1,j}U(:,j+1)+ \Delta tb_{i+1}\)
5: Solve \(\widetilde{A}u_{i+1}=\widetilde{b}_{i+1}\)
6:\(U(:,i+2)=u_{i+1}\)
7:endfor
8:\(u_{n}\)
```
**Algorithm 1** (Finite element and backward Euler method for solving equation (1.1))
However, it becomes evident that solving the algebraic system (2.6) poses a significant computational challenge. To compute the numerical solution \(u_{h}^{i+1}\), all the preceding time numerical solutions \(\{u_{h}^{j}\}_{j=0}^{i}\) must be available. As the coefficients \(\{\omega_{i+1,j}\}_{j=0}^{i}\) change with each time step \(i\), it becomes necessary to store all the numerical solutions \(\{u_{h}^{j}\}_{j=0}^{i}\). The storage cost of the history term is
\[\mathcal{O}(mn), \tag{2.7}\]
and the computational cost is:
\[\sum_{i=1}^{n}\sum_{j=0}^{i}\mathcal{O}(m)=\mathcal{O}(mn^{2}). \tag{2.8}\]
Consequently, the storage requirement increases linearly while the computational complexity grows quadratically with the number of time steps \(n\). This poses a substantial challenge for numerical simulations.
Fortunately, the memory and computational complexity issue mentioned above can be addressed with the help of the incremental singular value decomposition (SVD) method, provided we make the assumption that the solution data exhibits approximate low rank.
## 3 The incremental SVD method
We begin by introducing several crucial definitions and concepts that are essential for understanding the incremental SVD method. Given a vector \(u\in\mathbb{R}^{m}\) and an integer \(r\) satisfying \(r\leq m\), the notation \(u(1:r)\) denotes the first \(r\) components of \(u\). Likewise, for a matrix \(U\in\mathbb{R}^{m\times n}\), we use the notation \(U(p:q,r:s)\) to refer to the submatrix of \(U\) that encompasses the entries from rows \(p\) to \(q\) and columns \(r\) to \(s\).
Throughout this section, we make the assumption that the rank of the matrix \(U\in\mathbb{R}^{m\times n}\) is low, specifically denoted by \(\texttt{rank}(U)\ll\min\{m,n\}\).
Next, we present an improved version of Brand's incremental SVD algorithm from [30]. The algorithm updates the SVD of a matrix when one or more columns are added to the matrix. We split the process into four steps.
#### Step 1: Initialization
Assuming that the first column of matrix \(U\), denoted as \(u_{1}\), is non-zero, we can proceed to initialize the SVD of \(u_{1}\) using the following approach:
\[\Sigma=(u_{1}^{\top}u_{1})^{1/2},\quad Q=u_{1}\Sigma^{-1},\quad R=1.\]
The algorithm is shown in Algorithm 2.
```
0:\(u_{1}\in\mathbb{R}^{m}\)
1:\(\Sigma=(u_{1}^{\top}u_{1})^{1/2};\quad Q=u_{1}\Sigma^{-1};\quad R=1\)
2:\(Q,\Sigma,R\)
```
**Algorithm 2** (Initialize ISVD)
Assuming we already have the truncated SVD of rank \(k\) for the first \(\ell\) columns of matrix \(U\), denoted as \(U_{\ell}\):
\[U_{\ell}\approx Q\Sigma R^{\top},\quad\text{with}\quad Q^{\top}Q=I_{k},\quad R ^{\top}R=I_{k},\quad\Sigma=\texttt{diag}(\sigma_{1},\cdots,\sigma_{k}), \tag{3.1}\]
where \(\Sigma\in\mathbb{R}^{k\times k}\) is a diagonal matrix with the \(k\) ordered singular values of \(U_{\ell}\) on the diagonal, \(Q\in\mathbb{R}^{m\times k}\) is the matrix of the corresponding \(k\) left singular vectors of \(U_{\ell}\) and \(R\in\mathbb{R}^{\ell\times k}\) is the matrix of the corresponding \(k\) right singular vectors of \(U_{\ell}\).
Given our assumption that the matrix \(U\) is low rank, it is reasonable to expect that most of the columns of \(U\) are either linearly dependent or nearly linearly dependent on the vectors in \(Q\in\mathbb{R}^{m\times k}\).
Without loss of generality, we assume that the next \(s\) vectors, denoted as \(\{u_{\ell+1},\ldots,u_{\ell+s}\}\), their residuals are less than a specified tolerance when projected onto the subspace spanned by the columns of \(Q\). However, the residual of \(u_{\ell+s+1}\) is larger than the given tolerance. In other words,
\[|u_{i}-QQ^{\top}u_{i}| <\texttt{tol},\quad i=\ell+1,\cdots,\ell+s, \tag{3.2a}\] \[|u_{i}-QQ^{\top}u_{i}| \geq\texttt{tol},\quad i=\ell+s+1. \tag{3.2b}\]
#### Step 2: Update the SVD of \(U_{\ell+s}\) (\(p\)-truncation)
By the assumption (3.2a), we have
\[U_{\ell+s} =[U_{\ell}\mid u_{\ell+1}\mid\cdots\mid u_{\ell+s}]\] \[\approx\Big{[}Q\Sigma R^{\top}\mid u_{\ell+1}\mid\cdots\mid u_{ \ell+s}\Big{]}\] \[\approx\Big{[}Q\Sigma R^{\top}\mid QQ^{\top}u_{\ell+1}\mid\cdots \mid QQ^{\top}u_{\ell+s}\Big{]}\] \[=Q\underbrace{\Big{[}\Sigma\mid Q^{\top}u_{\ell+1}\mid\cdots\mid Q ^{\top}u_{\ell+s}\Big{]}}_{Y}\left[\begin{array}{cc}R&0\\ 0&I_{s}\end{array}\right]^{\top}.\]
We can obtain the truncated SVD of \(U_{\ell+s}\) by computing the full SVD of the matrix \(Y\). Specifically, let \(Y=Q_{Y}\Sigma_{Y}R_{Y}^{\top}\) be the SVD of \(Y\), and split \(R_{Y}\) into \(\left[\begin{array}{c}R_{Y}^{(1)}\\ R_{Y}^{(2)}\end{array}\right]\). With this, we can update the SVD of \(U_{\ell+s}\) as follows:
\[Q\gets QQ_{Y},\quad\Sigma\leftarrow\Sigma_{Y},\quad R\leftarrow\left[ \begin{array}{c}RR_{Y}^{(1)}\\ R_{Y}^{(2)}\end{array}\right]\in\mathbb{R}^{(\ell+s)\times\ell}.\]
It is worth noting that the dimension of the matrices \(Q\) and \(\Sigma\) remains unchanged, and we need to incrementally store the matrix \(W=\left[Q^{\top}u_{\ell+1}\mid\cdots\mid Q^{\top}u_{\ell+s}\right]\). As \(W\) belongs to \(\mathbb{R}^{k\times s}\) where \(k\leq r\) is relatively small, the storage cost for this matrix is low.
#### Step 3: Update the SVD of \(U_{\ell+s+1}\) (No truncation)
Next, we proceed with the update of the SVD for \(U_{\ell+s+1}\). Firstly, we compute the residual vector of \(u_{\ell+s+1}\) by projecting it onto the subspace spanned by the columns of \(Q\), i.e.,
\[e=u_{\ell+s+1}-QQ^{\top}u_{\ell+s+1}. \tag{3.3}\]
First, we define \(p=\|e\|\). Then, based on (3.2b), we deduce that \(p>\texttt{tol}\). Finally, we denote \(\widetilde{e}\) as \(e/p\). With these definitions, we establish the following fundamental identity:
\[U_{\ell+s+1} =[U_{\ell+s}\mid u_{\ell+s+1}]\] \[\approx\Big{[}Q\Sigma R^{\top}\mid p\widetilde{e}+QQ^{\top}u_{ \ell+s+1}\Big{]}\] \[\approx[Q\mid\widetilde{e}]\underbrace{\left[\begin{array}{cc} \Sigma&Q^{\top}u_{\ell+s+1}\\ 0&p\end{array}\right]}_{\widetilde{Y}}\left[\begin{array}{cc}R&0\\ 0&1\end{array}\right]^{\top},\]
Let \(\bar{Q}\bar{\Sigma}\bar{R}^{\top}\) be the full SVD of \(\bar{Y}\). Then the SVD of \(U_{\ell+s+1}\) can be approximated by
\[U_{\ell+s+1}\approx([Q\mid\vec{e}]\,\bar{Q})\bar{\Sigma}\left(\left[\begin{array} []{cc}R&0\\ 0&1\end{array}\right]\bar{R}\right)^{\top}.\]
With this, we can update the SVD of \(U_{\ell+s+1}\) as follows:
\[Q\leftarrow([Q\mid\vec{e}])\bar{Q},\quad\Sigma\leftarrow\bar{\Sigma},\quad R \leftarrow\left[\begin{array}{cc}R&0\\ 0&1\end{array}\right]\bar{R}.\]
It is worth noting that, in this case, the dimensions of the matrices \(Q\) and \(\Sigma\) increase.
**Remark 1**.: Theoretically, the residual vector \(e\) in (3.3) is orthogonal to the vectors in the subspace spanned by the columns of \(Q\). However, in practice, this orthogonality can be completely lost, a fact that has been confirmed by numerous numerical experiments [7, 8, 18]. In [11], Giraud et al. stressed that exactly two iteration-steps are enough to keep the orthogonality. To reduce computational costs, Zhang [30] suggested using the two iteration steps only when the inner product between \(e\) and the first column of \(Q\) exceeds a certain tolerance. Drawing from our experience, it is imperative to calibrate this tolerance to align closely with the machine error. For instance, as demonstrated in this paper, we consistently establish this tolerance as \(10^{-14}\).
### Step 4: Singular value truncation
For many PDE data sets, they may have a large number of nonzero singular values but most of them are very small. Considering the computational cost involved in retaining all of these singular values, it becomes necessary to perform singular value truncation. This involves discarding the last few singular values if they fall below a certain tolerance threshold.
**Lemma 1**.: [30, Lemma 5.1] Assume that \(\Sigma=\operatorname{diag}\left(\sigma_{1},\sigma_{2},\ldots,\sigma_{k}\right)\) with \(\sigma_{1}\geq\sigma_{2}\geq\ldots\geq\sigma_{k}\), and \(\bar{\Sigma}=\operatorname{diag}\left(\mu_{1},\mu_{2},\ldots,\mu_{k+1}\right)\) with \(\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{k+1}\). Then we have
\[\mu_{k+1} \leq p, \tag{3.4}\] \[\mu_{k+1} \leq\sigma_{k}\leq\mu_{k}\leq\sigma_{k-1}\leq\ldots\leq\sigma_{1 }\leq\mu_{1}. \tag{3.5}\]
The inequality (3.4) indicates that, regardless of the magnitude of \(p\), the last singular value of \(\bar{Y}\) can potentially be very small. This implies that the tolerance set for \(p\) cannot prevent the algorithm from computing exceedingly small singular values. Consequently, an additional truncation is necessary when the data contains numerous very small singular values. Fortunately, inequality (3.5) assures us that only the last singular value of \(\bar{Y}\) has the possibility of being less than the tolerance. Therefore, it suffices to examine only the last singular value.
1. If \(\Sigma_{Y}(k+1,k+1)\geq\mathtt{tol}\), then \[Q\longleftarrow[Q\mid\vec{e}]Q_{Y},\quad\Sigma\longleftarrow\Sigma_{Y}, \quad R\longleftarrow\left[\begin{array}{cc}R&0\\ 0&1\end{array}\right]R_{Y}.\]
2. If \(\Sigma_{Y}(k+1,k+1)<\mathtt{tol}\), then \[Q\longleftarrow[Q\mid\vec{e}]Q_{Y}(:,1:k),\quad\Sigma\longleftarrow\Sigma_{Y} (1:k,1:k),\quad R\longleftarrow\left[\begin{array}{cc}R&0\\ 0&1\end{array}\right]R_{Y}(:,1:k).\]
It is essential to note that \(p\)-truncation and no-truncation do not alter the previous data, whereas singular value truncation may potentially change the entire previous data. However, we can establish the following bound:
**Lemma 2**.: Suppose \(Q\Sigma R^{\top}\) is the SVD of \(A\in\mathbb{R}^{m\times n}\), where \(\{\sigma_{i}\}_{i=1}^{r}\) are the positive singular values. Let \(B=Q(:,1:r-1)\Sigma(1:r-1,1:r-1)(R(:,1:r-1))^{\top}\). We have:
\[\max\{|a_{1}-b_{1}|,|a_{2}-b_{2}|,\ldots,|a_{n}-b_{n}|\}\leq\sigma_{r}.\]
Here, \(a_{i}\) and \(b_{i}\) correspond to the \(i\)-th columns present in matrices \(A\) and \(B\) respectively. The symbol \(|\cdot|\) denotes the Euclidean norm within the realm of \(\mathbb{R}^{m}\).
The proof of Lemma 2 is straightforward, and thus we omit it here. Moving forward, we will provide a summary of the aforementioned four steps in Algorithm 3.
```
0:\(Q\in\mathbb{R}^{m\times k},\Sigma\in\mathbb{R}^{k\times k},R\in\mathbb{R}^{ \ell\times k},\texttt{tol},W,Q_{0},q,u_{\ell+1}\),
1: Set \(d=Q^{\top}u_{\ell+1};e=u_{\ell+1}-Qd;p=(e^{\top}e)^{1/2}\);
2:if\(p\geq\texttt{tol}\)then
3:if\(q>0\)then
4: Set \(Y=[\Sigma\mid\texttt{cell2mat}(W)]\); \([Q_{Y},\Sigma_{Y},R_{Y}]=\texttt{svd}(Y,{}^{\prime}\texttt{econ}^{\prime})\);
5: Set \(Q_{0}=Q_{0}Q_{Y},\Sigma=\Sigma_{Y},R_{1}=R_{Y(1:k,:)},R_{2}=R_{Y(k+1:\texttt{ end},:)},R=\left[\begin{array}{c}RR_{1}\\ R_{2}\end{array}\right]\); \(d=Q_{Y}^{\top}d\);
6:endif
7: Set \(Y=\left[\begin{array}{cc}\Sigma&d\\ 0&p\end{array}\right]\); \([Q_{Y},\Sigma_{Y},R_{Y}]=\texttt{svd}(Y)\); \(e=e/p\);
8:if\(|e^{\top}Q(:,1)|>10^{-14}\)then
9:\(e=e-Q(Q^{\top}e);p_{1}=(e^{\top}e)^{1/2};e=e/p_{1}\);
10:endif
11: Set \(Q_{0}=\left[\begin{array}{cc}Q_{0}&0\\ 0&1\end{array}\right]Q_{Y}\);
12:if\(\Sigma_{Y}(k+1,k+1)\geq\texttt{tol}\)then
13:\(Q=[Q\mid e]Q_{0},\quad\Sigma=\Sigma_{Y},\quad R=\left[\begin{array}{cc}R&0 \\ 0&1\end{array}\right]R_{Y},\quad Q_{0}=I_{k+1}\);
14:else
15:\(Q=[Q\mid e]Q_{0}(:,1:k),\Sigma=\Sigma_{Y}(1:k,1:k),R=\left[\begin{array}{ cc}R&0\\ 0&1\end{array}\right]R_{Y}(:,1:k),\quad Q_{0}=I_{k}\);
16:endif
17:\(W=\left[\right];q=0\)
18:else
19:\(q=q+1\); \(W\left\{q\right\}=d\);
20:endif
21:\(Q,\Sigma,R,W,Q_{0},q\)
```
**Algorithm 3** (Update ISVD)
**Remark 2**.: The set \(W\), which is one of the outputs of Algorithm 3, has the potential to be non-empty. Hence, the output of Algorithm 3 does not represent the SVD of \(U_{\ell+1}\). Therefore, it becomes essential to check whether \(W\) is empty. If \(W\) is not empty, we proceed to update the SVD for the vectors contained in \(W\). For additional details, please refer to [30]; however, as it is not required for the integro-differential equation, we have excluded it here.
It is worth mentioning that even though the output of Algorithm 3 may not represent the SVD of \(U_{\ell+1}\), \(Q[\Sigma R\mid W]\) serves as an approximation of \(U_{\ell+1}\). We will utilize this approximation in the next section, as it is crucial for the computation of the equation (1.1).
## 4 Incremental SVD method for the integro-differential equation
This section focuses on the application of the incremental SVD algorithm to the equation (1.1).
Throughout this section, we make the assumption that the solution data of equation (1.1) exhibits approximate low rank.
Our approach is to simultaneously solve the integro-differential equation and incrementally update the SVD of the solution. By doing so, we store the solutions at all time steps in the four matrices of the SVD. As a result, we are able to address the issue of data storage in solving the integro-differential equation (1.1).
Due to the errors that may arise during the data compression process and the potential alterations caused by singular value truncation to previous storage, it becomes necessary for us to modify the traditional scheme (2.2). Below, we provide a brief discussion of our approach.
1. Use the initial condition \(u_{h}^{0}\) to compute the numerical solution at time step 1, which follows the traditional approach. However, we use \(\widehat{u}_{h}^{1}\) to denote the numerical solution for consistency. Once we obtain \(\widehat{u}_{h}^{1}\), we compress \(\{u_{h}^{0},\widehat{u}_{h}^{1}\}\) and denote the corresponding compressed data as \(\{\widetilde{u}_{h}^{1,0},\widetilde{u}_{h}^{1,1}\}\).
2. Use the compressed data \(\{\widetilde{u}_{h}^{1,0},\widetilde{u}_{h}^{1,1}\}\) to compute the numerical solution at time step 2, denoted by \(\widehat{u}_{h}^{2}\). Once we obtain \(\widehat{u}_{h}^{2}\), we compress \(\{\widetilde{u}_{h}^{1,0},\widetilde{u}_{h}^{1,1}\}\) and \(\widehat{u}_{h}^{2}\), and denote the corresponding compressed data as \(\{\widetilde{u}_{h}^{2,0},\widetilde{u}_{h}^{2,1},\widetilde{u}_{h}^{2,2}\}\).
3. At time step \(i\), given the compressed data \(\{\widetilde{u}_{h}^{i,j}\}_{j=0}^{i}\), we compute the numerical solution at time step \(i+1\), denoted by \(\widehat{u}_{h}^{i+1}\). We then compress \(\{\widetilde{u}_{h}^{i,j}\}_{j=0}^{i}\) and \(\widehat{u}_{h}^{i+1}\), and denote the corresponding compressed data as \(\{\widetilde{u}_{h}^{i+1,j}\}_{j=0}^{i+1}\).
4. Continue the above process until we reach the final time step.
In summary, we apply our novel approach by incrementally compressing data at each time step to compute the numerical solutions throughout the process.
Based on the preceding discussion, we can present our formulation below, where we seek \(\widehat{u}_{h}^{i+1}\in V_{h}\) that satisfies the following equation:
\[\left(\frac{\widehat{u}_{h}^{i+1}-\widehat{u}_{h}^{i}}{\Delta t},v_{h}\right) +\mathscr{A}(\widehat{u}_{h}^{i+1},v_{h})+\sum_{j=0}^{i}\omega_{i+1,j} \mathscr{B}(\widetilde{u}_{h}^{i,j},v_{h})=(f^{i+1},v_{h})\quad\forall v_{h} \in V_{h}. \tag{4.1}\]
Subsequently, we express equation (4.1) into matrix form to highlight the benefits of utilizing the incremental SVD for solving the integro-differential equation more distinctly. To do this, let \(\widehat{u}_{i+1}\) and \(\widetilde{u}_{i,j}\) denote the coefficient of \(\widehat{u}_{h}^{i+1}\) and \(\widetilde{u}_{h}^{i,j}\), respectively. Then we seek a solution \(\widehat{u}_{i+1}\in\mathbb{R}^{m}\) that satisfies the following equation:
\[(M+\Delta tA)\widehat{u}_{i+1}=M\widehat{u}_{i}-\Delta tB\sum_{j=0}^{i} \omega_{i+1,j}\widetilde{u}_{i,j}+\Delta tb_{i+1}. \tag{4.2}\]
Here, \(\{\widetilde{u}_{i,j}\}_{j=0}^{i}\) represents the data that has been compressed from \(\{\widetilde{u}_{i-1,0},\ldots,\widetilde{u}_{i-1,i-1},\widehat{u}_{i}\}\) using the incremental SVD algorithm. We assume that \(Q_{i}\), \(\Sigma_{i}\), \(R_{i}\), and \(W_{i}\) are the matrices associated with this compression process. In other words,
\[[\widetilde{u}_{i-1,0}\mid\cdots\mid\widetilde{u}_{i-1,i-1}\mid\widehat{u}_{i} ]\quad\stackrel{{\text{Compress}}}{{\rightarrow}}\quad Q_{i}[ \Sigma_{i}R_{i}\mid W_{i}]^{\top}=[\widetilde{u}_{i,0}\mid\cdots\mid\widetilde{ u}_{i,i}]. \tag{4.3}\]
Let \([\Sigma_{i}R_{i}\mid W_{i}]^{\top}\) be denoted as \(X_{i}\). Accordingly, equation (4.2) can be rewritten as follows:
\[(M+\Delta tA)\widehat{u}_{i+1}=M\widehat{u}_{i}-\Delta tBQ_{i}\sum_{j=0}^{i} \omega_{i+1,j}X_{i}(:,j)+\Delta tb_{i}. \tag{4.4}\]
Once \(\widehat{u}_{i+1}\) is obtained, we update the SVD of \([\widetilde{u}_{i,0}\mid\cdots\mid\widetilde{u}_{i,i}\mid\widehat{u}_{i+1}]\) using \(Q_{i}\), \(\Sigma_{i}\), \(R_{i}\), \(W_{i}\), and \(\widehat{u}_{i+1}\) based on the principles of the incremental SVD method. This update process is illustrated in Figure 1.
Throughout the remainder of this section, we will examine the memory and computational cost pertaining to the history term in our novel approach. Our data storage involves four matrices: \(Q_{i}\), \(\Sigma_{i}\), \(R_{i}\), and \(W_{i}\), resulting in a memory cost of \(\mathcal{O}((m+n)r)\), where \(r\) represents the rank of the solution data. By taking into account our assumption that \(r\ll\min\{m,n\}\), we can compare this memory cost with that of the traditional approach presented in (2.7), which illuminates a noteworthy reduction in our innovative method.
Moving on to the computational cost, which also encompasses the cost of the incremental SVD, it can be summarized as follows:
\[\mathcal{O}(mnr)+\sum_{i=1}^{n}\sum_{j=0}^{i}\mathcal{O}(r)=\mathcal{O}(mnr+ rn^{2}).\]
Here, once again, \(r\) represents the rank of the solution data. Based on our assumption that \(r\ll\min\{m,n\}\), we can compare the computational cost in (2.8) to that of the traditional approach, revealing that our approach experiences only linear growth, rather than quadratic growth as observed in the traditional approach.
## 5 Error estimate
In this section, we derive the error between the solution of the scheme (4.1) and the exact solution that satisfies the equation (1.1).
Figure 1: The process of using the incremental SVD to solve the integro-differential equation.
### Assumptions and Main Result
We assume throughout that \(\Omega\) is a bounded convex polyhedral domain, the data of (1.1) satisfies the following conditions:
1. Let \(\left\|\psi\right\|_{a}^{2}=\mathscr{A}(\psi,\psi)\) for \(\psi\in H_{0}^{1}(\Omega)\). \(\left\|\cdot\right\|_{a}\) is a norm and is equivalent to \(\left\|\cdot\right\|_{1}\) on \(H_{0}^{1}(\Omega)\). There exists \(c_{0}>0\) such that \[\left|\mathscr{B}(u,v)\right|\leq c_{0}\lambda_{0}^{\beta/2-1}\left\|u\right\|_ {a}\left\|v\right\|_{a},\quad\forall u,v\in H_{0}^{1}(\Omega),\] where \(\beta\) is the order of the operator \(B\), and \(\lambda_{0}=\lambda_{0}(\Omega,\mathcal{A})>0\) is the first eigenvalue of the elliptic problem \[\mathcal{A}\phi-\lambda\phi=0,\;x\in\Omega\quad\text{and}\quad\phi=0,\;x\in \partial\Omega.\]
2. Let \(K_{0}=\int_{0}^{T}K(t)dt\), the following inequality holds: \[c_{0}K_{0}\lambda_{0}^{\beta/2-1}<1.\]
3. Let \(\mu=\max\left\{\mu_{0}\right\}\), where \(\mu_{0}\) are the solutions of \[1-c_{0}K_{\mu_{0}}\lambda_{0}^{\beta/2-1}\geq\frac{\mu_{0}}{\lambda_{0}},\quad 0 <\mu_{0}<\lambda_{0},\] where \(K_{\mu_{0}}=\int_{0}^{T}e^{\mu_{0}t}K(t)dt\).
* Let \[K_{\mu}^{\Delta t}=\max_{1\leq i\leq n}\sum_{j=0}^{i}\omega_{i+1,j}e^{\mu(t_{i}-t_ {j})}\] for any \(0<\mu_{0}\) small, there exists a small \(T_{0}>0\) such that \(0<\Delta t\leq T_{0}\), \[1-c_{0}\lambda_{0}^{\beta/2-1}K_{\mu_{0}}^{\Delta t}e^{\mu_{0}\Delta t}\geq\frac {\mu_{0}}{\lambda_{0}}e^{\mu_{0}\Delta t}.\]
**Remark 3**.: It is worth noting that conditions (A1)-(A3) are imposed to ensure the dominance of the operator \(\mathcal{A}\) over the integral term. The presence of \(\mu>0\) in (A3) is a consequence of (A2). These same assumptions were employed in the work of [1] to establish the stability of the standard finite element method.
Now, we state the main result of our paper.
**Theorem 1**.: Let \(u\) and \(\widehat{u}_{h}^{n}\) denote the solution of (1.1) and (4.1), respectively. Throughout the entire process of the incremental SVD algorithm, the tolerance tol is applied to both \(p\)-truncation and singular value truncation. Under assumptions (A1) - (A4), if \(u(t)\in H^{k+1}(\Omega)\), then the following error bound holds:
\[\|u(t_{n})-\widehat{u}_{h}^{n}\|\leq C(h^{k+1}+\Delta t)+(T_{sv}+1)\sqrt{T(1+ \gamma^{-1})\sigma(A)}\texttt{tol}, \tag{5.1}\]
where \(C\) and \(\gamma\in(0,2c_{0}^{-1}K_{0}^{-1}\lambda_{0}^{1-\beta/2}-2)\) are two positive constants, independent of \(h\), \(\Delta t\), and tol, and \(\sigma(A)\) represents the spectral radius of the stiffness matrix \(A\), \(T_{sv}\) signifies the total number of times singular value truncation is applied.
### Proof of Theorem 1
We begin by giving an error bound between the solution of the standard finite element method given by equation (2.3) and the solution of the Non-Fickian model (1.1). Additionally, we derive an error bound between the solution of the standard finite element method (2.3) and our novel scheme (4.1). By applying the triangle inequality, we obtain a straightforward error bound between the solution of the Non-Fickian model (1.1) and our novel scheme (4.1).
**Lemma 3**.: [1, Theorem 4.4] Let \(u\) and \(u_{h}^{n}\) be the solution of (1.1) and (2.3) respectively. Assume that the conditions (A1)-(A4) hold and \(\left\|u_{0}-u_{h}^{0}\right\|\leq Ch^{k+1}\left\|u_{0}\right\|_{k+1}\). Then there exists a constant \(C>0\), independent of \(h\) and \(\Delta t\), such that
\[\|u(t_{n})-u_{h}^{n}\|\leq C(h^{k+1}+\Delta t). \tag{5.2}\]
Now, we will proceed to establish the error estimation between the solution of the standard finite element method (2.3) and our novel scheme (4.1).
**Lemma 4**.: Let \(u_{h}^{n}\) and \(\widehat{u}_{h}^{n}\) be the solution of (2.3) and (4.1), respectively. Given assumptions (A1) and (A2), the following error bound is established:
\[\|u_{h}^{n}-\widehat{u}_{h}^{n}\|\leq\sqrt{T(1+\gamma^{-1})}\max_{0\leq i\leq n -1}\max_{0\leq j\leq i}\|\widehat{u}_{h}^{i,j}-\widehat{u}_{h}^{j}\|_{a},\]
where \(\gamma\in(0,2c_{0}^{-1}K_{0}^{-1}\lambda_{0}^{1-\beta/2}-2)\) and \(c_{0},\lambda_{0},K_{0}\) are defined in assumptions (A1)-(A2).
Proof.: Recall that \(u_{h}^{i+1}\) and \(\widehat{u}_{h}^{i+1}\) satisfy the following equations
\[\left(\frac{u_{h}^{i+1}-u_{h}^{i}}{\Delta t},v_{h}\right)+\mathscr{A }(u_{h}^{i+1},v_{h})+\sum_{j=0}^{i}\omega_{i+1,j}\mathscr{B}(u_{h}^{j},v_{h})=(f ^{i+1},v_{h}),\quad\forall v_{h}\in V_{h}, \tag{5.3a}\] \[\left(\frac{\widehat{u}_{h}^{i+1}-\widehat{u}_{h}^{i}}{\Delta t}, v_{h}\right)+\mathscr{A}(\widehat{u}_{h}^{i+1},v_{h})+\sum_{j=0}^{i}\omega_{i+1,j} \mathscr{B}(\widetilde{u}_{h}^{i,j},v_{h})=(f^{i+1},v_{h}),\quad\forall v_{h} \in V_{h}. \tag{5.3b}\]
Subtracting (5.3a) from (5.3b) and introducing two notations: \(\widehat{e}_{i+1}=u_{h}^{i+1}-\widehat{u}_{h}^{i+1}\), and \(\widetilde{e}_{i,j}=u_{h}^{j}-\widehat{u}_{h}^{i,j}\), \(j=0,1,\ldots,i\), we obtain the following equation:
\[\left(\frac{\widehat{e}_{i+1}-\widehat{e}_{i}}{\Delta t},v_{h} \right)+\mathscr{A}(\widehat{e}_{i+1},v_{h})+\sum_{j=0}^{i}\omega_{i+1,j} \mathscr{B}(\widetilde{e}_{i,j},v_{h})=0,\quad\forall v_{h}\in V_{h}. \tag{5.4}\]
Substituting \(v_{h}=\widehat{e}_{i+1}\in V_{h}\) into the above equation and utilizing the identity \(2(a-b,a)=\|a\|^{2}-\|b\|^{2}+\|a-b\|^{2}\), we derive the following equation:
\[(2\Delta t)^{-1}\left(\|\widehat{e}_{i+1}\|^{2}-\|\widehat{e}_{i }\|^{2}+\|\widehat{e}_{i+1}-\widehat{e}_{i}\|^{2}\right)+\|\widehat{e}_{i+1}\| _{a}^{2}=-\sum_{j=0}^{i}\omega_{i+1,j}\mathscr{B}(\widetilde{e}_{i,j}, \widehat{e}_{i+1}).\]
Using Cauchy-Schwarz inequality, assumption (A1), we can deduce the following equation:
\[(2\Delta t)^{-1}\left(\|\widehat{e}_{i+1}\|^{2}-\|\widehat{e}_{i }\|^{2}+\|\widehat{e}_{i+1}-\widehat{e}_{i}\|^{2}\right)+\|\widehat{e}_{i+1}\| _{a}^{2}\] \[\leq\sum_{j=0}^{i}c_{0}\lambda_{0}^{\beta/2-1}\omega_{i+1,j}\,\| \widetilde{e}_{i,j}\|_{a}\,\|\widehat{e}_{i+1}\|_{a}\] \[\leq\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{j=0}^{i}\omega_{i +1,j}\|\widetilde{e}_{i,j}\|_{a}^{2}+\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2} \sum_{j=0}^{i}\omega_{i+1,j}\|\widehat{e}_{i+1}\|_{a}^{2}\] \[\leq\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{j=0}^{i}\omega_{i +1,j}((\|\widetilde{e}_{i,j}-\widehat{e}_{j}\|_{a}+\|\widehat{e}_{j}\|_{a})^{2 }+\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{j=0}^{i}\omega_{i+1,j}\| \widehat{e}_{i+1}\|_{a}^{2}\] \[\leq\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{j=0}^{i}\omega_{i +1,j}((1+\gamma)\|\widehat{e}_{j}\|_{a}^{2}+(1+\gamma^{-1})\|\widetilde{e}_{i,j }-\widehat{e}_{j}\|_{a}^{2})+\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{j=0}^ {i}\omega_{i+1,j}\|\widehat{e}_{i+1}\|_{a}^{2}\]
for some \(\gamma\in(0,1)\). Summing over \(i\) ranging from \(0\) to \(n-1\), we have:
\[(2\Delta t)^{-1}\left(\|\widehat{e}_{n}\|^{2}-\|\widehat{e}_{0}\|^{2 }+\sum_{i=0}^{n-1}\|\widehat{e}_{i+1}-\widehat{e}_{i}\|^{2}\right)+\sum_{i=0}^{ n-1}\|\widehat{e}_{i+1}\|_{a}^{2} \tag{5.5}\] \[\leq\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{i=0}^{n-1}\sum_{j =0}^{i}\omega_{i+1,j}((1+\gamma)\|\widehat{e}_{j}\|_{a}^{2}+(1+\gamma^{-1})\| \widetilde{e}_{i,j}-\widehat{e}_{j}\|_{a}^{2})\] \[\quad+\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{i=0}^{n-1}\sum_ {j=0}^{i}\omega_{i+1,j}\|\widehat{e}_{i+1}\|_{a}^{2}\] \[\leq\frac{c_{0}\lambda_{0}^{\beta/2-1}(1+\gamma)}{2}\sum_{i=0}^{ n-1}\sum_{j=i}^{n-1}\omega_{j+1,i}\|\widehat{e}_{i}\|_{a}^{2}+\frac{c_{0} \lambda_{0}^{\beta/2-1}(1+\gamma^{-1})}{2}\sum_{i=0}^{n-1}\sum_{j=0}^{i} \omega_{i+1,j}\|\widetilde{e}_{i,j}-\widehat{e}_{j}\|_{a}^{2}\] \[\quad+\frac{c_{0}\lambda_{0}^{\beta/2-1}}{2}\sum_{i=0}^{n-1}\sum_ {j=0}^{i}\omega_{i+1,j}\|\widehat{e}_{i+1}\|_{a}^{2}\] \[\leq\frac{2+\gamma}{2}c_{0}\lambda_{0}^{\beta/2-1}K_{0}\sum_{i=0} ^{n-1}\|\widehat{e}_{i+1}\|_{a}^{2}+\frac{c_{0}\lambda_{0}^{\beta/2-1}(1+ \gamma^{-1})}{2}K_{0}\sum_{i=0}^{n-1}\max_{0\leq j\leq i}\|\widetilde{e}_{i,j} -\widehat{e}_{j}\|_{a}^{2}.\]
Using assumption (A2), we can choose \(\gamma\in(0,2c_{0}^{-1}K_{0}^{-1}\lambda_{0}^{1-\beta/2}-2)\) such that
\[\frac{2+\gamma}{2}c_{0}\lambda_{0}^{\beta/2-1}K_{0}<1.\]
Then (5.5) becomes
\[\|\widehat{e}_{n}\|^{2} \leq\|\widehat{e}_{0}\|^{2}+c_{0}K_{0}\lambda_{0}^{\beta/2-1}(1+ \gamma^{-1})T\max_{0\leq i\leq n-1}\max_{0\leq j\leq i}\|\widetilde{e}_{i,j}- \widehat{e}_{j}\|_{a}^{2}\] \[\leq(1+\gamma^{-1})T\max_{0\leq i\leq n-1}\max_{0\leq j\leq i}\| \widetilde{e}_{i,j}-\widehat{e}_{j}\|_{a}^{2}.\]
The proof is finalized by utilizing \(\widehat{e}_{0}=0\) along with the condition \(c_{0}K_{0}\lambda_{0}^{\beta/2-1}<1\) (Assumption A2).
Next we turn to estimate the term \(\max_{0\leq i\leq n-1}\max_{0\leq j\leq i}\|\widetilde{u}_{h}^{i,j}- \widehat{u}_{h}^{j}\|_{a}\). We have the following error bound:
**Lemma 5**.: Let \(\{\widehat{u}_{h}^{i}\}_{i=1}^{n}\) be the solution of (2.3), and let \(\{\widetilde{u}_{h}^{i,j}\}_{j=0}^{i}\) represent the compressed data corresponding to \(\{\widetilde{u}_{h}^{i-1,0},\widetilde{u}_{h}^{i-1,1},\ldots,\widetilde{u}_{h} ^{i-1,i-1},\widetilde{u}_{h}^{i}\}\). This compressed solution is obtained using the incremental SVD method with a tolerance of tol applied to both \(p\)-truncation and singular value truncation. Let \(T_{sv}\) represent the total number of times the singular value truncation is applied, then we can obtain the following inequality:
\[\max_{0\leq i\leq n-1}\max_{0\leq j\leq i}\|\widetilde{u}_{h}^{i,j}-\widehat{ u}_{h}^{j}\|_{a}\leq(T_{sv}+1)\sqrt{\sigma(A)}\texttt{tol},\]
where \(\sigma(A)\) represents the spectral radius of the stiffness matrix \(A\).
Proof.: Assuming \(\widehat{u}_{j}\) and \(\widetilde{u}_{k,\ell}\) are the coefficients of \(\widehat{u}_{h}^{j}\) and \(\widetilde{u}_{h}^{k,\ell}\) corresponding to the finite element basis functions \(\{\phi_{s}\}_{s=1}^{m}\), respectively, we establish the following inequality for \(0\leq j\leq i-1\leq n-1\):
\[\|\widetilde{u}_{h}^{i,j}-\widehat{u}_{h}^{j}\|_{a} \leq\sum_{k=0}^{i-j-1}\|\widetilde{u}_{h}^{i-k,j}-\widetilde{u}_{ h}^{i-k-1,j}\|_{a}+\|\widetilde{u}_{h}^{j,j}-\widehat{u}_{h}^{j}\|_{a}\] \[=\sum_{k=0}^{i-j-1}\sqrt{(\widetilde{u}_{i-k,j}-\widetilde{u}_{i -k-1,j})^{\top}A(\widetilde{u}_{i-k,j}-\widetilde{u}_{i-k-1,j})}+\sqrt{( \widetilde{u}_{j,j}-\widehat{u}_{j})^{\top}A(\widetilde{u}_{j,j}-\widehat{u}_ {j})}\] \[\leq\sum_{k=0}^{i-j-1}\sqrt{\sigma(A)}|\widetilde{u}_{i-k,j}- \widetilde{u}_{i-k-1,j}|+\sqrt{\sigma(A)}|\widetilde{u}_{j,j}-\widehat{u}_{j}|.\]
Here, \(\sigma(A)\) represents the spectral radius of the stiffness matrix \(A\). It is notable that \(\widetilde{u}_{i-k,j}\) corresponds to the \(j\)-th compressed data at the \(i-k\)-th step, as illustrated by:
\[[\widetilde{u}_{i-k-1,0}\mid\widetilde{u}_{i-k-1,1}\mid\ldots\mid\widetilde {u}_{i-k-1,i-k-2}\mid\widehat{u}_{i-k-1}]\stackrel{{\text{ Compressed}}}{{\longrightarrow}}[\widetilde{u}_{i-k,0}\mid\widetilde{u}_{i-k,1}\mid\ldots \mid\widetilde{u}_{i-k,i-k}].\]
Furthermore, considering that both \(p\) truncation and no truncation maintain the prior data unchanged, it follows that at most \(\min\{T_{sv},i-j-1\}\) terms of \(\{\widetilde{u}_{i-k,j}-\widetilde{u}_{i-k-1,j}\}_{k=0}^{i-j-1}\) are non-zero, where \(T_{sv}\) represents the total number of times singular value truncation is applied. Consequently, for any \(0\leq j\leq i-1\leq n-1\), employing Lemma 2, we can derive:
\[\max_{0\leq i\leq n-1}\max_{0\leq j\leq i}\|\widetilde{u}_{h}^{i,j}-\widehat{ u}_{h}^{j}\|_{a}\leq(T_{sv}+1)\sqrt{\sigma(A)}\texttt{tol}.\]
Hence, by applying the triangle inequality to Lemmas 3 to 5, we can derive the error estimate for \(\|u(t_{n})-\widehat{u}_{h}^{n}\|\). This completes the proof of Theorem 1.
## 6 Numerical experiments
This section comprises several numerical experiments conducted to show the efficiency and accuracy of our scheme (4.1). In the following scenario, we consider a case where the exact solution is known for (1.1). Let \(\Omega=(0,1)\times(0,1)\), \(\mathcal{A}u=-\Delta u\), \(\mathcal{B}u=-\frac{1}{10}\Delta u\), \(K(t)=e^{-t}\), and the exact solution is given as:
\[u(x,y,t)=xy(1-x)(1-y)e^{-t}\cos t.\]
We can compute the initial condition \(u_{0}\) and the source term \(f\) using the provided data.
We employ linear finite element for the spatial discretization and utilize the backward Euler method for the time discretization.
In the accuracy test, we set \(\texttt{tol}=10^{-12}\) and the final time \(T=1\). The convergence rates of \(\|u(t_{n})-\widehat{u}_{h}^{n}\|\) are reported in Section 6 for fixed time steps \(\Delta t=10^{-3}\) and \(\Delta t=10^{-4}\), while varying the mesh size \(h\) to assess the convergence rates in space. Additionally, the convergence rates of \(\|u(t_{n})-\widehat{u}_{h}^{n}\|\) are reported in Section 6 for a fixed mesh size \(h=2^{-9}\sqrt{2}\), while varying the time step \(\Delta t\) to evaluate the convergence rates in time. This verifies the first two terms in our error estimation (5.1).
In order to validate the final error term in (5.1), we calculate the numerical solution utilizing the finite element method and document the discrepancy between the outcomes of the conventional
finite element method (2.3) and our innovative approach (4.1) in Section 6. It is noteworthy that the errors align closely with machine precision, confirming the correspondence with the last element in our error estimation (5.1).
Additionally, we compare the wall time and memory for both the finite element method and our approach and plot four figures for intuition. It is evident that our new approach is more efficient, especially when the time step and mesh size are small. This suggests that our scheme performs well for large-scale problems.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \(\Delta t\) & \(h/\sqrt{2}\) & \(\|u(t_{n})-\widehat{u}_{h}^{n}\|\) & rate & \(\Delta t\) & \(h/\sqrt{2}\) & \(\|u(t_{n})-\widehat{u}_{h}^{n}\|\) & rate \\ \hline \multirow{4}{*}{\(10^{-3}\)} & \(1/2^{2}\) & 1.20E-03 & - & & \(1/2^{2}\) & 1.20E-03 & - \\ \cline{2-7} & \(1/2^{3}\) & 3.15E-04 & 1.93 & & \(1/2^{3}\) & 3.15E-04 & 1.93 \\ \cline{2-7} & \(1/2^{4}\) & 8.01E-05 & 1.98 & & \(1/2^{4}\) & 8.03E-05 & 1.97 \\ \cline{2-7} & \(1/2^{5}\) & 2.00E-05 & 2.00 & & \(1/2^{5}\) & 2.01E-05 & 1.99 \\ \cline{2-7} & \(1/2^{6}\) & 4.84E-06 & 2.05 & & \(1/2^{6}\) & 5.03E-06 & 2.00 \\ \cline{2-7} & \(1/2^{\prime}\) & 1.05E-06 & 2.20 & & \(1/2^{\prime}\) & 1.24E-06 & 2.03 \\ \hline \end{tabular}
\end{table}
Table 1: The convergence rates of \(\|u(t_{n})-\widehat{u}_{h}^{n}\|\) for \(\Delta t=10^{-3}\) and \(\Delta t=10^{-4}\) with different mesh size \(h\).
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \(h/\sqrt{2}\) & \(\Delta t\) & \(\|\widehat{u}_{h}^{n}-u(t_{n})\|\) & rate & \(h/\sqrt{2}\) & \(\Delta t\) & \(\|\widehat{u}_{h}^{n}-u(t_{n})\|\) & rate \\ \hline \multirow{4}{*}{\(1/2^{9}\)} & \(1/2^{2}\) & 6.81E-05 & \(-\) & & \(1/2^{2}\) & 6.81E-05 & \(-\) \\ \cline{2-7} & \(1/2^{3}\) & 3.24E-05 & 1.07 & & \(1/2^{3}\) & 3.25E-05 & 1.07 \\ \cline{2-7} & \(1/2^{4}\) & 1.57E-05 & 1.05 & & \(1/2^{4}\) & 1.58E-05 & 1.04 \\ \cline{2-7} & \(1/2^{5}\) & 7.68E-06 & 1.03 & & \(1/2^{5}\) & 7.74E-06 & 1.03 \\ \cline{2-7} & \(1/2^{6}\) & 3.77E-06 & 1.03 & & \(1/2^{6}\) & 3.83E-06 & 1.02 \\ \cline{2-7} & \(1/2^{\prime}\) & 1.84E-06 & 1.03 & & \(1/2^{\prime}\) & 1.90E-06 & 1.01 \\ \hline \end{tabular}
\end{table}
Table 2: The convergence rates of \(\|u(t_{n})-\widehat{u}_{h}^{n}\|\) for \(h=2^{-9}\sqrt{2}\) and \(h=2^{-10}\sqrt{2}\) with different time step \(\Delta t\).
## 7 Conclusion
In this paper, we present a novel and efficient algorithm for resolving Non-Fickian flows. Our method draws inspiration from the incremental Singular Value Decomposition (SVD) technique commonly employed in data science. Notably, this approach exclusively operates on data intrinsic to the problem at hand. This adaptability leads us to posit its suitability for a range of significant models that encompass a memory component. A couple of such models include the Debye memory-enhanced Maxwell's equations [24]. These promising avenues form the focus of our upcoming research endeavors.
|
2310.01636 | Adaptive Visual Scene Understanding: Incremental Scene Graph Generation | Scene graph generation (SGG) involves analyzing images to extract meaningful
information about objects and their relationships. Given the dynamic nature of
the visual world, it becomes crucial for AI systems to detect new objects and
establish their new relationships with existing objects. To address the lack of
continual learning methodologies in SGG, we introduce the comprehensive
Continual ScenE Graph Generation (CSEGG) dataset along with 3 learning
scenarios and 8 evaluation metrics. Our research investigates the continual
learning performances of existing SGG methods on the retention of previous
object entities and relationships as they learn new ones. Moreover, we also
explore how continual object detection enhances generalization in classifying
known relationships on unknown objects. We conduct extensive experiments
benchmarking and analyzing the classical two-stage SGG methods and the most
recent transformer-based SGG methods in continual learning settings, and gain
valuable insights into the CSEGG problem. We invite the research community to
explore this emerging field of study. | Naitik Khandelwal, Xiao Liu, Mengmi Zhang | 2023-10-02T21:02:23Z | http://arxiv.org/abs/2310.01636v3 | # Adaptive Visual Scene Understanding:
###### Abstract
Scene graph generation (SGG) involves analyzing images to extract meaningful information about objects and their relationships. Given the dynamic nature of the visual world, it becomes crucial for AI systems to detect new objects and establish their new relationships with existing objects. To address the lack of continual learning methodologies in SGG, we introduce the comprehensive Continual Scene Graph Generation (CSEGG) dataset along with 3 learning scenarios and 8 evaluation metrics. Our research investigates the continual learning performances of existing SGG methods on the retention of previous object entities and relationships as they learn new ones. Moreover, we also explore how continual object detection enhances generalization in classifying known relationships on unknown objects. We conduct extensive experiments benchmarking and analyzing the classical two-stage SGG methods and the most recent transformer-based SGG methods in continual learning settings, and gain valuable insights into the CSEGG problem. We invite the research community to explore this emerging field of study. All data and source code are publicly available at here.
## 1 Introduction
Scene graph generation (SGG) aims to extract object entities and their relationships in a scene. The resulting scene graph, carrying semantic scene structures, can be used for a variety of downstream tasks such as object detection(Szegedy et al., 2013), image captioning (Hassan et al., 2023; Aditya et al., 2015), and visual question answering (Ghosh et al., 2019). Despite the notable advancements in SGG, current works have largely overlooked the critical aspect of continual learning. In the dynamic visual world, new objects and relationships are introduced incrementally, posing challenges for SGG models to adapt and generalize without forgetting previously acquired knowledge. This problem of Continual Scene Graph Generation (CSEGG) holds great potential for various applications, such as real-time robotic navigation in dynamic environments and adaptive augmented reality experiences.
Due to a scarcity of research specifically addressing the challenges of CSEGG, there is a pressing need for specialized investigations and methodologies to enable CSEGG. While the field of continual learning has witnessed significant growth in recent years, with a major focus on tasks such as image classification (Mai et al., 2021), object detection (Wang et al., 2021), and visual question answering (Lei et al., 2022), these endeavors have largely neglected the distinctive complexities associated with CSEGG. Factors such as the long-tailed distribution of objects and relationships from each task in CSEGG and the intricate interplay in forgetting and generalization between incremental object detection and relationship classification remain unaddressed by previous works.
In this study, we establish a methodology for investigating CSEGG. Building upon existing SGG datasets (Krishna et al., 2017; Kuznetsova et al., 2020), we contribute a comprehensive CSEGG dataset containing 11 tasks, 108,249 images, 150 object classes, and 50 relationships, along with 3 learning protocols, 8 metrics, and a benchmark of continual learning baselines. Our research focuses on analyzing the extent to which existing scene graph generation methods, when combined with common continual learning techniques, experience forgetting as they learn new object entities and relationships. Additionally, we assess the generalization ability of CSEGG models in classifying
known relationships between unknown objects. We expand our discussions on the problem motivation in **Sec.A.1.4** and highlight the existence of a significant performance gap in this field. We believe that our work provides a valuable testbed for the research community to explore and advance this emerging area of study.
**Main Contributions**
**1.** Building upon existing SGG datasets, we introduced a large CSEGG dataset containing 108,249 images, 150 object classes, and 50 relationships.
**2.** We designed a systematic methodology for conducting CSEGG experiments and evaluating CSEGG models over a sequence of SGG tasks across 3 learning protocols. This helps the community expand the studies of CSEGG and benchmark future AI models.
**3.** In the experiments, we investigated the unique long-tailed challenge in CSEGG and observed that existing SGG models with continual learning methods do not perform well.
**4.** We assessed the ability of CSEGG models to generalize and classify known relationships on unfamiliar objects. Our findings revealed that continual learning models exhibiting less forgetting demonstrated better generalization capabilities in this regard.
## 2 Related Works
**Scene Graph Generation Datasets.**
Visual Phrase (Sadeghi and Farhadi, 2011) stands as one of the earliest datasets in the field of visual phrase recognition and detection. Over time, various large-scale datasets have emerged to tackle the challenges of Scene Graph Generation (SGG) (Johnson et al., 2015; Lu et al., 2016; Krishna et al., 2017; Kuznetsova et al., 2020; Liang et al., 2019; Zareian et al., 2020; Yang et al., 2019; Xu et al., 2017; Zhang et al., 2017; Dai et al., 2017; Li et al., 2017; Zhang et al., 2019). Among these, the Visual Genome dataset (Krishna et al., 2017) has played a pioneering role by providing comprehensive annotations of objects, attributes, and relationships in images. Another notable dataset is the Visual Relationship Detection dataset (Lu et al., 2016), which specifically focuses on detecting and classifying object relationships. More recently, the Open Image V6 dataset (Kuznetsova et al., 2020) has contributed to scene graph generation tasks by offering a vast collection of images accompanied by localized narratives. Despite the significant contributions of these datasets to SGG, none have been explicitly designed for the task of Continual Scene Graph Generation (CSEGG). To address this gap, we leverage the existing datasets (Krishna et al., 2017) and curate a new one tailored specifically for CSEGG.
**Scene Graph Generation Models.** SGG models are categorized into two main approaches: top-down and bottom-up. Top-down approaches(Liao et al., 2019; Yu et al., 2017) typically rely on object
Figure 1: (a) **A scene graph is a graph structure, where objects are represented as nodes (red boxes), and the relationships between objects are represented as edges connecting the corresponding nodes (green boxes). Each node in the graph contains information such as the object’s class label, and spatial location. The edges in the graph indicate the relationships between objects, often described by predicates. A scene graph can be parsed into a set of triplets, consisting of three components: a subject, a relationship predicate, and an object that serves as the target or object of the relationship. The graph allows for a compact and structured representation of the objects and their relationships within a visual scene. (b) **An example CSEGG application** is presented, where a robot continuously encounters new objects (blue) and new relationships (yellow) over time across new scenes.
detection as a precursor to relationship prediction. They involve detecting objects and then explicitly modeling their relationships using techniques such as rule-based reasoningLu et al. (2016) or graph convolutional networks Yang et al. (2018). On the other hand, bottom-up approaches focus on jointly predicting objects and their relationships in an end-to-end manner Li et al. (2017); Xu et al. (2017). These methods often employ graph neural networks Li et al. (2021); Zhang et al. (2019) or message-passing algorithms Xu et al. (2017) to capture the contextual information and dependencies between objects. Furthermore, recent works have explored the integration of language priors Plummer et al. (2017); Lu et al. (2016); Wang et al. (2019) and attention mechanisms in transformersAndrews et al. (2019) to enhance the accuracy and interpretability of scene graph generation. However, none of these works evaluate SGG models in incremental relationship learning and continual adaptation to new object entities. We benchmark SGG models in CSEGG, identify research gaps, and provide valuable insights about the development of future CSEGG models.
**Continual Learning Methods.** Existing continual learning works can be categorized into several approaches. (1) Regularization-based methods Chaudhry et al. (2018); Zenke et al. (2017); Aljundi et al. (2018); Benzing (2022) aim to mitigate catastrophic forgetting by employing regularization techniques in the parameter space, such as Elastic Weight Consolidation (EWC)Kirkpatrick et al. (2017) or Synaptic Intelligence (SI)Zenke et al. (2017). (2) Replay-based methods Rolnick et al. (2019); Chaudhry et al. (2019); Riemer et al. (2018); Vitter (1985); Rebuffi et al. (2017); Castro et al. (2018) utilize a memory buffer to store and replay past data during training, enabling the model to revisit and learn from previously seen examples, thereby reducing forgetting. The variants of these methods include generative replaysShin et al. (2017); Wu et al. (2018); Ye and Bors (2020); Rao et al. (2019), where synthetic data is generated and replayed. (3) Dynamic architecture-based approachesWang et al. (2022) adapt the model's architecture dynamically to accommodate new tasks. Techniques like network expansionYoon et al. (2017); Hung et al. (2019); Ostapenko et al. (2019) grow network capacity to accommodate additional knowledge without interfering with the existing ones. Despite the extensive investigation of continual learning in domains like image classification Mai et al. (2021); Wang et al. (2022); Cha et al. (2021) and object detectionWang et al. (2021); Shieh et al. (2020); Menezes et al. (2023), there is a notable dearth of research focusing on CSEGG. This work aims to bridge this gap by addressing the distinctive challenges of SGG, including the issue of long-tailed distribution Desai et al. (2021); Nan et al. (2021); Chiou et al. (2021), within continual learning.
## 3 Continual Scene Graph Generation (CSEGG) Benchmark
In CSEGG, we consider a sequence of tasks consisting of images and corresponding scene graphs with new objects or new relationships, or both. Let \(D_{t}=\{(I_{i},G_{i})\}_{i=1}^{N_{t}}\) represent the dataset at task \(t\), where \(I_{i}\) denotes the \(i\)-th image and \(G_{i}\) represents the associated scene graph. The scene graph \(G_{i}\) comprises a set of object nodes \(O_{i}\) and their corresponding relationships \(R_{i}\). Each object node \(o_{j}\) is defined by its class label \(c_{j}\) and its spatial coordinates. Each relationship \(r_{k}\) is represented by a triplet
Figure 2: **Three learning scenarios are introduced. From left to right, they are S1. relationship (Rel.) incremental learning (Ince.); S2. relationship and object (Rel. + Obj.) Increre.; and S3. relationship generalization (Rel. Gen.) in Object Incre.. In S1 and S2, example triplets in the training (solid line) and test sets (dash line) from each task are presented. The training and test sets from the same task are color-coded. The new objects or relationships in each task are bold and underlined. In S3, one single test set (dashed gray box) is used for benchmarking the relationship generalization ability of object incre. learning models across all the tasks.**
\((o_{s},p_{k},o_{o})\), where \(o_{s}\) and \(o_{o}\) denote the subject and object nodes, respectively, and \(p_{k}\) represents the relationship predicate. The goal of CSEGG is to develop models that can incrementally learn new objects, new relationships, or both, without forgetting previously learned knowledge. Next, we introduce three continual learning scenarios.
### Learning Scenarios
We re-organize and divide the data from Visual Genome (Krishna et al., 2017) to cater to three continual learning scenarios below in our CSEGG dataset. The dataset contains 108,249 images, 150 objects, and 50 relationship categories. We follow the standard image splits for training, validation, and test sets (Xu et al., 2017).
**Scenario 1 (S1): Relationship Incremental Learning.** While existing continual object detection literature focuses on incrementally learning object attributes (Mai et al., 2021; Wang et al., 2022; Cha et al., 2021; Wang et al., 2021; Shieh et al., 2020; Menezes et al., 2023), incremental relationship classifications are equally important as it provides a deeper and more holistic understanding of the interactions and connections between objects within a scene. See **Sec. A.1.1** for a concrete example application in medical imaging. To uncover contextual information and go beyond studies of object attributes, we introduce this scenario where new relationship predicates \(p_{k}\) are incrementally added in each task (**Fig. 2S1**). There are 5 tasks in S1. To simulate the naturalistic settings where the frequency of relationship distribution is often long-tailed, we randomly and uniformly sample relationship classes from head, body and tail categories in Visual Genome (Krishna et al., 2017), and form a set of 10 relationship classes for each task. Thus, the relationships within a task are long-tailed; and the number of relationships from the head categories of each task is of the same scale. Different from continual object recognition and detection tasks, it is difficult to find unique images, in which ground truth scene graphs only contain relationships belonging to one specific task. To tackle this issue, we allow CSEGG models to see the same images over tasks, but the relationship labels are only provided in their given task (see **Sec. A.1.5** for the design motivation). The same reasoning applies in **S2** and **S3**. Example relationship classes from each task and their distributions are provided in **Fig. 3(a)**.
**Scenario 2 (S2): Scene Incremental Learning.** To simulate the real-world scenario when there are demands for detecting new objects and new relationships over time in old and new scenes, we introduce this learning scenario where new objects \(O_{i}\) and new relationship predicates \(p_{k}\) are incrementally introduced over tasks (**Fig. 2S2**). See **Sec. A.1.2** for two real-world example applications in robot collaborations on construction sites and video surveillance systems. To select the object and relationship classes from the original Visual Genome (Krishna et al., 2017) for S2, we
Figure 3: **Label distribution in each task in each learning scenario is presented. In scenario S1 (a) and scenario S3 (c), we use different colors to denote different tasks. The color gradient indicates the frequency of data within a task, with the lighter color denoting the smaller frequency of data in that category. Only the most frequent labels (relationship labels in (a) and object labels in (c)) are provided. See the legend for the total data size per task. In (b) scenario S2 on both objects and relationships, data distributions are presented in the form of small-world networks, where nodes denote object categories and the edges linking object pairs indicate relationships. Thickness in edges implies the diversity of relationships between object pairs. Same color conventions as (a) and (c) are applied. See the legend for triplet sizes. See **Fig. S1,S2,S3** in **Sec. A.3** for the full statistics of S1-3.
have two design motivations in mind. First, in real-world applications, such as robotic navigation, robots might have already learned common relationships and objects in one environment. Incremental learning only happens on less frequent relationships and objects. (2) Transformer-based AI models typically require large amounts of training data to yield good performances. Training only on a small amount of data from tail classes often leads to close-to-chance performances. Thus, we take the common objects and relationships from the head classes in Visual Genome as one task, while the remaining less frequent objects and relationships from tail classes as the other task. This results in 2 tasks in total with the first task containing 100 object classes and 40 relationship classes. In the subsequent task, the CSEGG models are trained to continuously learn to detect 25 more object classes and 5 more relationship classes. Same as **S1**, both the object class and relationship class distributions are still long-tailed within a task (**Fig. 3(b)**).
**Scenario 3 (S3): Scene Graph Generalization In Object Incremental Learning.** We, as humans, have no problem at all recognizing the relationships of unknown objects with other nearby objects, even though we do not know the class labels of the unknown objects. This scenario is designed to investigate whether the CSEGG models can generalize as well as humans. See **Sec. A.1.3** for two real-world applications in the deep sea and space explorations for autonomous navigation systems. Specifically, there are 4 tasks in total with each task containing 30 object classes and 35 relationship classes. In each subsequent task, the CSEGG models are trained to continuously learn to detect 30 more object classes and learn to classify the same set of 35 relationships among these objects. The class selection criteria for each task follow the same as **S1**, where the selections occur uniformly over head, body, and tail classes. Example object classes and their label distributions for each task are provided in **Fig. 3(c)**. Different from **S1** and **S2**, a standalone generalization test set is curated, where the objects are unknown and their classes do not overlap with any object classes in the training set but the relationships among these unknown objects are common to the training set of every task. The CSEGG models trained after every task are tested on the same generalization test sets.
### Scene Graph Generation Backbone
We use the state-of-the-art one-stage Scene graph Generation TRansformer (SGTR) (Li et al., 2022b) and the traditional two-stage SGG model (Xu et al., 2017), named as CNN-SGG. CNN-SGG detects objects with Faster-RCNN(Girshick, 2015) backbone and infers their relationships separately via Iterative message passing (IMP)(Xu et al., 2017). For simplicity and consistency, we focus discussions solely on SGTR in the main text. For CNN-SGG, see **Sec. A.2.2** for the introduction, training and implementation details over all three learning scenarios. We observed consistent relative CSEGG performance among all continual learning methods across both backbones (**Fig. S15** and **Sec. A.5**).
Different from CNN-SGG, SGTR formulates the task as a bipartite graph construction problem. Briefly, we introduce how SGTR works (**Fig. 4a**). Given a scene image \(I_{i}\), SGTR utilizes a 2D-CNN followed by a transformer-based encoder to extract image features. These features are further incorporated into a transformer-based decoder to predict object and subject nodes \(O_{i}\). After that, the object-aware predicate nodes \(R_{i}\) are formed based on both image features and object node features. Finally, a bipartite graph \(G_{i}\) is constructed to collectively represent the scene with object \(o_{i}\) and predicate nodes \(r_{k}\), where the correspondence between these nodes is established based on the
Figure 4: **Introduction to backbone SGG models and continual learning baselines. We use Scene graph Generation TRansformer (SGTR) (Li et al., 2022b) as the backbone SGG model (Sec. 3.2). SGTR consists of four modules indicated by each blue box. Arrows indicate the signal flows among modules. (b) Three continual learning baselines are listed: EWC (Kirkpatrick et al., 2017), Replay (Rolnick et al., 2019), Naive (Sec. 3.3) and PackNet (Mallya and Lazebnik, 2018)(Sec. 3.3). \(\theta^{*}_{A}\) denotes the optimal network parameters after learning on task A. The arrows in colors indicate the shifts of network weights in the parameter space when learning Task B for different baselines.**
Hungarian matching algorithm (Kuhn, 1955). All experimental results are based on the average over 3 runs. We adapt the public source codes from (Li et al., 2022b) and (Wang et al., 2021b) for implementations of the continual learning algorithms on SGTR. We use the hyper-parameters provided in the code as the default values. See **Sec. A.2.1** for training and implementation details of SGTR. All source code and data are publicly available here.
**Techniques for Learning in Long-tailed Distribution.** As shown in **Fig. 3** and **Sec. 3.1**, the data in the training set of every task in all learning scenarios is long-tailed. We adopt the three existing techniques to alleviate the problem of imbalanced data distribution during training. **LVIS**(Gupta et al., 2019) is an image-level over-sampling strategy. The number of image repeats depends on the object classes with minimal frequency over the entire dataset. **Bi-level sampling (BLS)**(Li et al., 2021) combines image-level oversampling and instance-level undersampling. LVIS is used for image-level oversampling. At instance levels, a drop-out rate is applied. The more frequent an instance belonging to the common classes, the higher the dropout rate. The two-level data re-sampling achieves an effective trade-off between the head and tail classes. **Equalized Focal Loss (EFL)**(Li et al., 2022a) is an effective loss function, re-balancing the loss contribution of head and tail classes independently according to their imbalance degrees. During the training of SGTR, EFL is enabled all the time, regardless of the learning scenarios or continual learning algorithms.
### Continual Learning Algorithms
We benchmark the continual learning baselines below (**Fig.4b**). **Naive (lower bound)** with the SGTR model as the backbone, is trained on each task in sequence without any measures to prevent catastrophic forgetting. **EWC**(Kirkpatrick et al., 2017) is a weight-regularization method, where the weights of the network are regularized in the parameter space, based on their "importance" to the previous tasks. **PackNet**(Mallya and Lazebnik, 2018) is a parameter-isolation method, where it iteratively prunes and pre-trains the network parameters so that it can sequentially pack multiple tasks within one network. **Replay**(Rolnick et al., 2019) includes a memory buffer with the capacity of storing \(M\) percentages of images in the entire dataset as well as their corresponding ground truth object and predicate notations depending on the task at each learning scenario. We vary \(M=\) 10%, 20%, and 100%. We randomly select images present in the current task and add them to the buffer. As the image may contain multiple object/relationship ground truths, the number of replays on ground truth notations in each task might vary. See **Fig. S1(c)(d)**, **Fig. S2(c)(d)** and **Fig. S3(c)(d)** in **Sec. A.3.4** where we report the number of ground truth notations stored in the memory buffer for each task in each learning scenario. However, these variations do not interfere with fair comparisons across replay methods with different memory buffer capacities. We also introduce **Joint Training** -- an upper bound where the SGG model is trained on the entire CSEGG dataset.
To alleviate the problem of long-tailed data distribution, we also introduce BLS and LVIS in replay methods (**Replay + BLS** and **Replay + LVIS**). In the data re-balancing replay, the data in the memory buffer of **Replay** gets first re-sampled and then mixed with the data from the current task for training. As LVIS is an image-level up-sampling technique, the images containing the tail classes in the memory buffer are over-sampled. In contrast, BLS adds an extra instance-level under-sampling technique, which under-samples instances of head classes on the replay images. With these two approaches, the exact number of ground truth notations per image in the memory buffer varies over tasks in each learning scenario. We report these numbers in the **Fig. S4** in **Sec. A.3**. The total number of instance replays for Replay+LVIS is slightly larger than the ones for Replay+BLS. We show the results of this effect in **Sec. 4.2**.
### Evaluation Metrics
Same as existing SGG works (Xu et al., 2017; Li et al., 2022b), we adopt the evaluation metric recall@K (**R@K**) on the predicted scene graphs \(G\). As CSEGG is long-tailed, we further report the results in mean recall (**mR@K**) over head, body, and tail relationship classes in **Sec. A.4.2**. For consistency, we provide the R@K results in the main text, while we provide and analyze results in mR@K in **Fig. S8** and **Fig. S9** in **Sec. A.4.2**.
To assess the catastrophic forgetting of CSEGG models, we define **Forgetfullness (F@K)** as the difference in R@K on \(D_{t=1}\) between the CSEGG models trained at task \(t\) and task 1. An ideal CSEGG model could maintain the same \(R@K\) on \(D_{t=1}\) over tasks; thus, \(F=0\) for all tasks. The more negative F is, the more severe in forgetting an model gets. To assess the overall recall of CSEGG
models over tasks, we also report the continual average recall (**Avg. R@K**). Avg. R@K is computed as the average recall on all the data at the previous and current tasks \(D_{i}\), where \(i\in\{1,2,...,t\}\).
To assess whether the knowledge at previous tasks facilitates learning the new task and whether the knowledge at new tasks enhances the performances at older tasks, we introduce **Forward Transfer (FWT@K)**(Lin et al., 2022) and **Backward Transfer (BWT@K)**(Lopez-Paz and Ranzato, 2017). **See Sec. A.2.3** and **Sec. A.4.3** for its definitions, results, and analysis.
In learning scenario S3, we evaluate CSEGG models on their abilities to generalize to detect unknown objects and classify known relationships on these objects, in the standalone generalization test set over all tasks. To benchmark these, we introduce two evaluation metrics: the recall of the predicted bounding boxes on unknown objects (**Gen R\({}_{bbox}\)@K**) and the recall of the predicted graph \(G_{i}\) (**Gen R@K**). As the CSEGG models have never been taught to classify unknown objects, we discard the class labels of the bounding boxes and only evaluate the predicted box locations with **Gen R\({}_{bbox}\)@K**. To evaluate whether the predicted box location is correct, we apply a hard threshold of Intersection over Union (**IoU**) between the predicted bounding box locations and the ground truth. Any predicted bounding boxes with their IoU values above the hard threshold are deemed to be correct. We vary IoU thresholds from 0.3, 0.5, and 0.7. To assess whether the CSEGG model generalizes to detect known relationships over unknown objects, we evaluate the recall **Gen R@K** of the predicted relationships \(r_{k}\) only on _correctly predicted_ bounding boxes. For simplicity and consistency, we report the results of **Avg.R@20** and **F@20**. See Sec. A.4.1** for results at \(K=50\), \(100\). In general, the conclusions at \(K=50\), \(100\) are consistent with the cases when \(K=20\).
## 4 Results
### Continual Scene Graph Generation Remains a Great Challenge.
We present Avg. R@20, F@20, FWT@20, and BWT@20 results for learning scenario 1 (S1) in **Fig. 5 (a)(b)** and **Fig.S11(a)(b)**. Notably, all continual learning baselines start from a similar Avg.R@20 in Task 1 and their performance drops over subsequent tasks. This implies that catastrophic forgetting about learned relationships occurs when the CSEGG models learn new relationships. Among all the methods, the naive method takes no measures of preventing catastrophic forgetting, resulting in the largest drop in Avg.R@20 and F@20. In contrast, a replay method with all the old data to rehearse in the current task (Replay(100%)) yields the least forgetting and maintains a high Avg.R@20. Surprisingly, even though Replay(100%), as an upper bound, replays all the data in the current and previous tasks, there is still a drop in performance. This could possibly be due to the long-tailed data distribution in the memory buffer, which makes the rehearsal of tail classes even less frequent in new tasks, and thus, deteriorate the recall performances of tail classes. We also compared EWC versus the replay methods. Though EWC outperforms the naive baseline in earlier tasks, it fails in longer task sequences. Different from EWC, Replay with 10% still achieves a higher Avg.R@20 score of 8.55% and a higher F@20 of -22.21%. This aligns with the existing continual learning literature that replay methods are more effective than weight-regularization methods in eliminating catastrophic forgetting (Lesort et al., 2019). PackNet is a parameter isolation method. While PackNet outperforms EWC, its performance is inferior to that of Replay(10%). As expected, we also compare the replay methods with different memory buffer sizes. Replaying more old data helps CSEGG performances.
Figure 5: **Results of average recall and forgetting over tasks in Learning Scenarios 1 (a and b) and 2 (c and d) based on the SGTR backbone as the SGG model. See Sec. 3.3 for introduction to continual learning baselines. See Sec. 3.4 for explanations about evaluation metrics. X-axis indicates the task numbers. The higher Avg.R@20 and F@20, the better. See Fig. S5,S6 in Sec. A.4.1 for results of Avg.R@K, F@K, and mR@K, when K\(=50,100\). See Sec. A.4.3 for results on FWT@K and BWT@K. See Sec. A.5 for the results based on the CNN-SGG backbone (Sec.3.2).**
Joint training demonstrates superior performance over all the tasks in Learning Scenario 1 as seen in Fig **5**. This aligns with the existing continual learning literature that joint training is a superior upper bound than Replay(100%). As the knowledge carried forward is important for the subsequent tasks, we also permuted the task sequences and explored their role in CSEGG performances. Aligning with the existing literature (Singh et al., 2022), we found a prominent effect of task sequences in CSEGG (**Fig. S13** and **Sec. A.4.4**).
Learning scenario 2 (S2) approximates the real-world CSEGG setting where there are constantly new demands in detecting new objects and new relationships simultaneously. The results of S2 in Avg.R@20 and F@20 are provided in **Fig. 5(c)(d)**. Compared with S1, the overall Avg.R@20 and F@20 drop more significantly over tasks. For example, even with 20% memory buffer size, the replay method only achieves Avg. R@20 score of 6.57% and F@20 of -17.17% in Task 2. This suggests that the real-world CSEGG remains a challenging task and there still exists a large performance gap for state-of-the-art CSEGG methods. Moreover, we also made an interesting observation that Replay (100%) outperforms the upper bound of the joint training in the first task of Scenario 2. This performance difference could be attributed to the presence of long-tailed data distribution across tasks, with the first task containing more tailed classes than head classes. This is in contrast to the task splits in Scenario 1 where both head and tail classes are uniformly sampled for every task. Consequently, joint training struggles in the first task due to sub-optimal performance in tailed classes. To gain a qualitative understanding of CSEGG performances, we provide visualization results of the predicted scene graphs on example images over tasks for all the CSEGG baselines in Scenario 1 (**Fig. S18** and **Sec. A.6.1**) and Scenario 2 (**Fig. S19** and **Sec. A.6.2**).
### Accounting for Long-tailed Distributions Enhances CSEGG
Due to the imbalanced data distribution in the real world, long-tailed distribution remains a unique challenge for CSEGG. Here, we introduce two data sampling techniques (LVIS and BLS) to counter-balance the long-tailed data distributions in the memory buffers as well as the feed-forward training tasks (**Sec. 3.2** and **Sec. 3.3**). We report the results of Replay(10%)+BLS and Replay(10%)+LVIS in learning scenario S1. In **Tab. 1**, both long-tailed methods with replay(10%) outperform naive replays by an average margin of 3.42% in Avg.R@20 and 3.35% in F@20. This implies that data re-sampling techniques enhance general continual learning performances in long-tailed incremental learning settings. Indeed, we made the same observations after splitting the classes from each task into tail, body, and head classes and reporting their mR@K in the **Fig. S10** in **A.4.2**. Interestingly, we see that Replay(10%)+BLS qualperforms Replay(10%)+LVIS by 4.32% in Avg.R@20 and 6.42% in F@20. This contradicts the findings that BLS is more effective than LVIS in the classical SGG problem (Li et al., 2021). The performance discrepancy could be due to the difference in the number of replay instances in both approaches after these two data re-sampling methods are applied to the memory buffer (see **Sec. 3.3**). This emphasizes that the long-tailed learning methods explored in the SGG problem may not be effective in CSEGG. We need to explore new long-tailed learning methods specifically for CSEGG.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Avg. R@20 \(\uparrow\)**} & \multicolumn{2}{c}{**F@20 \(\uparrow\)**} \\ \cline{2-5} & T1 & T5 & T1 & T5 \\ \hline Replay(10\%) & 28.7 & 8.55 & 0 & -22.21 \\ Replay(10\%)+LVIS & 28.7 & 14.38 & 0 & -15.39 \\ Replay(10\%)+BLS & 28.7 & 9.56 & 0 & -22.4 \\
**Replay(100\%)** & **28.7** & **16.17** & **0** & **-12.24** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results at Task 1 and 5 in Learning Scenario 1 when sampling techniques on long-tailed distribution are applied.** See **Sec. 3.2** for the introduction to techniques used for long-tailed distributions. The best results are in bold.
Figure 6: **Generalization results in Learning Scenario 3.** See **Sec. 3.4** for evaluation metrics. The higher the values, the better. Line colors indicate continual learning baselines. Line types denote the IoU thresholds for determining correctly predicted bounding box locations.
### CSEGG Improves Generalization in Unknown Scene Understanding
**Fig. 6** provides the generalization results in detecting unknown objects and classifying known relationships among these objects in Learning Scenario 3 (S3). In **Fig. 6 (a)**, we observed an increasing trend of Gen R\({}_{bbox}\)@20 for all CSEGG methods as the task number increases. This suggests that CSEGG methods improve generalization abilities in detecting unknown objects, as they learn to continuously detect new objects and classify known relationships among these objects. As expected, with increasing IoU threshold from 0.3 to 0.7, fewer detected bounding boxes are deemed to be correct; thus, there is a decrease in Gen R\({}_{bbox}\)@20. Subsequently, we observed a decrease in Gen R@20 in relationship generalization in **Fig. 6 (b)** as well. Moreover, we notice that even in Task 1, all CSEGG methods are capable of proposing 23% reasonable object regions with IoU = 0.7. This implies that the SGTR model generalizes to detect "objectness" in the scene even with minimal training only in Task 1. Interestingly, as seen in S1 and S2 (**Fig. 5**), the naive baseline only learns the current task at hand and often forgets the knowledge in old tasks; however, forgetting to detect objects from previous tasks does not interfere its generalization abilities. In fact, its generalization ability to detect unknown objects increases over tasks. Contrary to our previous observations in S1 and S2 (**Sec. 4.1**), where replay methods beat the naive baseline, a surprisingly opposite trend in object detection generalization is observed. One possible explanation is that all CSEGG methods output a fixed number of detected object bounding boxes. As replay methods forget less, they intend to detect more in-domain object boxes out of the total number of bonding boxes they can output, resulting in a decreased number of bounding boxes detected for unknown objects. The results in **Fig. 6 (b)** support this point. Given all the correctly detected unknown object locations, Replay(10%) outperforms the naive baseline. This emphasizes that the continual learning ability to forget less about previous tasks improves the overall generalization abilities of the CSEGG models in unknown scene understanding.
Notably, we also found that the SGTR model is very good at generalizing to classify relationships **Fig. 6 (b)**. Even in Task 1, both the naive method and the Replay(10%) achieve 45% recall of known relationships among unknown objects in the generalization test set. As the CSEGG models continuously learn to detect more new objects and classify their relationships in subsequent tasks, their relationship generalization ability among unknown objects saturates around Task 3. See **Fig. S20** in **Sec. A.6.3** for visualization examples.
## 5 Discussion
In the dynamic world, the incremental introduction of new objects and new relationships in scenes presents significant challenges for scene graph generation (SGG) models to effectively adapt and generalize without forgetting previously acquired semantic knowledge. However, despite the progress made in SGG and continual learning research, there remains a lack of comprehensive investigations specifically targeting the unique challenges of Continual Scene Graph Generation (CSEGG). To close this research gap, we take the initial steps of operationalizing CSEGG and introducing benchmarks, datasets, and evaluation protocols. Our study delves into three distinct learning scenarios, thoroughly examining the interplay between continual object detection and continual relationship classification for existing CSEGG methods under long-tailed class-incremental settings.
Our experimental results reveal intriguing insights. First, applying standard continual learning approaches combined with long-tailed techniques to SGG models yields moderate improvements. However, a notable performance gap persists between current CSEGG methods and the joint training upper bound. Second, we investigated the model's generalization ability and found that the models are capable of generalizing to classify known relationships involving unfamiliar objects. Third, we compared the CSEGG performance of the traditional CNN-based and the transformer-based SGG models as backbones. We observed consistent relative CSEGG performance across all continual learning methods using both backbones, with CNN-SGG models underperforming SGTR-based ones.
Moving forward, there are several key avenues for future research. Our current endeavors focus on learning CSEGG problems from static images in Independent and Identically Distributed (i.d.d) manner, diverging from human learning from video streams. Future research can look into CSEGG problems on video streams. Our plans also involve expanding continual learning baselines and integrating more long-tailed distribution sampling techniques. Furthermore, we aim to construct a synthetic SGG dataset to systematically quantify the aspects of SGG that influence continual learning performance under controlled conditions. Although the CSEGG method holds promise for many downstream applications like monitoring systems, medical imaging, and autonomous
navigation, we should also be aware of its misuse in privacy, data biases, fairness, security concerns, and misinterpretation. We invite the research community to join us in maintaining and updating the safe use of CSEGG benchmarks, thereby fostering its advancements in this field.
## Ethics Statement
The development and deployment of Scene Graph Generation (SGG) technology present potential negative societal impacts that warrant careful consideration (Li et al., 2022b). Firstly, privacy concerns arise as SGG may inadvertently capture sensitive information from images, potentially violating privacy rights and raising surveillance issues. Secondly, bias and fairness challenges persist, as SGG algorithms can perpetuate biases present in training data, leading to discriminatory outcomes that reinforce societal inequalities. Misinterpretation and misclassification by SGG algorithms could result in misinformation and incorrect actions, impacting decision-making. The risk of manipulation and misuse of SGG-generated scene representations for malicious purposes is also a concern. For example, attackers might manipulate scene graphs to deceive systems or disrupt applications that rely on scene understanding.
## Reproducibility Statement
We are committed to ensuring the reproducibility and transparency of our research. In accordance with the guidelines set forth by ICLR 2024, we provide detailed information to facilitate the replication of our experiments and results.
1. **Code Availability:** All code used for our experiments is available at here.
2. **Data Availability:** Any publicly accessible datasets used in our research are specified in the paper, along with their sources and access information.
3. **Experimental Details:** We have documented the specific details of our experiments, including hyper-parameters, model architectures, and pre-processing steps, to enable others to replicate our results.
We are dedicated to supporting the scientific community in replicating and building upon our work. We welcome feedback and collaboration to ensure the robustness and reliability of our research findings.
## Acknowledgement
This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-025), its NRFF award NRF-NRFF15-2023-0001, Mengmi Zhang's Startup Grant from Agency for Science, Technology, and Research (A*STAR), and Early Career Investigatorship from Center for Frontier AI Research (CFAR), A*STAR. The authors declare that they have no competing interests. The authors would like to thank Stan Weixian Lei, Difei Gao, and Mike Shou for their feedback and suggestions.
|
2307.02928 | AllSight: A Low-Cost and High-Resolution Round Tactile Sensor with
Zero-Shot Learning Capability | Tactile sensing is a necessary capability for a robotic hand to perform fine
manipulations and interact with the environment. Optical sensors are a
promising solution for high-resolution contact estimation. Nevertheless, they
are usually not easy to fabricate and require individual calibration in order
to acquire sufficient accuracy. In this letter, we propose AllSight, an optical
tactile sensor with a round 3D structure potentially designed for robotic
in-hand manipulation tasks. AllSight is mostly 3D printed making it low-cost,
modular, durable and in the size of a human thumb while with a large contact
surface. We show the ability of AllSight to learn and estimate a full contact
state, i.e., contact position, forces and torsion. With that, an experimental
benchmark between various configurations of illumination and contact elastomers
are provided. Furthermore, the robust design of AllSight provides it with a
unique zero-shot capability such that a practitioner can fabricate the
open-source design and have a ready-to-use state estimation model. A set of
experiments demonstrates the accurate state estimation performance of AllSight. | Osher Azulay, Nimrod Curtis, Rotem Sokolovsky, Guy Levitski, Daniel Slomovik, Guy Lilling, Avishai Sintov | 2023-07-06T11:28:53Z | http://arxiv.org/abs/2307.02928v2 | # AllSight: A Low-Cost and High-Resolution Round Tactile Sensor
###### Abstract
Tactile sensing is a necessary capability for a robotic hand to perform fine manipulations and interact with the environment. Optical sensors are a promising solution for high-resolution contact estimation. Nevertheless, they are usually not easy to fabricate and require individual calibration in order to acquire sufficient accuracy. In this letter, we propose _AllSight_, an optical tactile sensor with a round 3D structure potentially designed for robotic in-hand manipulation tasks. AllSight is mostly 3D printed making it low-cost, modular, durable and in the size of a human thumb while with a large contact surface. We show the ability of AllSight to learn and estimate a full contact state, i.e., contact position, forces and torsion. With that, an experimental benchmark between various configurations of illumination and contact elastomers are provided. Furthermore, the robust design of AllSight provides it with a unique zero-shot capability such that a practitioner can fabricate the open-source design and have a ready-to-use state estimation model. A set of experiments demonstrates the accurate state estimation performance of AllSight.
## I Introduction
The sense of touch endows humans with neural sensory-motor feedback regarding the shape, weight and texture of objects within contact [1]. Hence, touch is vital for humans in order to ensure stable grasps and safe object manipulations [2]. Similar to humans, robots require touch sensing in order to acquire information regarding the state of contact events. In order to manipulate objects effectively in complex and changing environments, a robot must be able to perceive when, where and how it is interacting with the objects [3, 4, 5]. Touch, or tactile sensing, can augment visual perception or replace it when occlusions occur by the robot fingers themselves or by obstacles. It has the potential to enable robots to infer about the object's relative state, geometry and texture [6]. Accurate and low-cost high-resolution tactile sensors that can provide a full state of contact, namely contact locations and forces, would have a significant role in, for example, material handling, assembly [7], in-hand manipulation [8] and prosthesis [9].
Tactile sensors are commonly used to measure a range of touch stimuli including contact pressure [11], vibrations [12], deformation of the contact pad [13] and surface texture [14]. Within the range of tactile usages, a variety of tactile sensing technologies exists including force sensitive resistors [5], capacitive transducers [15], photoelectric sensing [16] and piezo-resistors [11]. Although they can provide useful data, these sensors are usually designed for specific tasks in limited environments. Recently, camera-based optical tactile sensors have become increasingly common due to high-resolution signals and soft contact surfaces [17, 18]. An optical sensor typically uses an internal camera to track the deformation of a soft elastomer upon contact with an object [19]. A captured image can encode information regarding the state of contact, i.e., contact location with respect to the sensor's coordinate frame and contact forces. Despite the abundance in configurations of optical sensors, they yet to provide a robust tactile solution and are limited in various aspects.
While camera-based sensors are not a new notion, significant advancement and integration in robotic systems were achieved in the last few years with the increase of computing capabilities and hardware minimization. Various small sensors with flat contact surfaces were introduced [14, 17, 18, 20]. However, these may have difficulties in general manipulation tasks due to the flat contact surfaces. Hence, other sensors introduced spatial surface geometries [19, 21]. Nevertheless, these sensors yet to provide complete and reliable contact information. Some were not demonstrated
Fig. 1: Three _AllSight_ sensors on the fingers of an OpenHand Model-O [10] robotic hand. The sensors provide real-time tactile images for contact state estimations during the manipulation of an object. Surface deformations due to contact are marked with a circle at the bottom tactile images.
to provide a complete contact state [22], are limited in load forces [23] or require complex and expensive fabrication process [24]. Moreover, no sensor has demonstrated an ability for zero-shot transfer of a trained contact state estimation model to a new one.
In this paper, we cope with the limitations of previous optical designs and present a novel spatial sensor termed _AllSight_. AllSight is designed to be small and low-cost for the use on multi-finger hands in in-hand manipulation tasks. The 3D contact surface of AllSight is in the shape of a cylinder with an hemispherical end as seen in Figure 1. Most of the sensor's components, excluding electronics and elastomer, are printable. In particular, the transparent shell of AllSight is 3D printed making the sensor low-cost, easily fabricated and more accessible to practitioners. While fabricated in a low-cost process, AllSight is shown to provide an accurate full contact state including position, normal and tangential forces, and torsion. The sensor is the smallest of its kind while able to measure larger forces than previous designs and up to 15 N. In addition, the fabrication process results in a durable sensor able to withstand high and recurring loads. AllSight is modular with easily replaceable components where different types of elastomer and illumination can be rapidly replaced. Through a comparative analysis of AllSight, we try to answer some fundamental questions in the design of optical tactile sensors such as preferred illumination and surface texture. The comparative analysis is conducted through supervised learning. Structure and various tested sensor configurations are seen in Figure 2.
The design, trained models, simulation environment, and code are provided open-source1 for the benefit of the community and to advance research in the field. However, the trained models should provide sufficient accuracy on a newly fabricated sensor. Therefore, we analyze the transfer learning capabilities of AllSight in zero-shot and in fine-tuning with limited new data. We show that these can be done by pretraining with real data collected from a source sensor or with simulated data from TACTO, a physics engine tactile simulation [20]. Overall, AllSight is capable of achieving sufficient accuracy in zero-shot transfer and high accuracy with fine-tuning on limited new data. Consequently, advanced and novice practitioners can have access to low-cost, easy to fabricate and reproducible sensors with a ready-to-use model.
Footnote 1: AllSight Open-source design, fabrication instructions, trained models, code and simulation: [https://github.com/osheraz/allsight](https://github.com/osheraz/allsight)
A prominent goal of this paper is to share insights with the robotics community and to help overcome the numerous bottlenecks faced in the fabrication of spatial tactile sensors. We aim to encourage the wider adoption and development of such sensing technology. To summarize, the contributions of this work are as follow. First, we propose a novel design of spatial optical tactile sensor termed AllSight which is compact and with high-resolution. Since most of the parts are 3D printed, AllSight is low-cost, easy to fabricate, modular and available open-source. Due to its structure, AllSight can provide a full contact state including contact localization, torsion, normal force and shear. Then, an informative comparative analysis is provided involving various known sensor configurations which could assist practitioners in design choices. Finally, we exhibit the ability of AllSight to transfer to newly fabricated sensors through zero-shot and fine-tuning. To the best of the author's knowledge, AllSight is the only 3D optical-based sensor that measures the entire contact state, capable of zero-shot learning and is available open-source.
## II Related work
Seminal work on optical sensors introduced the use of a black and white camera for observing the deformation of a soft membrane through a glass plate [29]. Later work have shown the ability to use the same technology for a round or finger-shaped sensor [30, 31]. The GelSight is the first to present a relatively small tactile finger-tip with a flat pad able to measure high-resolution contact geometry [14]. Photometric Stereo (PS) was integrated where surface normals during contact deformation are estimated by observing the object under different lighting conditions. Hence, contact force, slip and shape were inferred by observing deformation and calculating geometry gradients. While exhibiting good
Fig. 2: Illustration of AllSight (1) assembled and (2) in an exploded view. (3) Images of the corresponding fabricated parts are seen including marked and clear elastomers. (4) Three different LED illumination configurations when (5) AllSight is in contact with a screw: (4a) white, (4b) RRRGGGBBB and (4c) RGBRGBRB. Top and bottom rows of the camera view show elastomers with and without markers, respectively.
performance, GelSight and similar ones (e.g., [17, 20]) may have difficulties in general dexterous manipulation tasks due to the flat contact surface. Flat sensors require constant alignment with the surface of the object and may not maintain contact during object sliding and rolling [22]. Hence, the TacTip set of sensors was introduced having a variety of different contact pads including flat and hemi-spherical ones [19]. However, the contact pads did not include a rigid support for the elastomer and, thus, were reported to be too soft for feasible manipulation tasks [23].
Tactile sensors that can efficiently manipulate objects must have a spatial surface structure. Yet, recent attempts to develop tactile sensors with 3D sensing surfaces have raised a number of challenges. To begin with, creating tactile sensors with round contact surfaces can be difficult from a manufacturing standpoint. Fabrication may require intricate designs and the use of high-budget machinery such as industrial (e.g., Stratasys) [19] or Aluminum 3D printers [24]. Furthermore, the outer layer of the sensor is in constant contact with objects during use and, therefore, the surface pad is prone to wear and tear over time, adding to the complexity of creating a round tactile sensor that is both sensitive and reliable [16]. It can also be challenging to make the sensor modular for convenient component replacement and easy-to-use through a plug-and-play interface [18].
Researchers have conducted extensive application studies with flat sensors regarding contact localization [18], depth reconstruction [14, 27] and directional force distribution [17]. Sensors with round contact surface, on the other hand, yet to provide complete and reliable contact information. Some sensors can only provide contact localization with no load information [21, 22, 25]. However, manipulation capabilities require also information regarding contact forces. Recent sensor developments have tried to provide full contact state. A cone-shape thumb-sized sensor, for instance, provides a full force map along with contact localization [24]. However and due to its skeletal structure, it is sensitive to object penetration and, hence, limited to contact forces of up to 2 N. Similarly, the DenseTact provides contact loads through an hemi-spherical pad. A randomized pattern was added to the surface of the contact pad in order to increase features in the images [28]. The ability for transfer learning was also demonstrated in a limited setting without zero-shot and with some portion of new data used for calibrating the target sensor. The hemi-sphere of DenseTact is made of an elastomer without a rigid structure. This and the lack of a cylindrical extension to the hemi-sphere may reduce its applicability in manipulation tasks. Transfer learning was also recently demonstrated in classification on the DIGIT flat sensor [32]. In the work, a diffusion model was trained to generate realistic tactile images and later calibrated to unseen sensors. Table I provides a comparative summary of state-of-the-art work on optical-based tactile sensors.
## III Design and Fabrication
AllSight is an optical tactile sensor designed to be compact and is suitable for usage on various robotic end-effectors and multi-fingered hands. In addition, the contact region of AllSight is round with full 360\({}^{\circ}\) sensing clearly visible without blind spots or obsucrance. While there have been some advances in round-shaped tactile sensors, their reproduction may fail due to complex and sensitive fabrication. AllSight sensing surface is more robust and easily interchangeable than previous designs, making the sensor more appealing. The estimated manufacturing cost for AllSight is 30 USD per sensor excluding a micro-controller. The main challenges in fabricating a compact and all-around tactile sensor are related to its small size and curved surface. However, we have devised a fabrication process based on in-depth experimentation so that the sensor is easily fabricated and robustly reproduced by novice users.
### _General Structure_
An illustrative description of AllSight is given in Figure 2. Similar to previous optical tactile sensors, the core of the design is a single camera. The camera is covered by a three-layered tube in the shape of a cylinder with an hemispherical end. The inner layer of the tube is a rigid crystal-clear shell. A transparent elastomer covers the shell and is coated on its exterior by a reflective silicone paint. Such tube formation provides an opaque structure where the camera observes the deformation of the elastomer from within upon contact. For better visibility, the inner-surface of the shell is evenly illuminated by an annular printed circuit board (PCB) with embedded LEDs. Photometric effects and structured lighting enable the camera to detect small deformations of the
elastomer in physical contact. Prior work uses either white or RGB lights in different variations for contact localization and shape reconstruction. A collimator covers the LED PCB for channeling the light towards the shell and for minimizing illumination losses. All components are assembled on a mounting plate which is the connecting link to a desired hand. Unlike other sensor designs, AllSight is the smallest all-around tactile sensor which supports various elastomers and illumination configurations with simple assembly. Experiments in this work provide analyses to common variations.
### _Fabrication_
As described above, AllSight has six main components: camera, mounting plate, customized LED PCB, collimator, shell and elastomer. The fabrication process for these components, illustrated in Figure 3, is described including design principles and lessons learned.
_Camera:_ To keep the AllSight sensor compact and accessible, a Raspberry-Pi zero camera is used. The camera is inexpensive costing approximately $16 and has a wide 160\({}^{\circ}\) fisheye lens. Video is streamed directly to a PC via USB using Raspberry-Pi Zero with camera mode for easy plug-and-play support. It operates at a frame rate of 60fps and outputs \(640\times 480\) resolution frames. Similar to [27], in order to obtain color images that are uniform and balanced, it is crucial to disable the automatic white balance function and adjust the fixed gains for the red and blue channels, along with the exposure compensation for the RGB channels.
_Rigid transparent shell:_ The purpose of the shell is to provide rigidity to the structure of the sensor upon contact while enabling clear visibility of the external deformed elastomer. Different methods for fabricating the shell were experimented including clear epoxy resin [22], off-the-shelf plastic tube [25] and a printable skeleton [24]. While the clear epoxy resin allows complex designs, the resulting shell was not sufficiently clear and required too many fabrication steps. The plastic tube is clear yet not modular. A 3D-printed skeleton provides modularity while not strong enough to withstand various pressures and point contacts within its gaps. In addition, occlusions by the ribs exist. Therefore, we propose fabrication through Stereolithography (SLA) 3D printing. The shell is designed in a custom size and shape which can be modified and scaled. Then, the shell is printed with clear resin followed by surface polishing and application of lacquer. Such approach provides both a crystal-clear shell and modularity. The shell can be easily adapted to additional shapes or scaled.
_Elastomer:_ The elastomer covers the entire exterior of the shell. In such way, the camera can observe deformations of the elastomer through the shell. Here also, the elastomer is made relatively clear. However, in this work we test two designs of the elastomer, both clear while one has additional dot markers. The elastomer is fabricated through molding with a two-piece mold seen in Figure 3. Different materials were tested for 3D-printed mold including SLA resin and Polylactic Acid (PLA). SLA was chosen as it provides a much smoother mold surface which affects the quality of the elastomer. For the dotted elastomer, our approach does not require any complex laser cutting [17] but merely printing a mold with tiny spikes. Smooth-On Solaris(tm) is a clear and colorless silicon used for molding the elastomer. Also, it was found to be resistant to tearing and better suitable for in-hand manipulation [18]. Prior to casting, the interior of the mold was sprayed with lacquer to prevent sticking and allow easy release. After casting, the mold should be placed in a vacuum desiccator at a pressure of 1 bar for removing gas bubbles within the silicon. Having bubbles may damage the clarity of the sensor and prevent its robustness in model transfer. The mold is left to cure for approximately 24 hours. Finally, the elastomer is carefully removed from the mold and glued to the shell using a clear silicone adhesive (e.g., Smooth-On Sil-PoxyTM).
_Reflective coating:_ The exterior of the elastomer is coated with an opaque reflective material aimed to contain and intensify the lighting within the shell. Aluminum powder and grey silicone ink were tested for coatings while the latter proved to be more robust to wear and tear. The silicone base coating is applied by mixing an ink catalyst with a Print-OnTM gray silicone ink and Smooth-On NOVOCSTM silicone solvent gloss in a 1:10:30 mass ratio. Upon testing, the coating was yet prone to tearing. Hence, we formulated a new mixture of silicone by adding Smooth-On EcoFlexTM (00-10) to the mixture as suggested in [18]. The final mixture ratio of 1:10:10:30 was used for catalyst, paint, gel and solvent, respectively. The acquired coating tested durable and reliable upon high contact forces. Prior to applying the coating for the dotted elastomer, the notches formed by the spiked mold were coated in a dip-and-wipe method. The elastomer was covered with a black silicon pigment in the same mass ratio as the reflective marker and then wiped.
Fig. 3: Steps 1-3 depict the fabrication process of the elastomer including (1) mold printing, (2) molding and (3) reflective coating. Steps 4-5 show the fabrication of the clear rigid shell through (4) 3D printing with clear resin and (5) polishing. In step 6, the coated elastomer is glued onto the rigid shell with clear silicon adhesive.
Only the notches retained the black paint after wiping. The reflective coating was then applied.
_Mounting plate:_ The mounting plate is 3D printed with either FDM or SLA printers. Hence, AllSight can be adapted to various robotic hands by simply modifying the design of the interfacing plate.
_Illumination:_ While off-the-shelf LED PCBs are available, they can be bulky and limit the design. Hence, a customized annular PCB was designed. The PCB includes three sets of LEDs with a total of nine LEDs. Note that different combinations of LED colors are supported. Hence, we evaluate and compare between three sequences of LEDs including all white [33], RRRGGGBBB [22, 25] and RGBRGBGB [24] as seen in Figure 2. These combinations will be analyzed for performance. The PCB is placed between the mounting plate, while surrounding the camera, and the edge of the elastomer. The LEDs produce nine cones of light. In order to provide a uniform illumination pattern, a collimator was designed and 3D printed for light piping [22, 24]. The collimator is a ring covering the PCB with holes that adjust the direction of the lighting into the volume of the elastomer.
All components are assembled onto the mounting plate with three screws. The sensor is designed such that each component can easily be replaced or modified. The final shape of the assembled sensor yields a membrane with an hemisphere of 24 mm diameter on a 14 mm height cylindrical base. The mounting plate with the PCB and collimator has a height of 12 mm. Figure 4 shows examples of high-resolution and clear tactile images during contact with various objects.
## IV Tactile State Learning
A contact state \(\mathbf{s}\in\mathbb{R}^{7}\) of AllSight is defined by the spatial location of contact \(\mathbf{x}\in\mathbb{R}^{3}\) on the shell, force vector \(\mathbf{f}\in\mathbb{R}^{3}\) at the contact point, and torsion \(\tau\in\mathbb{R}\) with respect to the normal at the contact. Note that a force vector at the contact includes the normal force \(f_{z}\) and tangential forces \(f_{x}\) and \(f_{y}\) as seen in Figure 5. The proposed approach for training a state estimation model based on real and simulated image datasets is illustrated in Figure 6 and discussed next.
### _Data collection_
We use two sources of training data:
#### Iv-A1 Real-world data
Dataset \(\mathcal{P}_{real}\) is collected by labeling images captured by the internal camera during premeditated contact. A robotic arm equipped with a Force/Torque (F/T) sensor and an indenter touch the surface of the sensor in various contact locations and loads. During contact, an image \(\mathbf{I}_{i}\) is taken along with a state measurement \(\mathbf{s}_{i}\). Contact position \(\mathbf{x}_{i}\) is calculated through the forward kinematics of the arm. Load at the contact (i.e., force vector \(\mathbf{f}_{i}\) and torsion \(\tau_{i}\)) is measured by the F/T sensor fixed at the wrist of the robotic arm. In addition to the contact state, the maximum penetration depth \(d_{i}\) of the indenter is also measured. The acquisition and labeling process yields dataset \(\mathcal{P}_{real}=\{(\mathbf{I}_{i},\mathbf{x}_{i},\mathbf{f}_{i},\tau_{i},d _{i})\}_{i=1}^{N}\) of \(N\) labeled images. In addition, reference image \(\mathbf{I}_{ref}\) is recorded for a sensor without any contact.
#### Iv-A2 Simulated data
AllSight was implemented in the TACTO physics-engine simulator for optical-based tactile sensors [20]. In TACTO, we calibrated the renderer to sufficiently match the real-world by including reference images from real AllSight sensors. To enable and optimize sim-to-real pre-training of the state estimation model, we collected different reference images from different AllSight sensors and used them for augmentation. The acquired images were augmented by adding noise and varying the lighting conditions. TACTO simulator does not support marker motion and, therefore, only images from AllSight sensors with clear shells were used. A simulated dataset \(\mathcal{P}_{sim}\) was generated by labeling \(M\) images captured in TACTO during random premeditated contacts. During contact, an image \(\mathbf{I}_{i}\) is taken along with the contact position \(\mathbf{x}_{i}\) such that \(\mathcal{P}_{sim}=\{(\mathbf{I}_{i},\mathbf{x}_{i})\}_{i=1}^{M}\). Penetration depth \(d_{i}\) can also be acquired but not used here.
### _State estimation_
We adopt a modified ResNet-18 architecture [34] as the state estimation model. The top layer is removed and the flattened output features are fed through two Fully-Connected (FC) layers of size 512 and 256, and with ReLU activation functions. At each iteration, both reference \(\mathbf{I}_{ref}\) and contact \(\mathbf{I}_{t}\) images are down-sampled to resolution \(224\times 224\) and stacked along the channel. The stacked image is then passed through the model to get the estimated state \(\tilde{\mathbf{s}}_{t}\). Simulation data offers a means to collect training data with a much lower effort. While a simulator often cannot provide data similar to the real world, one can pre-train a model prior to fine-tuning it with real data. In such way, a smaller dataset of real world data is required. Hence, the decoder of the contact localization model to approximate \(\mathbf{x}\) is pre-trained with \(\mathcal{P}_{sim}\). Finally, the entire contact model is fine-tuned on the real dataset \(\mathcal{P}_{real}\).
## V Experiments
### _Data collection_
**Simulated dataset.** Dataset \(\mathcal{P}_{sim}\) comprises of simulated tactile images and corresponding contact poses involving six
Fig. 4: Tactile images during contact of AllSight with (left to right) a circuit board, a screw driver, a locking pliers, a screw and an embossed ‘x’.
Fig. 5: The contact state is defined by the position \(\mathbf{x}\) of contact with respect to the sensors coordinate frame, normal force \(f_{x}\) at the contact, tangential forces \(f_{x}\) and \(f_{y}\) and the torsion \(\tau\) about the normal axis.
types of indenters: three spherical indenters, one rectangular, one elliptical and one squared. These indenters were utilized only on AllSight with clear shells. To calibrate the simulation, we employed reference images from six different sensors. In addition, Gaussian noise and various illumination settings were used to augment the simulated images. These are intended to make the model independent of the background and focus on capturing the color gradient observed at the contact pixels. For the localization pre-training, our dataset consists of \(18,000\) samples, with \(1,500\) samples allocated for each indenter-configuration pair.
**Real dataset.** As described in Section IV-B, dataset \(\mathcal{P}_{real}\) is collected in an automated process. The collection setup, seen in Figure 6, consists of an AllSight sensor mounted on a fixed frame. An indenter is mounted to an OpenMANIPULATOR-P arm equipped with a Robotiq FT-300 F\(\backslash\)T sensor. The system is controlled using the Robot Operating System (ROS). During the collection, data stream is acquired in a frequency of 60 Hz. The train and test data are collected in episodes where, at each episode, the robot selects a contact point to press on the sensor's surface. Upon contact, the arm either presses perpendicular to the surface, tilts back and forth about the normal to the surface in order to exert tangential forces or twists the end effector with respect to surface normal for torsion samples. These are chosen arbitrarily and in varying magnitudes within the ranges \(f_{z}\in[-12N,0.8N]\), \(f_{x},f_{y}\in[-5N,5N]\) and \(\tau\in[-0.05Nm,0.05Nm]\). During the pressing, images are taken in \(480\times 480\) resolution, after circular masking and centering, along with contact states. 3D-printed indenters are used for generating different contact geometries including round indenters of radius 3, 4 and 5 mm, square (edge length 6 mm), hexagonal (edge length 3 mm) and elliptical (axis lengths 8 mm and 4 mm) heads.
### _Contact State Estimation_
We assess the precision of state estimation using collected data. To compare performance, we evaluate a series of six sensor configurations seen in Figure 2. These configurations involve cross combination of shells with and without markers, along with three illumination setups: all white, RRRGGGBBB, and RGBRGBRB. Several experiments were conducted to evaluate the contact estimation capabilities of AllSight. In each experiment, the dataset was divided, allocating 80% for training and 20% for testing purposes.
In the first experiment, a comparative analysis was conducted among the six different AllSight configurations. Each sensor was trained using optimized hyper-parameters, utilizing \(N=12,000\) collected samples in \(\mathcal{P}_{real}\) featuring a single spherical indenter of 3 mm radius. Figure 7 shows state estimation results over the test set with three different spherical indenters, 3 mm, 4 mm and 5 mm radii. The contact location, force magnitude \(\|\mathbf{f}\|\) (with \(\mathbf{f}=(f_{x},f_{y},f_{z})^{T}\)) and torsion errors are shown with with respect to \(\|\mathbf{f}\|\). Results show that all configurations achieve low estimation errors with subtle differences. Nevertheless, using markers provide lower errors compared to clear elastomers. Overall, the RRRGGGBBB with markers configuration provides the best estimation. Videos of experiments and demonstrations with the sensor, including in-hand manipulation (Figure 1), can be seen in the supplementary material.
The above model was trained on data collected solely from spherical indenters. Hence, we observe the ability to learn a model for contact state estimation with various indenter geometries. A model was trained for the RRRGGGBBB-markers configuration with a data of size \(N=20,000\) samples featuring spherical, hexagonal, ellipse and square headed indenters. Figure 8 illustrates a heatmap of state estimation errors with respect to the indenter position on the contact surface. The mean position, force and torsion errors are significantly low at 0.59 mm, 0.15 N and 0.0002 Nm, respectively. We also separately trained a model to estimate the penetration depth \(d_{i}\) of the indenter. The low mean error of 0.15\(\pm\)0.14 mm obtained from the sensor indicates its ability to provide reliable geometric information about the contact shape.
### _Data Efficiency_
The above trained models were based on pre-training of the model with simulated data. We now evaluate the
Fig. 6: Data collection system including a robotic arm and an F/T sensor with several indenters. During premeditated contact, images are labeled with contact location and loads. (a) The contact localization part of the model is pre-trained using simulated data from TACTO. (b) Then, the entire state estimation model is fine-tuned using real data. Reference images from the real sensor are used for augmenting the collected data and for generating simulated data for pre-training the contact model.
contribution of the simulation and the sim-to-real effort in the fine-tuning of the model. First, \(1,000\) samples from a real RRRGGGBBB-Clear sensor were taken as test data for evaluation. Two models were trained with up to \(10,000\) real samples: one without any fine-tuning while the second was initially pre-trained with \(4,500\) simulated samples. Figure 9 shows the accuracy improvement over the test data for both models with the addition of real training samples. In zero-shot and no fine-tuning, the pre-trained model already provides good performance. With a small amount of real data (approximately 2,000 samples) for fine-tuning, the pre-trained model achieves good performance. These results emphasize the contribution of the simulation for reducing the effort of real data collection.
### _Transfer Learning_
The ability of the proposed AllSight sensor to provide an accurate contact state estimation has been shown above. Nevertheless, the analysis was performed on the same sensor that was used to collect training data. A practitioner may desire to instantly use a newly fabricated sensor without further training, i.e., zero-shot inference. Alternatively, the practitioner can collect a limited amount of labeled data from the new sensor and fine-tune the model for better results. Hence, we now wish to observe the ability of transferring a learned state estimation model to a newly fabricated sensor. Expanding upon previous findings that demonstrated superior results of the RRRGGGBBB-marker configuration, we conducted an assessment of transfer learning capabilities for the AllSight sensor on both zero-shot and fine-tuning approaches.
We strengthen the state estimation model and train it with \(N=40,000\) samples collected from three distinct sensors using round indenters of 3 mm, 4 mm and 5 mm radii. To enhance the ability of the model to generalize, we augmented the images with lighting randomization. Furthermore, \(2,300\) image samples labeled with full contact states were collected from a newly fabricated sensor for possible fine-tuning of the model. An additional \(600\) labeled image samples not included in the training were collected from the new sensor for testing the model.
Figure 10 exhibits the accuracy of state estimation over the test data of the new sensor with regards to the number of new samples used to fine-tune the model. First, when no data was used to fine-tune the model, i.e., zero-shot inference, the position, force and torsion errors are already low at 3.49\(\pm\)0.41 mm, 2.06\(\pm\)0.23 N and 0.0068\(\pm\)0.0016 Nm, respectively. With the addition of a limited amount of new training data for fine-tuning, accuracy was improved even more. The results provide compelling evidence that the AllSight sensor achieves satisfactory zero-shot performance with ability to further improve. We attribute this success to
Fig. 8: Heatmap of (a) position, (b) force and (c) torsion estimation errors on test data from different indenters and with respect to contact position.
Fig. 10: Transfer estimation errors for a newly fabricated AllSight (RRRRGGGBBB-markers) with regards to the number of new samples used for fine-tuning the model. Results with zero new tactile images are the zero-shot transfer errors without any fine-tuning.
Fig. 7: State estimation errors for the six AllSight configurations with respect to the force magnitude \(\|\mathbf{f}\|\) at the contact.
Fig. 9: Position estimation accuracy with regards to the amount of real training data with and without pre-training using simulated data. |
2308.14717 | Equity Pay In Networked Teams | A group of agents each exert effort to produce a joint output, with the
complementarities between their efforts represented by a (weighted) network.
Under equity compensation, a principal motivates the agents to work by giving
them shares of the output. We describe the optimal equity allocation. It is
characterized by a neighborhood balance condition: any two agents receiving
equity have the same (weighted) total equity assigned to their neighbors. We
also study the problem of selecting the team of agents who receive positive
equity, and show this team must form a tight-knit subset of the complementarity
network, with any pair being complementary to one another or jointly to another
team member. Finally, we give conditions under which the amount of equity used
for compensation is increasing in the strength of a team's complementarities
and discuss several other applications. | Krishna Dasaratha, Benjamin Golub, Anant Shah | 2023-08-28T17:17:57Z | http://arxiv.org/abs/2308.14717v1 | # Equity pay in networked teams
###### Abstract.
A group of agents each exert effort to produce a joint output, with the complementarities between their efforts represented by a (weighted) network. Under _equity compensation_, a principal motivates the agents to work by giving them shares of the output. We describe the optimal equity allocation. It is characterized by a _neighborhood balance_ condition: any two agents receiving equity have the same (weighted) total equity assigned to their neighbors. We also study the problem of selecting the team of agents who receive positive equity, and show this team must form a tight-knit subset of the complementarity network, with any pair being complementary to one another or jointly to another team member. Finally, we give conditions under which the amount of equity used for compensation is increasing in the strength of a team's complementarities and discuss several other applications.
Dasaratha: [email protected], Boston University. Golub: [email protected], Northwestern University. Shah: [email protected], Northwestern University. We thank Vitalii Tubdenov for excellent research assistance. We are grateful (in random order) to Ilya Segal, George Mailath, Aravindan Vijayaraghavan, Omer Tamuz, Marina Halac, Thomas Steinke, Marzena Rostek, Juan Ortner, Alex Wolitzky, Jason Hartline, Evan Sadler, Adam Szeidl, Michael Ostrovsky, Melika Laporace, and Michael Powell, as well as many seminar and conference participants, for valuable conversations and comments.
Our contribution is to solve for the principal's optimal contract, which takes the form of an allocation of equity stakes in the firm's returns. More precisely, there is a principal who allocates the output of a project in case of success, giving each worker a certain percentage. These payments, along with the technology of production described above, induce a network game, whose equilibrium outcomes determine effort levels and team performance. The principal's problem is therefore to design a contract in this moral hazard setting with network spillovers. The effective network of incentive spillovers endogenous that governs equilibrium effort depends on the exogenously given complementarities but is also shaped by the principal's choice of contract. This makes solving the principal's problem much less straightforward than analyzing a network game with exogenously given spillovers.
Our results can be divided into three main categories. The first set concerns the intensive margin: among those agents who get equity, how much should they get? How do their positions in the network of complementarities determine their optimal equity pay? The second set concerns the extensive margin. It turns out that it need not be optimal to give all agents positive equity shares, and the optimal team induced to work may be a subset of the potential contributors. We study the problem of choosing this optimal team. Then, armed with a characterization of optimal equity allocations on both the intensive and extensive margins, the third set of results focuses on some implications of applied interest, examining how equity shares and outcomes vary with the network and the strength of complementarities.
### Intensive margin
We first study the intensive margin of the principal's contracting problem. _Active_ agents in a certain allocation are defined to be those who receive positive equity; these turn out to be precisely those agents who exert positive effort in equilibrium (since the incentive to work comes only from equity pay). Our first set of results analyzes the optimal allocation among these agents, showing that it satisfies a simple _neighborhood balance_ condition. Consider the weighted sum of equity shares allocated to the neighbors of an active agent, where the weights are the complementarities between an agent and the neighbor. Our first result, which underlies our analysis of the intensive margin, states that this weighted sum does not depend on the agent's identity--i.e., the weighted sum is the same for all active agents.
A notable property implied by this is that agents' equilibrium efforts under it are proportional to their equity shares. This property is interesting in that it does not hold for a generic equity allocation, but we show that it must be satisfied for the principal's allocation to be optimal. The optimal allocation also features _balanced neighborhood actions_: the total actions of an agent's neighbors, weighted by the strength of their complementarities, is the same for all active agents. We show the balanced neighborhood equity and action conditions are together equivalent to the principal's first-order conditions. Satisfying these turns out to balance the spillover effects of eliciting more effort across active agents and to equalize the value of inducing any active agent to work more.
The endogenous equalization of incentives and activity across active neighborhoods is notable. In standard models of games with network complementarities (see below for a brief discussion of
the literature), a theme is that more central agents and communities endogenously have higher incentives and higher neighborhood activity. In our setting, within the set of active agents, this inequality is endogenously muted: the principal has incentives to allocate equity so as to balance out both incentives and activity at the neighborhood level in the way we have described. In our discussion of the literature below, we comment on how this changes the analysis relative to canonical studies of network games.
The neighborhood balance result permits an explicit characterization of the share of equity each agent receives under the optimal allocation. The result can be reformulated as stating that an agent's share is proportional to a certain measure of that agent's centrality in the subnetwork of active agents. A vector \(\mathbf{x}\) is called an _equity centrality vector_ for a network with weight matrix \(\mathbf{W}\) if \(\mathbf{W}\mathbf{x}=\mathbf{1}\), where \(\mathbf{1}\) is the column vector of all ones. Equity centralities exist and are uniquely determined (\(\mathbf{x}=\mathbf{W}^{-1}\mathbf{1}\)) whenever \(\mathbf{W}\) is invertible--a property that holds generically.
### Extensive margin
The above results fully characterize optimal equity allocations and associated equilibria given an active set, i.e., a set of agents receiving positive equity shares at the optimal allocation. To characterize optimal contracts fully, however, we must also optimally choose the active set, which need not contain all agents. This discrete optimization problem requires new insights beyond the intensive margin analysis.
We approach the problem by first reducing it to a quadratic program. This allows us to deduce two results implying considerable structure on the active set. Substantively, we find that highly connected subnetworks are optimal for the principal. Our first result on this is that the active set under any optimal allocation always has diameter at most two in the complementarity network, meaning that any two active agents either have complementarities with each other or both have complementarities with some shared active neighbor. Our interpretation of this result is that an optimal active set should be sufficiently "tight-knit." We give a rough intuition for this result. When equity is given to members of a tight-knit group, the incentives given to one member also motivate effort by the others due to spillovers. On the other hand, if a given amount of equity is given to two subsets of agents that are not tightly linked in the complementarity network, then the equity given to one subset dilutes the incentives of the other without a strong counteracting beneficial effect of spillovers. Thus, a principal prefers to "concentrate" incentives and focus them on a single highly complementary group. This force is seen even more sharply in our second main result in this section, when we restrict attention to the standard benchmark of unweighted networks--in which all non-zero complementarities have the same strength. In this case, any maximum clique (a subnetwork with complementarities between all pairs of agents) is an optimal active set.
Our overall interpretation of these results is that when a firm relies on a single joint outcome to provide incentives, teams with dense or tightly-knit complementarities outperform more dispersed teams. A further implication is that the principal often prefers to make a small team exert large efforts in order to make the most use of complementarities, rather than eliciting less effort from a larger group with more diffuse complementarities. Under convex effort costs, this can entail a considerable loss in agent welfare compared with other policies yielding somewhat less output.
**Implications.** Our final set of results explores some implications of our theoretical characterizations motivated by applied questions about the structure of teams and their compensation.
An important set of questions concerns how the optimal allocation and the probability of success depend on the network and the strength of complementarities. Our expression for equity centrality lets us calculate explicitly how the optimal shares vary as the network changes. These comparative statics show that equity centrality can behave quite differently from measures such as Bonacich centrality, relevant in network games with exogenous incentives. In particular, monotonicity properties, whereby strengthening one's network links necessarily increases centrality, do _not_ hold for equity centrality. We illustrate this non-monotonicity and others in examples of three-agent networks. An implication is that investments that strengthen complementarities, though they may be beneficial for aggregate output, are not necessarily in agents' own interests--and this can occur even if the investments are not costly for the agents.
Questions of network design are interesting more generally. Which links are most valuable to the principal? To make some progress toward understanding this issue, we ask how strengthening links between agents affects the team's probability of a successful outcome. The value to the team of strengthening a link takes a surprisingly simple form: it is proportional to the product of the equity shares allocated to the two agents. So if the principal can strengthen some complementarities, it is most valuable to focus on connections between agents who are already (equity) central.
A final, practically important, question is how much equity a firm should devote to compensation. The trade-off is that allocating more equity to the team elicits more effort from them, but the principal gets a smaller share of the resulting pie. How a principal manages this trade-off depends on the environment, including the network and the strength of complementarities. Our last result gives conditions under which a profit-maximizing firm wants to distribute more shares to agents when complementarities are stronger. Intuitively, greater complementarities in production increase the returns to allocating shares to agents, since each share now drives more additional effort through spillovers. Establishing this for general complementarity networks turns out to be subtle.
**Literature.** We close with a brief discussion of relevant literature and the nature of our contribution. The literature on network games is extensive, going back to seminal papers including Goyal and Joshi (2003), Ballester et al. (2006), and Galeotti, Goyal, Jackson, Vega-Redondo, and Yariv (2010). The quadratic model of effort that we study has emerged as a focal point due to its tractability, connections to network centrality measures that are of independent interest, and amenability to empirical work. Much recent research has occurred even since the latest major surveys, such as Jackson and Zenou (2015), Bramoulle and Kranton (2016), and Zenou (2016).1 However, the study of how network complementarities interact with the design of incentive schemes--while a topic of obvious theoretical interest and practical relevance--is in its early stages. For example, Belhaj and Deroian (2018) studies a problem where the principal must target a single agent and offer a contract. Shi (2022) studies a model in which the network affects output via a distinct
"helping effort" that agents can exert to change others' marginal costs of effort (rather than direct complementarities as in our model).
We contribute to this literature both by posing a simple optimal contracting problem in a canonical network games model and by deriving a sharp description of incentives and behavior at the optimum quite different from any appearing in the works just mentioned. At a technical level, our problem has an interesting complication. Most network game analyses have a fixed network of spillovers, describing how an agent's optimal action (or some other analogous variable) depends on others' actions. To the extent that planner interventions of various kinds on nodes' incentives are studied in these network game models--as for example in Galeotti, Golub, and Goyal (2020), Leister, Zenou, and Zhou (2022), or Parise and Ozdaglar (2023)--the interventions typically affect a node attribute and do not change the spillover network. However, in our setting, when a principal varies the equity stakes that different agents hold, the effective network of spillovers determining equilibrium behavior also changes. For example, when an agent gets a larger share of the group output, he cares more about the joint output with every collaborator, making him more strategically sensitive to those collaborators' efforts. The resulting endogeneity makes the principal's optimization problem substantially richer than it would be with a fixed spillover network. It is therefore interesting that the model nevertheless affords a simple characterization of optimal interventions in terms of the exogenous complementarity network.
Our work also ties into a large economics literature on the design of incentives, going back to Holmstrom's (1982) seminal contribution on incentives for teams when individual effort is not observable or not contractible. Within this literature we are closest to Bernstein and Winter (2012), which analyzes optimal incentives to induce all agents to exert effort in a binary-action game with network spillovers.2 We note two differences. First, the binary-action game in Bernstein and Winter (2012) admits multiple equilibria for many parameters, and the focus of their analysis is full implementation of the maximal action profile. We instead consider a framework with a unique equilibrium and design incentives that maximize performance at that equilibrium. Second, network structure affects the optimal contract in Bernstein and Winter (2012) primarily through asymmetry in links: on undirected networks, there is a lot of multiplicity in optimal contracts--with one such contract for every possible ranking of agents. In our model, by contrast, optimal contracts on a given undirected network depend intricately on the network structure and typically must motivate particular agents more than others.
Footnote 2: Related implementation problems are studied in Halac, Kremer, and Winter (2020), who consider heterogeneous agents without a network structure, and Lu and Song (2022), who allow network monitoring rather than network spillovers.
## 2. Model
We consider a model with one principal and \(n\) agents, \(N=\{1,2,\ldots,n\}\). These agents take real-valued actions \(a_{i}\geq 0\). Denote the joint action profile by \(\mathbf{a}=(a_{1},\ldots,a_{n})\). To represent the complementarities among the agents, we define a weighted network with adjacency matrix \(\mathbf{G}\), so
\(G_{ij}\geq 0\) is the weight of the link from \(i\) to \(j\). The neighborhood of agent \(i\) is \(N(i)=\{j:G_{ij}>0\}\). We call a network unweighted if \(G_{ij}\in\{0,1\}\) for all \(i\) and \(j\).
Agents jointly work on a project which either succeeds or fails. The project outcome depends on agents' actions \(\mathbf{a}\), the network \(\mathbf{G}\), and a parameter \(\beta>0\) measuring the strength of complementarities between agents. Let \(S\in\{0,1\}\) be a binary variable corresponding to project success. We assume the probability of success is \(P(Y)\), where \(Y(\mathbf{a})\) is called the _team performance_ and \(P:\mathbb{R}_{\geq 0}\to[0,1)\) is strictly increasing, concave, and twice differentiable. The team performance is the sum of a term that is linear in actions--corresponding to agents' standalone contributions--and a quadratic complementarity term:
\[Y(\mathbf{a})=\sum_{i\in N}a_{i}+\frac{\beta}{2}\sum_{i,j\in N}G_{ij}a_{i}a_{j}.\]
A successful project produces an output, whose value we normalize to one, whereas a failed project produces a value equal to zero.
Throughout, we take the matrix \(\mathbf{G}\) to be symmetric, or equivalently the network to be undirected. Because all payoffs will only depend on \(\mathbf{G}\) through the team performance \(Y(\mathbf{a})\), this assumption is without loss of generality (as we can replace \(\mathbf{G}\) with \((\mathbf{G}+\mathbf{G}^{T})/2\) without changing team performance). We also assume \(\mathbf{G}\) is not identically zero.
The principal observes the project outcome but does not observe agents' actions. (When we use pronouns, we use "she" for the principal and "he" for an agent.) To incentivize effort, the principal offers a contract, which specifies a non-negative transfer \(t_{i}(S)\) to each agent that can depend on the project outcome. Agents maximize the expectation of the following payoff, which is quasi-linear in monetary transfers and has a quadratic cost of effort:
\[u_{i}=t_{i}(S)-\frac{a_{i}^{2}}{2}.\]
The environment (including the network and contract) are common knowledge among agents, and the network is known to the principal when she is choosing the contract.
### Objectives
We will see in Section 3.1 that, given any contingent payments, there is a unique Nash equilibrium, which we call \(\mathbf{a}^{*}\). The principal optimizes over contracts, expecting this equilibrium to be played. In this section, we define two objectives for the principal; our results will apply to both objectives, except where we explicitly state otherwise.
Under the _residual profit_ objective, the principal maximizes the expected value of project success (normalized to 1) minus payments to agents
\[P(Y(\mathbf{a}^{*}))-\mathbb{E}\left[\sum_{i\in N}t_{i}(S)\right].\]
It turns out to be optimal to give all agents a transfer of zero when the project fails (as we will argue formally in Section 3.1), so without loss of optimality we can consider contracts as transfers \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{n})\) to each agent when the project succeeds. We will interpret these payoffs as equity
shares in the project. Rewriting the expected residual profit of the principal, the residual profit maximization problem can be written as
(RP) \[\text{choose }\boldsymbol{\sigma}\text{ to maximize }V(\boldsymbol{\sigma})= \left(1-\sum_{i=1}^{n}\sigma_{i}\right)P(Y(\mathbf{a}^{*})).\]
Under the _success probability_ objective, the principal maximizes the probability of success \(P(Y(\mathbf{a}^{*}))\) subject to the constraint that the total transfers are no larger than the output from the project. This constraint rules out positive transfers when the project fails. Thus, the success probability maximization problem is
(SP) \[\text{choose }\boldsymbol{\sigma}\text{ to maximize }P(Y(\mathbf{a}^{*})) \text{ subject to }\sum_{i}\sigma_{i}\leq 1.\]
The \(\sigma_{i}\) can again be interpreted as equity shares.
An alternative model, which turns out to be very similar, is that the project yields a deterministic monetary output \(P(Y)\), where \(P:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) is a strictly increasing, concave, and twice differentiable function. Our analysis applies essentially unchanged to the study of this production function with either objective, provided that the principal uses _linear_ contracts, i.e., giving each agent a transfer equal to a fixed equity share \(\sigma_{i}\) of output.3 In the rest of the paper, we will work with the binary-outcome model.
Footnote 3: If \(P\) is not bounded, this deterministic model requires an additional assumption that \(\beta\) is small enough so that the principal’s feasible payoffs are bounded.
### Discussion of modeling assumptions
There are several dimensions of our modeling assumptions that are worth commenting on. First, we assume that the project outcome--the random variable \(S\)--is the only observable consequence of any agent's effort. In other words, agents cannot be paid directly for their efforts \(a_{i}\). The motivation behind this modeling assumption (discussed in Holmstrom (1982) and the ensuing literature) is that many aspects of agents' productive efforts are not observable, or not possible to make binding legal commitments over.
One could enrich the model with more general signals about agents' efforts that a contract could condition on. Our basic model highlights relationships between networks and equity pay that we expect would be relevant in such extensions, as long as equity pay was an important part of motivating (the marginal unit of) effort.
Second, we consider a binary-outcome model where the contracts available to the principal are essentially equity schemes, in which each agent's compensation is a share of the output of the firm (see Tirole (2012), for example, for a similar modeling technique). Our main motivation for being interested in equity payments is that this is an extremely popular form of incentive in certain types of organizations, along with closely related instruments such as options (see, e.g., Levin and Tadelis (2005)). Various models have been used to analyze the reasons for using equity pay as opposed to other contracts when they are available (Holmstrom and Milgrom, 1987; Dai and Toikka, 2022).
## and actions for active agents
This section characterizes the optimal allocation of equity among those who receive positive shares, as well as the induced equilibrium. The first subsection describes the unique equilibrium of the network game given a fixed contract. The second subsection states our first main result, which describes the equity shares under the optimal contract and the corresponding equilibrium actions, and discusses various economic consequences.
### Equilibrium of the network game
We now show equilibrium is unique given any contract \((t_{i})_{i\in N}\) and provide a characterization of equilibrium actions. Because agents' incentives depend only on the difference \(t_{i}(1)-t_{i}(0)\) between transfers conditional on success and failure, we can shift payments and assume \(t_{i}(0)=0\) without loss of generality for proving uniqueness. Similarly, this shift can only improve the principal's payoff, so it is without loss of optimality in the principal's problem. Thus, from now on, we will let contracts be described by equity shares \(\boldsymbol{\sigma}\). Fixing such a vector (which need not be optimal), agents' payoffs are
\[U_{i}(\mathbf{a},\mathbf{G},\boldsymbol{\sigma})=P\left(Y\right)\sigma_{i}- \frac{a_{i}^{2}}{2}.\]
Since agents receive shares of the team's output, their marginal returns to effort depend on others' actions. The first-order conditions for agents' best responses are
\[a_{i}=P^{\prime}(Y)\sigma_{i}\left(\beta\sum_{j}G_{ij}a_{j}+1\right).\]
The following result states that these first-order conditions characterize the unique Nash equilibrium.
**Proposition 1**.: _Fixing \(\boldsymbol{\sigma}\), there exists a unique Nash equilibrium. The equilibrium actions \(\mathbf{a}^{*}\) and team performance \(Y^{*}\) solve the equations_
\[[\mathbf{I}-P^{\prime}(Y^{*})\beta\boldsymbol{\Sigma}\mathbf{G}]\mathbf{a}^{* }=P^{\prime}(Y^{*})\boldsymbol{\sigma}\text{ and }Y^{*}=Y(\mathbf{a}^{*}), \tag{1}\]
_where \(\boldsymbol{\Sigma}=\operatorname{diag}(\boldsymbol{\sigma})\) is the diagonal matrix with entries \(\Sigma_{ii}=\sigma_{i}\)._
Note that the result entails a positive equilibrium action for those agents with \(\sigma_{i}>0\), and an action of zero otherwise.
For intuition, suppose that \(P(Y)=\alpha Y\) for all relevant action vectors, where \(\alpha>0\) is a constant. Then equilibrium actions are given by
\[\mathbf{a}^{*}=[\mathbf{I}-\alpha\beta\boldsymbol{\Sigma}\mathbf{G}]^{-1} \alpha\boldsymbol{\sigma}. \tag{2}\]
When all agents receive equal shares, equilibrium actions are proportional to Bonacich centralities (as in Ballester et al. (2006)). For arbitrary shares, the actions are a modified version of Bonacich centrality with respect to a network \(\boldsymbol{\Sigma}\mathbf{G}\).
The network \(\boldsymbol{\Sigma}\mathbf{G}\) reflects spillovers; its \((i,j)\) entry is the slope of \(i\)'s best-response in \(j\)'s action. One can see from the form of this matrix that it is endogenous to the principal's choice of equity
compensation, as discussed in the introduction: when an agent gets a larger share of the group output, the agent cares more about the joint output with every collaborator, making the agent more strategically sensitive to those collaborators' efforts.
### Balance conditions in the optimal contract
Our first main result characterizes the optimal allocation and equilibrium actions among the set of agents receiving positive shares.
We have noted that an agent exerts positive effort under a given contract if and only if he receives positive equity. We will thus call an agent _active_ under a given equity allocation \(\boldsymbol{\sigma}\) if he receives a positive equity share \(\sigma_{i}>0\) and _inactive_ otherwise.
**Theorem 1**.: _Suppose \(\boldsymbol{\sigma}^{*}\) is an optimal allocation and \(\mathbf{a}^{*}\) and \(Y^{*}\) are the induced equilibrium actions and team performance, respectively. The following properties are satisfied:_
1. _Balanced neighborhood equity: There is a constant_ \(c>0\) _such that for all active agents_ \(i\)_, we have_ \((\mathbf{G}\boldsymbol{\sigma}^{*})_{i}=c\)_._
2. _Actions are proportional to shares:_ \(\mathbf{a}^{*}=\mu\boldsymbol{\sigma}^{*},\) _where_ \[\mu=\frac{P^{\prime}(Y^{*})}{1-P^{\prime}(Y^{*})\beta c}.\]
3. _Balanced neighborhood actions: For all active agents,_ \((\mathbf{G}\mathbf{a}^{*})_{i}=\mu c\)_._
Omitted proofs are in Appendix A. The conditions in the theorem state, at a high level, that it is optimal for the principal to equalize the complementarities motivating various active agents to work. We will give an intuition for why this is a necessary feature of an optimal contract below, but first we spell out the content of the result and some simple relationships among its parts.
The property of balanced neighborhood equity says that for each active agent \(i\), the sum \(\sum_{j}G_{ij}\sigma_{j}\) of shares given to neighbors of \(i\), weighted by the strength of \(i\)'s connections to those neighbors in \(\mathbf{G}\), is equal to the same number (i.e., is not dependent on \(i\)).
Part (b) says that under the optimal allocation, each active agent's equilibrium effort is a constant multiple of the agent's equity share. This turns out to follow from part (a), as we now explain. For a given optimal share vector \(\boldsymbol{\sigma}^{*}\), let us write \(\boldsymbol{\Sigma}^{*}=\text{diag}(\boldsymbol{\sigma}^{*})\); then (a) is equivalent to saying that \(\boldsymbol{\sigma}^{*}\) is a right-hand eigenvector of \(\boldsymbol{\Sigma}^{*}\mathbf{G}\) with eigenvalue \(c\). Now consider the equation we get when we solve (1) for \(\mathbf{a}^{*}\):
\[\mathbf{a}^{*}=\underbrace{P^{\prime}(Y^{*})\left[\mathbf{I}-P^{\prime}(Y^{*} )\beta\boldsymbol{\Sigma}^{*}\mathbf{G}\right]^{-1}}_{\mathbf{M}}\boldsymbol{ \sigma}^{*}.\]
Our observation about \(\boldsymbol{\sigma}^{*}\) implies that it is also an eigenvector of the matrix \(\mathbf{M}\) with eigenvalue \(\mu\); that establishes (b).4
Footnote 4: An alternative argument is to expand the expression for \(\mathbf{a}^{*}\) using the Neumann series to write \(\mathbf{a}^{*}=P^{\prime}(Y^{*})\sum_{k=0}^{\infty}P^{\prime}(Y^{*})^{k} \beta^{k}(\boldsymbol{\Sigma}^{*}\mathbf{G})^{k}\,\boldsymbol{\sigma}^{*}\); then repeatedly use balanced neighborhood equity to rewrite this as \(P^{\prime}(Y^{*})\sum_{k=0}^{\infty}P^{\prime}(Y^{*})^{k}\beta^{k}c^{k} \boldsymbol{\sigma}^{*}\).
The property of balanced neighborhood actions states that for each active agent \(i\), the sum of actions of neighbors of \(i\), weighted by the strength of \(i\)'s connections to those neighbors in \(\mathbf{G}\), is equal to the same number, \(\mu c\). This follows immediately from (a) and (b).
The system of equations in part (a) of Theorem 1 can be solved explicitly for the optimal shares \(\boldsymbol{\sigma}^{*}\) as long as the relevant adjacency matrix is invertible, which holds for generic weighted networks. Motivated by this, we define equity centrality:
**Definition**.: _Given a weighted network with non-singular adjacency matrix \(\mathbf{W}\), the **equity centrality** of agent \(i\) is \(\big{(}\mathbf{W}^{-1}\mathbf{1}\big{)}_{i}\)._
Theorem 1 then entails that under an optimal allocation, for each active agent \(i\), the equity share \(\sigma_{i}^{*}\) is proportional to \(i\)'s equity centrality in the subnetwork \(\widetilde{\mathbf{G}}\) of active agents for that allocation; the same is true for actions, with a different constant of proportionality.
Equity centrality behaves quite differently from standard measures such as Bonacich centrality. In particular, the inverse \(\mathbf{W}^{-1}\) changes non-monotonically as \(\mathbf{W}\) changes. We will see in Section 5 that this can induce non-monotonicities in the optimal allocation and the resulting actions and utilities.
An implication of our characterization of the optimal equity is that the ratio of shares allocated to two active agents is independent of the complementarity parameter \(\beta\), and depends only on the network \(\mathbf{G}\) and the set of active agents. (Our results in Section 4 will imply that the optimal active sets are also independent of \(\beta\).) Since the induced actions are proportional to shares, the ratio between the equilibrium actions of any two active agents is independent of \(\beta\) as well. This property is surprising, because in standard network games analyses (Ballester et al., 2006), equilibrium actions are highly sensitive to \(\beta\). This dependence is endogenously exactly canceled out by the planner's optimal equity allocation.
_Remark 1_.: The result of Theorem 1 holds under either the residual profit or success probability objective, as will all our characterizations of optimal contracts. In fact, the proof of Theorem 1 establishes a stronger statement: if \(\boldsymbol{\sigma}^{*}\) solves the problem of maximizing \(Y(\mathbf{a}^{*})\) subject to the constraint \(\sum_{i}\sigma_{i}=s\), where \(s\) is any positive number, then \(\boldsymbol{\sigma}^{*}\) and \(\mathbf{a}^{*}\) must satisfy the conditions given in the theorem.
#### 3.2.1. Intuition for Theorem 1
We can provide some intuition for the balanced neighborhood equity and action conditions by informally arguing they are _sufficient_ for a certain principal first-order condition that must hold at the optimal contract. First, we can redescribe the principal's problem as choosing _actions_ from among those that can be implemented at some optimal contract. Suppose we perturb agent \(i\)'s action exogenously by \(\epsilon\) and follow the consequences through the system of best responses. Each agent \(j\)'s best response is
\[P^{\prime}(Y^{*})\sigma_{j}\left(1+\beta\sum_{j^{\prime}\in N}G_{jj^{\prime}} a_{j^{\prime}}\right),\]
so the direct impact on \(j\)'s action, given by the \(j^{\prime}=i\) term, is to increase it by \(\beta P^{\prime}(Y^{*})\sigma_{j}G_{ji}\epsilon\), which (by symmetry of \(\mathbf{G}\)), is equal to \(\beta P^{\prime}(Y^{*})\sigma_{j}G_{ij}\epsilon\). The balanced neighborhood equity condition implies that the sum of these direct impacts does not depend on \(i\). That is, the direct impact on
the aggregate effort level does not depend on which agent's action we perturbed. Iterating this argument, the indirect impact of increasing \(i\)'s actions on the total of actions does not depend on \(i\)'s identity either. So the balanced neighborhood equity condition implies that increasing any agent's action marginally has the same effect on the total of all actions.
Two gaps remain between this indifference and the principal's first-order condition, which requires that redistributing shares among active agents locally does not affect output. First, some agents might increase their actions more than others when given \(\epsilon\) additional equity. (So even if a principal is indifferent between any same-sized perturbation to actions, she may be able to achieve some of these more cheaply than others.) This problem does not arise precisely in case actions are proportional to shares, which is implied by the two conditions in the theorem. Second, output is not only the sum of shares; the quadratic term in output could change the principal's first-order condition. But it turns out the balanced neighborhood action condition also implies the first-order conditions for output is actually the same as for the sum of efforts.
We have argued that the two conditions in Theorem 1 imply the principal's first-order condition is satisfied. It is not obvious that these two conditions--balanced neighborhood equity and balanced neighborhood actions--can be satisfied simultaneously; indeed, this depends on some of the specific structure of our model. The full proof establishes that these conditions can be jointly satisfied and that they are also _necessary_ conditions for an allocation to be optimal.
## 4. The extensive margin: Active and inactive agents
We next ask which agents are active and which agents are inactive under optimal contracts for a given complementarity network. Recall an agent is defined to be active under a given allocation if he receives positive equity.
The main results in this section show that the active sets under optimal allocations are highly connected subnetworks. We first show that any optimal active set has diameter at most two in the complementarity network \(\mathbf{G}\). We then show that in the special case of unweighted networks, there is an optimal allocation with a clique as the active set. We interpret these results as saying that, when incentives to exert effort are based only on global outcomes (and not local measures of performance), smaller and more highly connected teams outperform larger and more dispersed teams.
It is not immediate from Theorem 1 which agents are active; indeed, there can be several candidate active sets compatible with the condition of the theorem. (Theorem 1 implies that the candidate active sets are the subnetworks \(\widetilde{\mathbf{G}}\subseteq\mathbf{G}\) such that the row sums of \(\widetilde{\mathbf{G}}^{-1}\) are positive.)
The key to our analysis is the fact (formalized as Lemma 3 in Appendix A) that we can find the active sets under optimal allocations by solving the following optimization problem5 among
non-negative equity shares summing to a fixed value:
\[\begin{split}\max_{\boldsymbol{\sigma}}& c\\ \text{subject to}&(\mathbf{G}\boldsymbol{\sigma})_{i}=c \text{ whenever }\sigma_{i}>0.\end{split} \tag{3}\]
The reason for this reduction is that, when the balanced equity condition holds, we can rewrite output as the following function of the total shares allocated to agents and \(c\), the total weighted equity in each active agent's neighborhood:
\[Y(\mathbf{a}^{*})=\left(\sum_{i=1}^{n}\sigma_{i}\right)\left(\frac{P^{\prime} (Y^{*})}{1-\beta P^{\prime}(Y^{*})c}+\frac{\beta P^{\prime}(Y^{*})^{2}c}{2(1- \beta P^{\prime}(Y^{*})c)^{2}}\right). \tag{4}\]
The right-hand side is an increasing function of \(c\). So the principal can choose an optimal active set by maximizing the constant \(c\) in the balanced equity condition.
One important implication of this is the following invariance of the principal's optimization to the complementarity parameter:
**Proposition 2**.: _If \(\boldsymbol{\sigma}_{0}^{*}\) is a solution to the principal's problem under \(\beta_{0}>0\) and \(\beta_{1}\) is another complementarity parameter, then there is a constant \(k>0\) so that \(k\boldsymbol{\sigma}_{0}^{*}\) solves the principal's problem under \(\beta_{1}\)._
This says essentially that the active set and the ratios in which equity is optimally allocated are both independent of \(\beta\). This follows from observing that the optimization problem to solve for the active set does not depend on \(\beta\). For the success probability objective, the constant \(k\) is equal to \(1\). Under the residual profits objective, the principal may adjust the total share of output distributed to agents as \(\beta\) changes (see Section 5.2).
Before turning to other general implications of the lemma, we study the example of a three-agent network. This example, which is the smallest interesting case of our model, shows that the active set can depend in non-trivial ways on network structure. We describe the optimal allocation here and provide details in Appendix B.
**Example 1**.: Consider a weighted network with three agents without self-links (see Figure 1). Since it is optimal to have both agents active in networks with two agents, three-agent networks are the smallest non-trivial example of our model.
Without loss of generality, we can assume \(G_{12}\geq G_{13}\geq G_{23}\) and choose the normalization \(G_{12}=1\), so that the adjacency matrix is
\[\mathbf{G}=\begin{bmatrix}0&1&G_{13}\\ 1&0&G_{23}\\ G_{13}&G_{23}&0\end{bmatrix}.\]
The optimal active set consists of either agents \(1\) and \(2\) or all three agents. If all agents are active, then Theorem 1 implies that the optimal shares must solve
\[\sigma_{1}^{*}=\frac{1+G_{13}-G_{23}}{2G_{13}}c,\quad\sigma_{2}^{*}=\frac{1+G _{23}-G_{13}}{2G_{23}}c,\quad\sigma_{3}^{*}=\frac{G_{23}+G_{13}-1}{2G_{23}G_{13 }}c,\]
for some constant \(c\). Since we must have \(\sigma_{3}^{*}>0\) for all agents to be active, a necessary condition for \(\{1,2,3\}\) to be an optimal active set is \(G_{13}+G_{23}>1\).
This necessary condition also turns out to be sufficient; we now sketch the argument for this. A calculation shows that distributing a total amount \(s\) of equity consistent with these ratios gives a value of
\[c=\frac{2G_{13}G_{23}s}{2(G_{23}+G_{13})-1-(G_{13}-G_{23})^{2}}.\]
On the other hand, if a total amount \(s\) of equity is distributed among two active agents, then \(c=s/2\). A bit more algebra confirms that
\[\frac{2G_{13}G_{23}s}{2(G_{23}+G_{13})-1-(G_{13}-G_{23})^{2}}>\frac{s}{2}\]
whenever \(G_{13}+G_{23}>1\).
So the active set includes all three agents if and only if \(G_{13}+G_{23}>1\). In this example, the principal maximizes \((\mathbf{G}\boldsymbol{\sigma})_{i}\) subject to the balanced equity condition by choosing an active set that maximizes the minimum weighted degree of an agent in the induced subnetwork. If the least connected agent's complementarities are too weak, the principal prefers to exclude that agent from the team and concentrate on incentivizing the two agents with stronger complementarities.
Moving back to general networks, we now state two results showing that the principal prefers a highly connected active set. The first result on this holds for any network satisfying our maintained assumptions. It states that the network distance between any pair of agents in the active set is small. Recall that the diameter of a network is the longest distance6 between any two agents in the network.
Footnote 6: Shortest path consisting of links with positive weights.
**Proposition 3**.: _The diameter of the active set under any optimal allocation is at most \(2\)._
The idea is that if two agents \(i\) and \(j\) are at distance larger than two from each other, their neighborhoods are disjoint. An optimal allocation must divide shares between the disjoint sets \(\{i\}\), \(\{j\}\), and \(N(i)\), and \(N(j)\) (as well as any other potential agents in the active set). It must also satisfy the balanced neighborhood equity condition, which implies that the shares allocated \(N(i)\) and \(N(j)\) cannot be very large. Indeed, the proof shows that any such allocation is dominated by
Figure 1. Three agent weighted graph with weights \(G_{12},G_{13}\), and \(G_{23}\).
allocating shares to only two agents: splitting shares evenly between two agents connected by a link with the largest weight in the network gives a higher value of the constant \(c\).
In the special case of unweighted networks, the message that highly connected active sets are optimal can be sharpened: It is an optimal solution for the principal to choose any clique7 of maximum size and divide shares equally among the agents in this clique.
Footnote 7: A subnetwork with links between all pairs of agents.
**Theorem 2**.: _If \(\mathbf{G}\) is an unweighted network, then any maximum clique is the active set at an optimal allocation._
When connections are unweighted, choosing a subset that is as densely connected as possible leads to at least as high a payoff as choosing a larger but more sparsely connected subset, even if all agents in the larger subset have higher degree.
The proof applies Lemma 3: given an optimal allocation with an arbitrary active set, we find a clique within that active set for which the constant \(c\) is as large. We produce such a clique exists by sequentially constructing a series of agents who are all connected to each other. To do so, the balanced equity condition for the optimal allocation has to be used carefully at each step of the construction.
In general, the active set need not be unique in unweighted networks, so there can be other optimal allocations giving the same payoff to the principal as a clique of maximum size. For example, in a star network, any set including the central node and at least one peripheral node is the active set at some optimal allocation.8 The next example shows that there can also be optimal allocations that differ more substantially from maximum cliques.
Footnote 8: Among allocations giving \(s\) shares to agents, a given allocation is optimal if and only if it gives \(s/2\) shares to the central agent and \(s/2\) shares to the peripheral agents.
**Example 2**.: Consider an even number of agents \(n\) arranged in a circle. Let \(\mathbf{G}\) be the undirected network in which each agent is connected to all other agents except the diametrically opposite agent in the circle: for all distinct \(i\) and \(j\) we let \(G_{ij}=1\) if \(|i-j|\neq n/2\) and \(G_{ij}=0\) if \(|i-j|=n/2\). The network structure is shown in Figure 2.
Suppose that we want to allocate \(s\) shares to agents optimally. The maximum cliques have size \(n/2\), and dividing the \(s\) shares evenly within any maximum clique gives \((\widetilde{\mathbf{G}}\boldsymbol{\sigma})_{i}=\frac{n-2}{n}\cdot s\). All agents in the full network have degree \(n-2\), so dividing \(s\) shares evenly among all agents also gives \((\widetilde{\mathbf{G}}\boldsymbol{\sigma})_{i}=\frac{n-2}{n}\cdot s\). It follows that the set of all agents, as well as all maximum cliques, are possible active sets, depending on which optimal allocation is chosen.
This example shows that the principal can be indifferent between very different active sets--a point which has implications for the welfare of agents, as we discuss further in Section 6.3.
## 5. Implications
In this section, we explore some implications of our analysis that illuminate how the optimal contract depends on the environment. We focus on the effects of changes in the network \(\mathbf{G}\) and the
parameter \(\beta\) describing the strength of complementarities. Section 5.1 examines how the equity allocation and the resulting team performance depend on the network of complementarities. The results provide some insights about which networks might be preferred by the principal and by agents. Section 5.2 then asks how the share of equity retained by the principal (in the residual profit maximization problem) depends on the strength of complementarities.
### Varying the network
Our first result describes how optimal allocations vary as the network changes. We write \(\frac{\partial}{\partial\mathcal{G}_{jk}}\) for the derivative in the weight \(G_{jk}=G_{kj}\) of the link between \(j\) and \(k\). Recall that given an allocation, we write \(\widetilde{\mathbf{G}}\) for the adjacency matrix restricted to active agents.
**Proposition 4**.: _Suppose that under \(\mathbf{G}\) there is a unique optimal equity allocation9\(\boldsymbol{\sigma}^{*}\), with agents \(i\), \(j\), and \(k\) all active. The derivative of agent \(i\)'s optimal share as we vary the weight of the link between \(j\) and \(k\) is_
Footnote 9: We expect this hypothesis to be satisfied for generic networks.
\[\frac{\partial\sigma_{i}^{*}}{\partial G_{jk}}=-(\widetilde{\mathbf{G}}^{-1} )_{ik}\sigma_{j}^{*}-(\widetilde{\mathbf{G}}^{-1})_{ij}\sigma_{k}^{*}+\frac{ \partial c}{\partial G_{jk}}\frac{\sigma_{i}^{*}}{c}.\]
The value \(c\) is the balanced equity in each neighborhood from Theorem 1(a). The proof is based on the matrix calculus expression
\[\frac{\partial\mathbf{G}(t)^{-1}}{\partial t}=-\mathbf{G}(t)^{-1}\frac{ \partial\mathbf{G}(t)}{\partial t}\mathbf{G}(t)^{-1} \tag{5}\]
for the derivative of the inverse of a matrix.
The result provides a fairly explicit expression for the impact of changing a link on equity allocations. However, calculating the change \(\frac{\partial c}{\partial G_{jk}}\) in \(c\) may be difficult under the residual profits objective, where the total amount of equity allocated can change as we strengthen a link. We can be more explicit under the success probability objective, since the total amount of equity allocated
Figure 2. Ten agent unweighted graph with each agent connected to all other agents except the diametrically opposite one.
sums to 1. In that case, we have an explicit expression
\[c=\frac{1}{\mathbf{1}^{T}\widetilde{\mathbf{G}}^{-1}\mathbf{1}}\]
for the total equity in each neighborhood. Differentiating this expression gives a version of Proposition 4 without an unknown value \(c\).
Under either objective, the ratio \(\sigma_{i}^{*}/\sigma_{i^{\prime}}^{*}\) between two agents' shares is independent of \(c\), so (5) lets us calculate the change in this ratio (under either objective) as the link between \(j\) and \(k\) is strengthened.
The matrix inverse \(\mathbf{G}^{-1}\) need not vary monotonically as \(\mathbf{G}\) changes. This implies that equity centrality need not satisfy monotonicity properties that hold for standard centrality measures such as Bonacich centrality. As an illustration, we return to our example of three-agent networks from Section 4. Details are again deferred to Appendix B.
**Example 1** (**continued**).: Recall that we normalized \(G_{12}=1\). We will vary the weight \(G_{23}\) over the interval \((1-G_{13},G_{13})\). We will further suppose \(G_{13}>\frac{1}{2}\); under these conditions, there is a unique optimal allocation and all three agents are active under this allocation.
We begin by studying the effect on optimal shares \(\sigma_{i}^{*}\)--or, in other words, equity centralities. Under the success probability objective, we show these centralities can be non-monotonic in an agent's own links. There exists a threshold \(g^{*}\in(1-G_{13},G_{13})\) such that for \(G_{23}\in(1-G_{13},g^{*})\), increasing the weight \(G_{23}\) decreases the share \(\sigma_{2}^{*}\) allocated to agent 2. So strengthening one of an agent's links can decrease his share of output under the optimal allocation. Intuitively, as \(G_{23}\) is strengthened, the principal would like to increase agent 3's shares, and initially is willing to do so at the expense of agent 2. (When \(G_{23}\in(g^{*},G_{23})\), meanwhile, increasing this weight decreases the share \(\sigma_{1}^{*}\) allocated to agent 1.)
Under the residual profit objective, comparative statics are more challenging because of the additional choice of how much equity to allocate (corresponding to the last term of the formula in Proposition 4). We turn to a numerical example to illustrate that non-monotonicities like those discussed above can nevertheless continue to be present. Figure 3 shows the optimal equity shares and the corresponding equilibrium payoffs as we vary the link weight \(G_{23}\), under parameter values specified in the caption. Figure 2(a) depicts the optimal equity allocation of each agent as a function of \(G_{23}\). The equity allocation is non-monotonic in own links: increasing \(G_{23}\) initially decreases agent 2's equity, mirroring our analytical result described above.
The numerical example also illustrates a corresponding non-monotonicity in payoffs: strengthening one of an agent's links can decrease his equilibrium payoff under the optimal contract. Figure 2(b) depicts the equilibrium payoffs under the optimal equity allocation as a function of \(G_{23}\). Strengthening the link between agents 2 and 3 can _decrease_ the resulting payoffs for agents 1 and 2. This contrasts with the standard network games intuition: under a fixed equity allocation, all agents' payoffs are monotone in the network. In the present setting, however, agent 2 can benefit from weakening one of his links. This suggests a tension between the network formation incentives of the
principal and the agents. Agents may not be willing to form links that would benefit the principal or the team as a whole, even if link formation is not costly.
We next look at how team performance under an optimal allocation varies as the network changes. Recall that \(Y^{*}\) denotes the equilibrium team performance under an optimal allocation. Then \(\frac{\partial Y^{*}}{\partial G_{ij}}\) is the change in this team performance as the weight on the link between agent \(i\) and \(j\) increases.
**Proposition 5**.: _Suppose \(\boldsymbol{\sigma}^{*}\) is an optimal allocation. Then the change in equilibrium team performance as \(G_{ij}\) varies can be expressed as_
\[\frac{\partial Y^{*}}{\partial G_{ij}}=\sigma_{i}^{*}\sigma_{j}^{*}h,\]
_where \(h\) does not depend on the identities of \(i\) or \(j\)._
The proposition says that the increase in output from strengthening a link is precisely proportional to the product of the equity shares given to the two agents connected by that link. The proof gives an explicit formula for the quantity \(h\), which depends on the model parameters and the allocation.
The proposition has implications for a designer who can make small changes in the network of complementarities. If the principal could marginally strengthen some links, she would want to focus on links between pairs of agents with high equity centralities. This is consistent with the intuition from Section 4 that highly connected groups of agents are especially productive under equity compensation.
### Varying complementarities
We now turn to how outcomes change as the complementarity parameter \(\beta\) increases. Recall from Proposition 2 that ratios of optimal shares do not depend on the value of \(\beta\). But under the residual profits objective, we can ask how the total fraction of shares allocated to agents depends on \(\beta\).
We study the comparative static in the special case when \(P(\cdot)\) is linear in the range of feasible team performances. We assume for simplicity that the optimal allocation is unique, but could easily relax this assumption. The principal faces a trade-off between keeping a larger share of the profits and using a larger share to encourage workers to exert more effort. The following result states that when complementarities in production are larger, it is optimal to keep a smaller share of a larger pie.
**Proposition 6**.: _Suppose that \(P(Y)=\alpha Y\) on an interval \([0,\overline{Y}]\) containing the equilibrium team performance under any feasible allocation and that there is a unique optimal allocation \(\boldsymbol{\sigma}^{*}\). Under the residual profits objective, the sum of agents' equity shares under the optimal allocation is increasing in \(\beta\), i.e.,_
\[\frac{\partial\left(\sum_{i\in N}\sigma_{i}^{*}\right)}{\partial\beta}>0.\]
Figure 3. The optimal share allocation and resulting equilibrium payoffs as a function of the weight \(G_{23}\). We work with the residual profits objective. Here \(G_{13}=0.8\) and \(\beta=0.1\), while \(P(Y)=\min\{0.9Y,1\}\) (the kink is not relevant for the principal's problem). In both diagrams, the curve corresponding to agent 1 is the topmost (solid blue) one; the curve corresponding to agent 2 is the second from the top (dashed red); and the curve corresponding to agent 3 is the lowest (dotted orange) one.
The basic idea behind the proof is that the benefits to retaining more of the firm are linear in the output while the benefits to allocating more shares to workers are convex, and become steeper as complementarities increase.
If \(P(Y)\) is strictly concave, there is a trade-off between the concavity of \(P(Y)\) and the convexity of \(Y(\mathbf{a})\). Depending on which effect is stronger, the fraction of shares allocated to agents may increase or decrease as complementarities grow stronger.
## 6. Discussion
### The balance condition in more general environments
Our characterization of the optimal equity allocation relies on several features of our model. In particular, we assume quadratic functional forms for agents' utility and the joint output and assume that heterogeneity across agents arises only from their different network positions. If these assumptions are relaxed, the balanced neighborhood equity result will no longer hold exactly. Nevertheless, the key insight behind the result is more general: optimal incentives favor balancing the spillover effects of incentivizing higher actions.
In general, the principal will trade off the benefits of such balance with other concerns that could be introduced to the model. For example, if agents' individual returns to effort are heterogeneous, a trade-off arises between balancing spillovers and allocating equity to the most individually productive agents. A more complicated balance condition would then be relevant.
Nevertheless, the forces underlying our main results would remain relevant, and would be the dominant ones in some cases--for instance, in the limit as the spillovers grow large. More precisely, consider extending our model to allow heterogeneous returns \(b_{i}>0\) to individual effort, so that team performance is
\[Y(\mathbf{a})=\sum_{i\in N}b_{i}a_{i}+\frac{\beta}{2}\sum_{i,j\in N}G_{ij}a_{i }a_{j}.\]
Then our main results continue to hold in the limit \(\beta\to\infty\). Intuitively, when complementarities can be sufficiently large, it becomes much more important to exploit those complementarities optimally (which requires balance) than to exploit the heterogeneity in individual productivities.
### Tightly-knit teams
Our extensive margin results can be summarized as saying that the principal prefers to concentrate equity in teams whose members have strong mutual complementarities. In terms of interpretation, this need not mean that the agents involved work closely together or are nearby in an organizational sense--just that their efforts are highly complementary in producing the output.
The details of the extensive margin results depend on the specific structure of our model, but we believe the economic intuition underlying these results has broader implications. When the principal motivates an agent by giving him a larger share of the equity in a single common output, it dilutes the equity of the others. Strong complementarities among those getting equity shares countervail this dilution, and this is what makes tight-knit teams valuable to the principal. As we remarked in our discussion of the model's assumptions, in reality a principal may have signals
of effort richer than we have studied--for example, outcomes reflecting contributions of specific organizational units. It may be interesting to study how a principal would optimally use multiple signals of this sort to provide incentives to a networked team.
### A tension between the principal's interests and agent welfare
Example 2 shows that the principal can be indifferent between active sets, and associated allocations of equity, that have very different welfare implications for workers. In that example, since the cost of effort is convex, agents are better off (on average) if the same performance is achieved by a larger team. But the principal is indifferent between two different team sizes. If the complementarities are perturbed slightly to strengthen those in some maximum clique, then the principal's indifference is broken and she has a strict preference for the compensation scheme that motivates a smaller team--and which happens to leave workers substantially worse off.
This is a consequence of the fact that, subject to paying out a certain total share in equity compensation, the principal is maximizing the probability of a project's success rather than utilitarian welfare. The mechanism of equity pay can do a very poor job of transmitting workers' interest in a more equal distribution of effort. This highlights an interesting tension between welfare and the principal's preferred mode of incentive-provision, and the binary-outcome model we work with brings it out particularly sharply.
### Connection to a spectral radius maximization problem
The optimal allocation turns out to have a simple description in terms of a problem of maximizing a spectral radius: if \(\boldsymbol{\sigma}\) solves the success probability optimization problem, then \(\boldsymbol{\Sigma}=\operatorname{diag}(\boldsymbol{\sigma})\) also maximizes the spectral radius \(\rho(\boldsymbol{\Sigma}\mathbf{G})\) among nonnegative vectors \(\boldsymbol{\sigma}\) summing to \(1\).10 To show this, we show that when \(\beta\) is large enough (so that very large spillovers are possible), the principal wants to choose shares \(\boldsymbol{\sigma}\) inducing a large spectral radius to capture these spillovers. By Proposition 2, the optimal allocations do not depend on \(\beta\), so in fact such a \(\boldsymbol{\sigma}\) is optimal for any \(\beta\). We formally state and prove the connection in Appendix C.
Footnote 10: The spectral radius of a matrix, which we denote by \(\rho(\cdot)\), is the largest magnitude of an eigenvalue of the matrix.
An applied mathematics literature discusses spectral radius maximization problems of this form (e.g., Elsner and Hadeler (2015), Nesterov and Protasov (2013), and Axtell, Han, Hershkowitz, Neumann, and Sze (2009)). Most closely related, (Elsner and Hadeler, 2015) consider the same spectral radius maximization problem and discuss algorithms for efficiently computing the optimal diagonal matrix \(\boldsymbol{\Sigma}\). Our analysis turns out to provide several insights into this problem. For instance, Theorem 2 implies a characterization of the highest achievable spectral radius when \(\mathbf{G}\) is the adjacency matrix of an unweighted network, showing that it is achieved by a dividing shares \(\sigma_{i}\) equally among the members of a maximum clique.
|
2301.07506 | Exploring the Temporal Variation of the Solar Quadrupole Moment J2 | Recently, Rozelot & Eren pointed out that the first solar gravitational
moment (J2) might exhibit a temporal variation. The suggested explanation is
through the temporal variation of the solar rotation with latitude. This issue
is deeper developed due to an accurate knowledge of the long-term variations in
solar differential rotation regarding solar activity. Here we analyze solar
cycles 12-24, investigating the long-term temporal variations in solar
differential rotation. It is shown that J2 exhibits a net modulation over the
13 studied cycles of approximately (89.6 +- 0.1) yr, with a peak-to-peak
amplitude of approximately 0.1 x 10-7 for a reference value of 2.07 x 10-7).
Moreover, J2 exhibits a positive linear trend in the period of minima solar
activity (sunspot number up to around 40) and a marked declining trend in the
period of maxima (sunspot number above 50). In absolute magnitude, the mean
value of J2 is more significant during periods of minimum than in periods of
maximum. These findings are based on observational results that are not free of
errors and can be refined further by considering torsional oscillations for
example. They are comforted by identifying a periodic variation of the J2 term
evidenced through the analysis of the perihelion precession of planetary orbits
either deduced from ephemerides or computed in the solar equatorial coordinate
system instead of the ecliptic coordinate one usually used. | Saliha Eren, Jean-Pierre Rozelot | 2023-01-18T13:24:34Z | http://arxiv.org/abs/2301.07506v1 | ###### Abstract
###### Abstract
Recently, Rozelot & Eren pointed out that the first solar gravitational moment (\(J_{2}\)) might exhibit a temporal variation. The suggested explanation is through the temporal variation of the solar rotation with latitude. This issue is deeper developed due to an accurate knowledge of the long-term variations in solar differential rotation regarding solar activity. Here we analyze solar cycles 12-24, investigating the long-term temporal variations in solar differential rotation. It is shown that \(J_{2}\) exhibits a net modulation over the 13 studied cycles of \(\approx\)(\(89.6\pm 0.1\)) yr, with a peak-to-peak amplitude of \(\approx\)\(0.1\times 10^{-7}\) for a reference value of \(2.07\times 10^{-7}\)). Moreover, \(J_{2}\) exhibits a positive linear trend in the period of minima solar activity (sunspot number up to around 40) and a marked declining trend in the period of maxima (sunspot number above 50). In absolute magnitude, the mean value of \(J_{2}\) is more significant during periods of minimum than in periods of maximum. These findings are based on observational results that are not free of errors and can be refined further by considering torsional oscillations for example. They are confronted by identifying a periodic variation of the \(J_{2}\) term evidenced through the analysis of the perihelion precession of planetary orbits either deduced from ephemerides or computed in the solar equatorial coordinate system instead of the ecliptic coordinate one usually used.
Solar physics (1476); Solar activity (1475); Solar rotation (1524); Sunspots (1653); Gravitation (661); Fundamental parameters of stars (555); Equatorial coordinate system (467); Ecliptic coordinate system (445); Solar evolution (1492); The Sun (1693); Solar motion (1507) +
Footnote †: journal: 942:90 (7pp), 2023 January 10
+
Footnote †: journal: 942:90 (7pp), 2023 January 10
+
Footnote †: journal: 942:90 (7pp), 2023 January 10
## 1 Introduction
It has been suggested by Dicke (1976), an astronomer at Princeton (USA), as early as the 1970s that the measured excess of solar oblateness over the oblateness due to the surface rotation alone might be due to the existence of a solar gravitational moment that in turn, could be due to a rapidly rotating solar core. Note that this thesis has been (re)brought up to date by Fossat et al. (2017) without any consideration, which could be drawn so far on the surface oblateness.
Let us recall that, in a spherical harmonics expansion in \(n\),\(m\) (\(n\), order; \(m\), mode) of the gravity potential outside a star, the gravitational moments are determined by the tesseral coefficients \(c_{mn}\) falling off inversely as the cube of the distance from the star's center. Because the Sun is essentially symmetric about its rotation axis \(m=0\); thus the second-order \(n=2\), or zonal coefficient \(c_{2,0}\), determines the quadrupole moment. As \(c_{2,0}\) is always negative, by convention and simplification, \(J_{2}\) is taken to be \(-c_{2,0}\) (note that \(J_{2}\) is the dynamical flattening and not the solar oblateness as it is sometimes written; see also footnote 1 in Pireaux & Rozelot 2003).
For a very long time, \(J_{2}\) has hardly attracted interest due to two significant facts: on the one hand, its order of magnitude is very faint, and on the other hand, it cannot be measured directly; models are required. On the first point, Pireaux & Rozelot (2003) assigned \(J_{2}\) to be \(\approx\)(\(2.0\pm 0.4)\times 10^{-7}\), a range of values now commonly accepted, sometimes slightly revised as \(\approx\)(\(2.2\pm 0.4)\times 10^{-7}\). On the second point, several indirect observations have been proposed. Among them, let us quote Armstrong & Kuhn (1999), who explored rotation models that smoothly match the observed surface rotation and interior measurements deduced from the helioseismic interior rotation. Other different methods of theoretical calculations have been advanced; one is to express the distortions of the solar shape under the assumption of a slow rotation (i.e., when the centrifugal acceleration is slight compared to the gravitational acceleration) and where all solar structure quantities are described in terms of perturbations (expanded based on Legendre polynomials) of the spherically symmetric nonrotating star. The gravitational moments \(J_{2n}\) are thus obtained assuming the continuity of the gravitational potential at the solar surface (see, i.e., Equation (3) in Lefebvre et al. 2007 or Equation (17) in Mecheri et al. 2004). In this formulation, the \(J_{2}\) determination represents solely the purely gravitational contribution, which is likely not entirely correct. Another less common method is considering a rotating star's equilibrium formation. Under the assumption of hydrostatic equilibrium, the body's shape is spheroidal in response to the self-gravitational and centrifugal potentials. Hence the shape is defined by the angular spin velocity and the radial density profile. Thermodynamical parameters render the analytical treatments complicated but possible when ellipticities are close to zero (generally associated with states of low rotation as in the solar case).
An alternative indirect approach is to access \(J_{2}\) by analyzing the orbits of planets and asteroids of the solar system. For many years, the accuracy of the ephemeris of such bodies has been incredibly improved, and numerical solutions lead to determining \(J_{2}\) by a postfit residual minimization. However, it is not simple to reach this goal because of the interplay between the effects of the solar multipolar moments with those induced by the post-Newtonian gravito-electromagnetic forces (Lense-Thirring effect; see Iorio 2018, and Section 5).
We are here interested in the \(J_{2}\) variation with time, a dependence that has been hardly studied until now. We comprehend that our analysis is based on observations, which are subject to errors; the results will inevitably be affected. However, they do provide indications of the long-term behavior of \(J_{2}\).
## 2 Evolution of \(J_{2}\) over the Solar Cycles 12-24
Today the question of the temporal dependence of the gravitational moments \(J_{n}\) (and so, \(J_{2}\) at first) is not determined as (i) observations are at the cutting edge of the techniques and (ii) the mapping of the surface magnetic fields, which could produce a supplementary shape distortion (or not) due to the rotation, is not known with sufficient accuracy to be accurately modeled. The same approach goes for other factors that may be sensitive, such as turbulent pressure, shear effects, or other stresses, and would contribute to affecting the solar shape. However, contemporary measurements of the solar figure made utilizing the MDI-SOHO experiment (Scherrer et al., 1995) or by the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory (Scherrer et al., 2012) indicate a temporal variability of the asphericities' coefficients (see, i.e., Emilio et al., 2007; Kuhn et al., 2012, and Kosovichev & Rozelot, 2018). Even if the contribution of the solar limb shape through these parameters is a few percent due to the gravitational moments, it turns out that temporal variability is expected. Indeed, determining their order of magnitude this way requires very high sensitivity methods.
Helioseismology provided the premise for a variation of the gravitational moments associated with the solar cycle. This has been highlighted by Antia et al. (2008), who found an amplitude modulation of less than 0.04% for \(J_{2}\) over the time range 1996-2006. However, such a tiny modulation has not been confirmed so far.
\(J_{2}\) was here computed as usual by setting the gravity field. As we wanted only to highlight the temporal dependence, we determined \(J_{2}\) at the latitude at which the rotational gradient (\(\partial\log(\omega)/\partial\log(r)\)) is reversing, passing from negative to positive values. Indeed, the (logarithmic) average gradient in the outer 15 Mm or so is close to \(-1\) and is quite independent of latitude below 30\({}^{\circ}\); between 30\({}^{\circ}\) and 50\({}^{\circ}\) latitude, it is still negative but makes a transition to absolute value at 56\({}^{\circ}\) of latitude (Corbard & Thompson, 2002).3 At this specific latitude, the centrifugal force that affects the solar shape can be derived from the potential as the observed surface rotation is very similar to the equatorial excess. This simplifies the calculations without loss of generality, bearing in mind that the geodetic parameter \(q\!=\!(\omega^{2}R^{3})/GM\) remains a small quantity, albeit latitude dependent. Taking \(M_{\odot\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
(\(p=0.07\)); we find the following probabilities for \(J_{2}\) to be in the range of \([1.95\)-\(2.1]\times 10^{-7}\): \(0.71\) (\(J_{2\,\mathrm{max}}\)), \(0.43\) (\(J_{2\,\mathrm{min}}\)), and \(0.93\) (\(J_{2\,\mathrm{whole}}\)). The probability rises to \(0.71\) for \(J_{2\,\mathrm{min}}\) to be in the range \([1.95\)-\(2.30]\times 10^{-7}\). These last findings give some confidence in the results.
The inspection of the three plots in Figure 1 (TM, Tm, and the whole data) according to the date reveals a net sinusoidal oscillation whose periods are, respectively, \(P_{\mathrm{RM}}=(89.5\pm 0.2)\) yr, \(P_{\mathrm{rm}}=(80.3\pm 1.5\) yr), and (\(P_{\mathrm{whole}}=89.6\pm 0.1\)) yr, with significant correlation coefficients: \(r=0.45\), \(0.70\), and \(0.61\). The weighted average (by errors) of the found period is thus \((89.56\pm 0.09)\) yr. The largest amplitude of the modulation is \(\approx\)\(0.1\times 10^{-7}\) (peak to peak). The uncertainties on the amplitude of the sinusoidal fits are, respectively, about \(2.0\times 10^{-8}\), \(2.3\times 10^{-8}\), and \(2.1\times 10^{-8}\).
An outlier can be noticed for cycle 24 (\(2.47\times 10^{-7}\)) due to the value of \(A\) in Javaraiah (2020)'s Table 2:14.384 (degree per day), which is not logical. If we remove this point, the correlation coefficient becomes \(0.65\) (without affecting the period). However, we keep it, and this remark on this outlier point stands for the remainder of this paper.
Over the studied temporal span (solar cycles 12 to 24), the three detected periodicities show that the oscillation would be somewhat shorter during periods of solar minimum. If such an issue proves to be accurate, we will try to explain it in Section 5, but suggesting (if the values found are significantly different) that \(J_{2}\) could be more time sensitive with periods of minimum solar activity. The following sections will attempt to check such findings.
As a partial conclusion, considering the errors in the experimental parameters, we may infer a periodic oscillation of the first gravitational moment of about eight solar cycles, which roughly corresponds to the so-called Gleissberg cycle (Gleissberg 1939), generally taken to be equal to \(\approx\)\(87\) yr (or eight solar cycles).
### Analysis versus the Periods around the Solar Maxima and Minima
Figure 2 depicts the \(J_{2}\) behavior with a date of around the 3 yr, enclosing the maximum (left) and the minimum (right). A very slight declining trend can be noticed also in the second case that we did not plot as almost identical to the mean. Here, \(J_{2\,\mathrm{min}}\) does not significantly differ from the mean. The presence of the outlier does not significantly influence this result; when removed, a slightly positive trend emerges.
The situation is different around the maximum: a net negative trend (\(r\!=\!-0.46\)) for a mean estimated at \(2.00\times 10^{-7}\). In any case, the \(J_{2}\) magnitude seems more significant in the period of minimum activity than in the period of maximum, a situation that we will find again by studying the \(J_{2}\)'s dependence on the sunspot number (Section 3.3).
To partially conclude, this analysis shows that \(J_{2}\) behaves differently during the various phases of the solar cycle.
Figure 1: Solar quadrupole moment \(J_{2}\) as a function of time (1872 to 2010), considering the mean epoch of each solar cycle 12 to 24 (bottom). Top left, at the epochs around the maximum of solar activity TM, then, top right, at the epochs around the minimum Tm. A clear periodic oscillation is visible on all the data, evidencing a temporal dependence of the first gravitational moment, of around (\(89.6\pm 0.1\)) yr. For the sake of clarity, error bars have been multiplied by 20.
Figure 4.— Solar quadrupole moment \(J_{2}\) as a function of the solar activity (1872 to 2010), described by the sunspot numbers. Top left, for small sunspot numbers (0–40), the trend is slightly positive: \(r=+0.12\). Top right, for higher sunspot numbers (80–200), the trend is clearly negative: \(r=-0.62\). For the sake of clarity, error bars have been multiplied by 20.
Figure 3.— Solar quadrupole moment \(J_{2}\) as a function of the solar activity (1872 to 2010) for the whole data available. Although the general trend seems to be negative (dotted line), the first solar quadrupole moment seems to follow two regimes: during periods or minimum activity (sunspots number 0–40, left leg), \(J_{2}\) is positively correlated, while during periods of maximum of activity (80–200, right leg), the trend is negative. For the sake of clarity, error bars have been multiplied by 20.
Figure 2.— Solar quadrupole moment \(J_{2}\) vs. the solar activity analyzed during the 3 yr span time around the periods of maximum (RM; left) and minimum (rm; right). During periods of solar minima, \(J_{2}\) does not significantly differ from the mean (\(2.06\times 10^{-7}\), thin line), which is of the same order of the \(J_{2}\) reference value. During periods of solar maxima, \(J_{2}\) shows a marked negative trend (\(r=0.46\)). For the sake of clarity, error bars have been multiplied by 20.
### Analysis versus the Solar Activity Described by the Sunspot Numbers
In Figure 3, we present the results for all the data available according to the sunspot number. The \(J_{2}\) temporal evolution depicts a somewhat more complex behavior. Although the overall trend is negative when plotting the whole data as a function of the sunspot number, it appears a positive trend for small values (0-40) (marked (1)) and a negative trend for larger values (80-250) (marked (2)). Splitting the data into these two series yields Figure 4 showing how the solar quadrupole moment \(J_{2}\) evolves as a function of the solar activity (1872 to 2010). Left, for small values of the sunspot numbers, during the 3 yr of the minimum (rm: 0-40), the trend is slightly positive (\(r=+0.12\)). Right, for higher sunspot numbers (RM: 80-200), around the maximum 3 yr span, the trend is negative (\(r=-0.48\)).
If the obvious outlier is removed in Figure 4 (left, i.e., \(2.47\times 10^{-7}\)), the correlation coefficient jumps to \(r=+0.69\). Thus, the identification of the two regimes is well highlighted.
Together with the 89 yr period, a shorter one (a subharmonic) is detected in the three plots shown in Figure 1, respectively, (max-min-whole): \(8.80\pm 0.20\), \(8.53\pm 0.11\), and \(9.20\pm 0.10\) yr, with correlation coefficients of 0.67, 0.51, and 0.44. The weighted average is thus \(8.89\pm 0.07\) yr. (Note that the periodicity found during the period of minimum is less than the two others.) This issue could be related to the oscillations of short timescales ranging from \(\approx\)0.5 yr to \(\approx\)9 yr (short-term variations of \(\approx\)155 days are called Rieger periodicities, more frequently found in epochs of maximum activity). The longer periodicities are related to the quasi-biennial oscillation (QBO), whose lengths are poorly defined while their amplitudes are modulated by the solar cycle. Their signatures have been seen in solar rotational rate residuals at near-surface depths (Inceoglu et al., 2022), and it has been shown that their relative amplitude is highly correlated with sunspot number.
They also suggest that the amplitude of the QBO in both frequency shift data and solar activity proxy data scales with the activity of the solar cycle. We drew such an inference in our results. Explanations have been put forward; among them, Jupiter, Saturn, Uranus, and Neptune) could influence the solar differential rotation rate and hence, the strength of the solar dynamo (Javaraiah, 2020; de Paula et al., 2022; Zioutas et al., 2022). We are wondering if our findings could not highlight any relation of \(J_{2}\) with such issues, as well as on long timescales of the solar cycles, and even in shorter ones, as detected, for instance, in the variability of the sunspots N-S asymmetry activity (see for instance de Paula et al., 2022): "the 7.0-07.9 yr periodicity could not be an artifact but a real signal in the solar N-S asymmetry and only its fluctuating apparition along the time can explain why it evaded its manifestation in the analysis of other preceding works. Future solar models should integrate this period."
## 4 Discussion
Is the differential rotation law on which this study is based sufficiently valid? As sunspots only cover a rather small range of latitudes, it can be argued that the overall surface differential rotation is better described by a law of type \(\omega(\theta)=A+B\sin^{2}(\theta)+C\sin^{4}(\theta)\). Based on magnetogram data obtained at Mount Wilson (USA), Howard et al. (1980) determined these coefficients \(A\), \(B\), and \(C\). They suggested that a strong correlation between \(B\) and \(C\) might occur, albeit some authors consider it spurious. They also showed that a simple linear relationship between three coefficients can be constructed (orthogonal functions), eliminating the crosstalk among the coefficients and for rotation, providing a convenient set of functions (Gegenbauer polynomials) for separating modes in torsional oscillations. At first sight, Howard's law seems preferable. However, the Mount Wilson survey was performed only over the years 1973-1977, i.e., over only 4 yr, which seems insufficient for our purpose. A reanalysis of the data in 1984 (Howard et al., 1984) covered the years 1921-1982, which is 61 yr and would be better. But they determine a law in only \(A+B\sin^{2}(\theta)\), which however fits well the observations. Snodgrass and Howard (1985) established again a law in \(A+B\sin^{2}(\theta)+C\sin^{4}(\theta)\), using Gegenbauer polynomials, for the years 1967-984, i.e., 17 yr covering hardly two solar cycles, SC 20 and SC 21. Sometime later, Snodgrass and Ulrich (1990) reexamined this law over 20 yr (from 1967 to 1987), also considering the Gegenbauer polynomials. Thus, this analysis covers approximately the two same solar cycles (SC 20 and SC 21), which is again a bit limited for our purpose. All these reasons led us to a first approach to consider only Javaraiah's law (2020), essentially also because the coefficients \(A\) and \(B\), which are time-dependent, are tabulated over 13 solar cycles, i.e., 137 yr, which moreover makes the detected period significant.
However, it seemed interesting to compare \(J_{2}\) as deduced from Howard's law (1980) considering \(A\), \(B\), and \(C\), which gives \(2.18\times 10^{-7}\). This is not fundamentally different from our reference value \(2.07\times 10^{-7}\), and is in the error limits \((2.1\pm 0.4)\times 10^{-7}\)(Pireaux and Rozelot, 2003). Note that the results obtained by Snodgrass and Ulrich (1990), determined by Doppler features, are about 4% higher than those deduced from Mount Wilson spectroscopic observations, and would lead to a higher \(J_{2}\) estimate (\(>\)\(2.88\times 10^{-7}\)). It can be assumed that the visualization of the curves obtained by this new method, at least to first order, would only differ from those obtained in this paper by a simple translation in ordinates.
Regarding torsional oscillations, as seen just before, an \(A+B\sin^{7}(\theta)\) law does not capture them very well. These oscillations, which periodically speed up or slow down the rotation in certain zones of latitude, mainly accentuated at high latitudes (\(>\)62\({}^{\circ}\), while elsewhere the rotation remains essentially steady), could probably be considered later because modern methods covering the whole solar disk are not yet available for the timescale of the Gleissberg cycle. For the time being, such an analysis is clearly beyond the scope of this study, for which we only wanted to show that \(J_{2}\) might be temporally dependent (or not).
Figure 1 shows a different behavior at minimum (or more precisely around the minimum, and not necessarily during the whole minimum) and at maximum (or more precisely around the maximum, and not necessarily during the whole maximum). Such a complex behavior has already been detected by Emilio et al. (2007) from MDI measurements on SOHO between 1997 (period of minimum) and 2001 (period of maximum), suggesting that the outer solar atmosphere expands nonhomologously during the cycle. This result was also found by Rozelot et al. (2009a) who moreover showed that there could be a change in the relative importance of the hexadecapolar term and the dipolar one in the course of the
activity cycle. In times of high activity, only the first moment has a significant contribution, but in times of low activity, the second one is predominant. This could be interpreted also as a periodic angular momentum exchange between the photosphere and deeper layers of the convection zone. Such a complex mechanism takes certainly its root in the leptocline (for the definition see footnote 6 in the abovementioned paper and Figure 6 in Rozelot et al. 2009b); a zone in which it has been shown that the rotational gradient is nonconstant, which might explain also our results.
Another point is raised about possible changes in the internal rotation model that do not necessarily appear in the surface rotation data. Recent 3D simulations (Kitiashvili et al., 2022) supported by observations have shown that the rotational effects in solar subsurface convection produce the formation of rotational shear and meridional circulation at midlatitudes. The structure of this near subsurface layer is "not uniform but contains a sharp shear layer in the top \(\approx\)8 Mm," which has been identified as a leptocline (see also Li et al. 2022) in which radial variations of the differential rotation occurs, contributing obviously to more complex rotational laws.
All these issues will be addressed in later works.
## 5 Conclusion
The long-term variations in solar differential rotation revealed that the first gravitational moment \(J_{2}\) seen from the surface distortion is variable. The current temporal evolution for the last 138 yr (2010-1872) shows a periodic modulation of about (89.6 \(\pm\) 0.1) yr, of \(\approx\)0.1 \(\times\) 10\({}^{-7}\) modulation amplitude.
If such a period of \(\approx\)8 solar cycles is highlighted, it could then be associated with the Gleissberg period.
We have identified that \(J_{2}\) seems to follow two regimes. In the period of minimum solar activity, the mean magnitude order of \(J_{2}\) is around the reference mean and shows a positive trend with increasing sunspot numbers from 0 up to around 30-40. By contrast, in the period of maximum solar activity, the mean magnitude order of \(J_{2}\) is less than the reference mean and shows a declining trend with increasing sunspot numbers from around 40 (up to 200 and more).
We want to emphasize the importance of determining \(J_{2}\) with great precision, as this parameter also plays a vital role in relativistic astrometry and relativistic celestial mechanics (Rozelot and Fazel, 2013). In this last paper, \(J_{2}\) was compiled from several observations since 1877, with modern observations starting from 1966. It already inferred a dynamical flattening, seeming unlikely at that time but a bit more realistic in light of this study.
Let us briefly comment on the two fields mentioned above while avoiding confusion between the potential temporal variation of the solar shape and estimating it together with Gravitational Rotation (GR) parameters.
First, regarding the parameterized post-Newtonian (PPN) modern theories, it has been shown that the PN parameters \(\beta\) (which encodes the amount of nonlinearity in the superposition law of gravitation) and \(\gamma\) (which encodes the amount of curvature of spacetime per unit rest-mass) are linked to the solar quadrupole moment \(J_{2}\) through a linear relation. Even though it would be possible to extract \(J_{2}\) from planetary ephemerides in principle, it is significantly correlated with other solution parameters (semimajor axis of planets, the mass of asteroids...). Focusing on the \(J_{2}\) correlations, Rozelot et al. (2022) have found that, in general, the correlations [\(\beta\), \(J_{2}\)] and [\(\gamma\), \(J_{2}\)] are \(\approx\)45% and \(\approx\)55%, respectively. In this respect, the contribution of the quadrupole competes with that of PN parameters of the order of 10\({}^{-2}\). The effect of a change in \(\beta\) can be distinguished from a change in \(\gamma\); a determination of other significant figures of \(J_{2}\) is equally essential to be able to say anything significant about the PN parameters (Sebastian et al., 2022).
The situation could be improved with additional spacecraft measurements, but it remains challenging. We are still waiting for results from space missions for which precision at a level of 10\({}^{-8}\) (or more) is expected; thus, it can be awaited to highlight a \(J_{2}\) temporal dependence. In this context, in exploring the available \(J_{2}\) values deduced from the precession of Mercury's perihelion along the orbit plane due to the Sun's quadrupole moment, we have shown its possible variation with time (Rozelot and Eren, 2020).
Figure 5 shows the solar quadrupole moment \(J_{2}\) deduced from solutions to the planetary motions (especially Mercury), fitted to observational data retrieved from 14 contributions ranging from 1997 to 2019. This figure is extracted from the data used in Rozelot and Eren (2020), Table 1, for which we added two measurements since new values were available in 2017 and 2019 (see data in "Notes Scientifiques et Techniques de l'Institut de Mecanique Celeste", 2017, 2019). The
Figure 5: Solar quadrupole moment \(J_{2}\) deduced from solutions to the planetary motion fitted to observational data permitting to assign estimates to all unconstrained ephemeris parameters. The solid line represents a part of a long period signal of 88 yr (\(J_{2}=2.04\times 10^{-7}\)-\(3.00\times 10^{-9}\) sin(\(2\pi\times\) date/88); \(r=0.8\). For the sake of clarity, the sine function ordinate was multiplied by (16). The dotted line represents the mean: \(2.02\times 10^{-7}\).
computation process permits assigning estimates to all unconstrained ephemeris parameters so that \(J_{2}\) can finally be obtained. It should be noted that even if the sample used, of no more than 12 yr, is much smaller than the one used in this study (137 yr), it appears to have modulation of \(\approx\)88 yr. Note that the sin \(\epsilon\) function \(J_{2}=2.04\times 10^{-7}+3.00\times 10^{-9}\) sin(\(2\pi\times\) date/88) gives a modulation of the amplitude of \(0.06\times 10^{-7}\) (3%) fully compatible with GR.
Regarding the second item, the secular solutions for the oblateness disturbance in consideration of the periodic variation of the \(J_{2}\) term have been studied by Xu et al. (2017) to derive the perihelion precession of Mercury. The results show that the difference in Mercury's perihelion precession between the solar equatorial plane and the ecliptic plane can reach a magnitude of \(126\,708\times J_{2}\), which is even more noteworthy than the perihelion precession itself (\(101\,516\times J_{2}\)). In this context, when a periodic variation of the \(J_{2}\) term is considered, instead of simply a constant, the periodic \(J_{2}\) has an effect of nearly 0.8% of the secular perihelion precession of Mercury. This indicates that a better understanding of solar oblateness is required, which could be done, for instance, through observation in the solar orbits instead of on Earth.
Finally, we would like to point out that the Pioneer's anomalous acceleration could be explained utilizing the observed solar quadrupole moment. Indeed, it generates an acceleration of the same order of magnitude as Pioneer's constant acceleration, within the accuracy range of the observed anomalous acceleration (Quevedo, 2005). Hence the need for greater precision on \(J_{2}\).
In a general conclusion, we have underlined the importance of knowing the temporal variations of the first solar gravitational moment \(J_{2}\) with remarkable accuracy and its changing behavior with the solar cycle that may lead to a better understanding of the physical phenomena involved in the leptocline. It shows that a long periodic oscillation of \(J_{2}\) of the order of the Gleissberg's period, which has never been put in evidence before, with amplitude modulation of the order of less than 5%. Considering the errors inevitably linked to the observations (we detected an outlier in Javaraiah's data, but we voluntarily kept it), the true modulation is certainly lower. It is, however, fully compatible with the GR. The impact of \(J_{2}\) is challenging to better determine its role in solar activity, suggesting some form of correlation between QBOs and even N-S solar asymmetry variability.
The authors thank the referee for valuable remarks made on the article, which led to better discussion of the results.
## ORCID iDs
Saliha Eren[https://orcid.org/0000-0001-7603-2488](https://orcid.org/0000-0001-7603-2488)
Jean-Pierre Rozelot[https://orcid.org/0000-0002-5369-1381](https://orcid.org/0000-0002-5369-1381)
|
2307.14192 | Unveiling Security, Privacy, and Ethical Concerns of ChatGPT | This paper delves into the realm of ChatGPT, an AI-powered chatbot that
utilizes topic modeling and reinforcement learning to generate natural
responses. Although ChatGPT holds immense promise across various industries,
such as customer service, education, mental health treatment, personal
productivity, and content creation, it is essential to address its security,
privacy, and ethical implications. By exploring the upgrade path from GPT-1 to
GPT-4, discussing the model's features, limitations, and potential
applications, this study aims to shed light on the potential risks of
integrating ChatGPT into our daily lives. Focusing on security, privacy, and
ethics issues, we highlight the challenges these concerns pose for widespread
adoption. Finally, we analyze the open problems in these areas, calling for
concerted efforts to ensure the development of secure and ethically sound large
language models. | Xiaodong Wu, Ran Duan, Jianbing Ni | 2023-07-26T13:45:18Z | http://arxiv.org/abs/2307.14192v1 | # Unveiling Security, Privacy, and Ethical Concerns of ChatGPT
###### Abstract
This paper delves into the realm of ChatGPT, an AI-powered chatbot that utilizes topic modeling and reinforcement learning to generate natural responses. Although ChatGPT holds immense promise across various industries, such as customer service, education, mental health treatment, personal productivity, and content creation, it is essential to address its security, privacy, and ethical implications. By exploring the upgrade path from GPT-1 to GPT-4, discussing the model's features, limitations, and potential applications, this study aims to shed light on the potential risks of integrating ChatGPT into our daily lives. Focusing on security, privacy, and ethics issues, we highlight the challenges these concerns pose for widespread adoption. Finally, we analyze the open problems in these areas, calling for concerted efforts to ensure the development of secure and ethically sound large language models.
ChatGPT, Large Language Model (LLM), Security, Privacy, Ethics
## I Introduction
In December 2022, OpenAI released an interactive chat platform, called ChatGPT. Its powerful in-context learning and naturally generation ability created a big shock to the whole world. Before this, there are some powerful large language models (LLM) like GPT [1] or BERT [2], which can perform well on many nature language processing (NLP) tasks, but carefully processed inputs are required in the query procedure. In other words, these machine learning tools can only be used to complete specific tasks with constrained inputs. Nevertheless, ChatGPT has greatly improve this by showing a wonderful interactive ability, which can respond almost any legal questions in different styles or targeting on different tasks. For example, users can request ChatGPT to write a series of codes with comments for each part to address certain problems. ChatGPT can also be used to summarize given texts or provide detailed illustrations for some complex concepts. ChatGPT can provide long but natural responses, which are aligned with human's knowledge. It integrates a variety of ability for NLP and possesses the ability to clarify its knowledge boundary and refuse the illegal queries. Currently, ChatGPT has over 100 million users and there are over 1.6 billions visits in June 2023. ChatGPT has become the most famous AI application and the center of the world's attention.
However, similar to other AI applications, ChatGPT brings ethical concerns and misuse risks. For example, due to its powerful text reasoning and generation ability, students find it very helpful in writing homework. In the beginning, ChatGPT was used to explain some difficult content or rephrase the written project reports, but soon, it was utilized to write the entire homework. Such misuse immediately attracts attentions from the teachers and schools and it was soon identified as plagiarism. Another concern is the copyright from ChatGPT. With increasing number of people who use ChatGPT to create original-like text content without citation, the copyright of the content created by ChatGPT become a serious concern. No one is responsible for the correctness and accuracy of the content. It becomes necessary to regulate the copyright of machine's generation, including both visual and textual content. Besides, although there are privacy protection mechanisms in ChatGPT, such as the block of access to personal data about individuals, it is not guaranteed that no leakage of its training data would occur. Malicious attacks, such as jailbreaking attacks, may utilize its great generation ability to infer some information from personal data or even use them to attack other AI models. Therefore, It is well recognized that despite great advantages ChatGPT brings to our world, the potential security, privacy, and ethical problems cannot be overlooked.
In this paper, we will introduce the security, privacy and ethic issues behind the recent most famous AI technique: ChatGPT. The main contributions of our paper can be summarized as follows.
* We give a detailed introduction of the upgrade path from GPT-1 to GPT-4 and a detailed comparisons of these methods on model size, data size and performances. Their features and limitations with advanced applications are discussed to highlight the promising applications of LLMs, especially for ChatGPT.
* We examine how ChatGPT poses new threats to data security and how ChatGPT is utilized for compromising security. The impact on security mainly includes assisting in the generation of attack codes and assisting in the generation of phishing websites. Both increase attack capabilities of adversaries. Also, we discuss the unintentional impact of inaccurate information generated by ChatGPT, possible safety hazards caused by human misuse, and potential threats to social security brought about by the deep dependence on ChatGPT in the future.
* We examine the privacy policy of OpenAI and the current privacy laws on personal data protection to emphasize the privacy violation of ChatGPT. Also, We discuss the privacy leakage threats brought by numerous data collection, personal input reference, privacy reference
attacks, and the concerns on transparency.
* We analyze the general ethical effects of AI technology on individuals, society and environment and discuss the fairness and bias issues behind AI. For ChatGPT, we summarize the ethical and legal challenges it faces.
* We discuss how to detect whether the communication object is ChatGPT in a conversation, how to detect texts generated by ChatGPT, and introduce some difficulties for such detection.
## II ChatGPT
In this section, we brief review the technical path from GPT-1 to GPT 4 and their features and limitations.
### _From GPT-1 to GPT 4_
In 2018, OpenAI introduced the initial version of the generative pre-trained transformer (GPT) [1], a highly capable large language model for natural language processing. GPT has exhibited exceptional performance across a wide range of complex language tasks, positioning it as a formidable competitor to other similar models such as BERT [2], which was proposed by Google in the same year. Prior to the success of these methods, numerous effective algorithms and remarkable applications had been developed in NLP, including machine translation [3, 4], voice recognition [5, 6], and summary generation [7, 8]. However, these applications heavily relied on extensively annotated data, resulting in time-consuming and expensive model training as the models grew in size. Moreover, even with a well-performing NLP model, its generalization to other tasks remained challenging. Essentially, these models were domain experts limited to specific areas of expertise, lacking the versatility exhibited by human beings in performing diverse tasks. Consequently, there arose a need for a methodology that could be trained without labeled data while possessing superior generalization capabilities across multiple tasks. This need served as the driving force behind OpenAI's development of GPT.
One notable advantage of GPT is its ability to train the model without relying on large annotated datasets. It employs a two-step process: unsupervised pretraining, that is operated on rich text materials and supervised fine-tuning, that is closely connected to the end applications. During the initial unsupervised pretraining phase, GPT employs 12 transformer blocks as decoders. Unlike the original transformer decoders, each block consists solely of mask multi-head attention. The objective of the pretraining process is to predict the subsequent word in a sentence based on the preceding words. Consequently, this unsupervised learning approach only necessitates raw, yet comprehensive, text materials. Following the pretraining, the model proceeds to the supervised fine-tuning phase, which focuses on a specific problem such as sentiment classification. Importantly, this stage requires a significantly smaller annotated dataset compared to the one utilized in the pretraining phase. When applying GPT to different task types, users only need to modify the input format to facilitate fine-tuning. By employing these two distinct processes, GPT no longer necessitates amassing a vast labeled dataset, while still possessing the capacity for generalization across a variety of tasks, all at a reasonable fine-tuning cost. This novel approach presents an outstanding solution to the aforementioned challenges.
While GPT-1 significantly reduced the reliance on labeled data for training, it still necessitates annotations during the fine-tuning phase. In order to further diminish the dependence on labeled data and enhance the model's generalization capabilities, GPT-2 was introduced in 2019 to address these concerns [9]. The core concept behind this groundbreaking algorithm involves a complete shift from supervised learning to unsupervised learning. The knowledge obtained through unsupervised training encompasses fragments of the information required for all supervised learning tasks. By imparting models with common knowledge in a given domain through unsupervised learning, the supervised learning task becomes a mere application of this preexisting knowledge. Essentially, GPT-2 is designed to leverage its vast pretrained knowledge, acquired from a vast dataset with rich materials, to tackle complex tasks. To achieve this objective, the architecture of GPT-2 remains largely unchanged from GPT-1, with the key modifications being an increase in the number of layers and the dataset size. These enhancements aim to imbue the model with the necessary knowledge to proficiently handle a wide range of problems.
The advancements observed from GPT-1 to GPT-2 underscored the potential for enhancing the generalization capabilities of large language models (LLMs) by increasing the model's parameter size and the training dataset. Building upon this concept, GPT-3 was introduced in 2020 as a significantly more powerful LLM [10]. With the largest number of parameters and an extensive training dataset, GPT-3 achieved state-of-the-art performance across numerous NLP tasks. In addition to its increased scale, GPT-3 introduced a novel training paradigm known as in-context learning. Departing from the conventional approach of predicting outputs solely based on queries, this new model is trained to predict outputs by considering both the queries and their corresponding examples. By incorporating contextual examples during the learning process, along with the utilization of extensive training datasets, GPT-3 is capable of acquiring comprehensive knowledge from texts. This approach empowers the model with remarkable generation capabilities, enabling it to deliver exceptional performance in a diverse range of NLP tasks, often comparable to human-level performance.
A brief summary of GPT-1, GPT-2, and GPT-3 is shown in Table.I. It is noticeable that the increasing performance of GPT series models appears with the exponential explosion of parameters. Only in the third year since the creation of GPT-1, the final model has become one of the biggest models in the world, costing 285k CPUs and 10k GPUs to train with 12 million dollars consumption.
Despite the impressive performance of GPT-3, it falls short of being considered truly 'intelligent'. Surprisingly, smaller methods such as T5 [11] have even outperformed GPT-3 in
certain tasks, which is unexpected considering the vastness of its training datasets and the complexity of its parameters. One hypothesis for this discrepancy is that while GPT-3 may have acquired rich foundational knowledge during training, it still struggles with accurately understanding and providing valid responses to user queries. To address this issue, it becomes crucial to enhance GPT-3's reasoning capabilities and its responsiveness to instructions. Recognizing the need for improvement, OpenAI embarked on refining GPT-3's performance through meticulously designed training methodologies. They introduced code-based training [12] and instruction tuning [13] to activate the model's reasoning abilities and its responsiveness to human instructions. The updated version of GPT-3 demonstrates the capacity for more reasonable responses, incorporating complex reasoning, and exhibiting greater generalization power even across unseen tasks. It has speculated that the remarkable capabilities of GPT-3 may have remained latent, requiring specific training techniques to unlock their full potential [14]. OpenAI proposed the use of reinforcement learning from human feedback (RLHF) to further improve the alignment between machine-generated answers and human common knowledge. This pursuit led to the development of ChatGPT, which quickly garnered widespread attention upon its release, capturing the interest of a global audience.
Merely four months following the launch of ChatGPT, OpenAI made an exciting announcement about the release of GPT-4 [15], showcasing a plethora of enhanced generation capabilities compared to its predecessor. GPT-4 introduces several notable advancements. Firstly, it empowers users to engage in more creative and collaborative endeavors such as personalized writing and song composition. The model possesses the ability to learn specific writing styles provided by users, enabling it to generate valuable and natural works, including songs and poems, tailored to individual preferences. Secondly, GPT-4 exhibits a significant boost in reasoning abilities. It delivers more accurate and nuanced reasoning outcomes when faced with complex and lengthy questions. Furthermore, GPT-4 surpasses ChatGPT's performance by a substantial margin in various text benchmarks, even achieving impressive results in simulating exams with significantly higher scores. Moreover, GPT-4 now supports visual inputs, allowing users to input queries comprising both text and visual content. The model can either mimic or emulate the style of the inputs in its responses. The evaluation performances on various visual benchmarks further demonstrate the model's superiority in handling multimodal inputs. Beyond its enhanced generation and reasoning capabilities, GPT-4 also exhibits improved reliability and alignment. OpenAI has implemented RLHF techniques to enhance the model's safety. As a result, GPT-4 is much less likely to respond to illegal requests and more inclined to generate factual and appropriate replies. These advancements in ChatGPT's successor, GPT-4, have generated significant attention and sparked immense interest worldwide. The model's enriched generation abilities, improved reasoning capabilities, support for visual inputs, and heightened reliability have captivated both researchers and the broader community.
### _Features and Limitations_
ChatGPT, despite sharing the same architecture with GPT-3, has overcome several limitations of its predecessors. Firstly, it can now explicitly express when it does not know the answer to questions beyond its knowledge scope. For example, when asked about events happening after 2021 but before the present moment, it responds that it cannot predict future events. Secondly, ChatGPT generates longer and more neutral responses, which align better with human common knowledge. This improvement stems from RLHF training, which favors such responses based on real human preferences. Additionally, ChatGPT can decline to provide a response to queries considered inappropriate or unsuitable. These advancements make ChatGPT more versatile and attuned to user needs.
First, there are instances where it may produce incorrect or unrelated answers. Some responses might contain inaccurate facts or biased perspectives rooted in specific regional domains. Second, retraining the model is costly, which limits its knowledge to datasets before 2021. There is a lack of contemporary training for ChatGPT. Lastly, ChatGPT is limited to providing statements in a dispassionate voice, lacking emotional expressions. In other words, it is still not capable of displaying emotions. Addressing these aspects would bring ChatGPT closer to achieving more human-like conversational abilities.
### _Applications_
One of the most prevalent applications of ChatGPT is as a chat robot or artificial assistant, akin to well-known platforms like Siri or Cortana. The exceptional fluency and rapidity with which it generates responses have attracted millions of users. Whether users seek to unravel complex concepts or delve into theoretical discussions, such as querying "what is Fourier Transform and how to apply it," or to receive personalized health advice based on their physical condition, as exemplified by inquiring about a "proper diet plan for a 65 kg man at 21," ChatGPT consistently delivers captivating, natural, accurate, and helpful responses. As a result, it has emerged as a viable alternative to the prevailing search engines like Bing or Google.
Another significant application lies in ChatGPT's prowess as a code generator. For instance, by describing the image classification task, it can craft a PyTorch code complete with clear explanations, as demonstrated in Figure 1. Remarkably, the responses not only furnish the code but also serve as a
tutorial, elucidating how to construct functional code for the specified task. This remarkable ability indicates ChatGPT's proficiency in comprehending and articulating artificial machine languages. Furthermore, ChatGPT serves as a valuable code debugger. Users can present problematic code, and ChatGPT will offer a comprehensive correction plan, thus proving itself to be a valuable tool for programmers.
A remarkable application of ChatGPT's exceptional text generation capabilities lies in its ability to craft engaging stories or articles. For instance, when prompted with a request like, "please write a story starting with: 'there is a single man left in the world after a great catastrophe,'" ChatGPT can deflty compose a narrative spanning hundreds of words, or even longer if the user allows it. The generated sentences bear an air of originality, making it nearly impossible to find similar content elsewhere. This remarkable composition skill stems from the vast repository of text materials it has assimilated during its training. However, this very proficiency raises some authorship concerns, as numerous users have been directly copying the generated content into their own work, leading to potential issues regarding ownership and authenticity.
## III ChatGPT vs. Security
The security threats raised by ChatGPT exhibit an exponential increase. On one hand, its enhanced intelligence amplifies traditional security threats, making adversaries more adept at exploiting vulnerabilities. On the other hand, ChatGPT introduces new threats to its users and the general public, necessitating heightened vigilance and protective measures.
### _ChatGPT for Cyberattacks_
MML, including ChatGPT may amplifies traditional security threats in the following aspects.
#### Iii-A1 Social Engineering Threat
The advent of LLMs has raised concerns about potential misuse and socially harmful activities that rely on text generation. Applications exploiting high-quality text generation may lower existing barriers and increase the effectiveness of malicious activities [10]. While ChatGPT claims to implement various cybersecurity measures, it is challenging to anticipate all possible scenarios of misuse. ChatGPT may be used to generate URLs, references and even code libraries and functions that do not actually exist. Goldstein et al. [16] assert that there are no easy solutions for mitigating AI-generated disinformation risks. As ChatGPT becomes an unconstrained "weapon factory" in the cybersecurity realm, the lack of public awareness of its capabilities poses a significant challenge. Without precautions, individuals may unwittingly fall victim to its misuse, prompting the need for the public to quickly learn to defend against such "new weapons." Table II illustrates how ChatGPT can facilitate easy and efficient phishing email generation, highlighting the potential risks associated with its misuse.
#### Iii-A2 Malware Creation
Although ChatGPT is capable of rejecting inappropriate queries like writing code for malware, hackers can find other ways to deceive ChatGPT and use it for malicious purposes, such as generating malware code or providing guidance on discovering vulnerabilities. While there
Fig. 1: Generating code for image classification task with ChatGPT
Fig. 2: Limitation of ChatGPT’s sensitive information detection algorithm
are concerns about how ChatGPT can assist hackers, it should be noted that its capabilities are still limited. Currently, it may be considered not too advanced for individuals with limited technical skills. It may pose a threat to less secure systems as it lowers the threshold for learning to become a hacker and increases the efficiency of generating attack variants. However, LLMs are not yet powerful enough to completely surpass human hackers, as they still require some adjustments to work properly and may make mistakes when structuring complex projects.
#### Iii-A3 AI Package Hallucination
By utilizing the code generation capabilities of ChatGPT, hackers can distribute malicious packages through fabricated code libraries. This new malicious package spreading technique is called AI package hallucination. This technique contains the initial steps to posing questions to ChatGPT, requesting packages to addressing coding problems, and obtaining a set of package recommendation, including unpublished packages in legitimate repositories. Then, hackers can publish a malicious package to the repositories with the name of the non-existent packages recommended by ChatGPT. Subsequently, if a user queries the same question to ChatGPT and it will suggest the initially non-existent package. Finally, the user utilizes the package and executes the malicious code in the package. Here, the hacker successfully delivers its created malicious package to the innocent users with the aid of ChatGPT. Furthermore, it is challenging to detect this threat by traditional methods, like typosquatting or masquerading, because it utilizes the inaccuracy of ChatGPT responses to customize attacks, so that the attack can use obfuscation techniques and create functional trojan packages to escape conventional detection.
### _Security Threats in ChatGPT_
Now we discuss new security threats raised by ChatGPT.
#### Iii-B1 Propaganda Threat
MMLs, including ChatGPT, are facing a myriad of challenges and opportunities as they progress towards achieving artificial general intelligence (AGI) [17]. In the task of classifying model-generated news, OpenAI's research reveals that the accuracy ranges from 48% to 57%, following a power law with 95% confidence intervals [10]. This indicates that it is challenging to differentiate between model-generated and human-written news articles [18], which brings the problem of AI-based plagiarism. Moreover, studies show that the current generation of language models can convincingly persuade humans, even on polarized policy issues [19]. However, malicious uses of language models can be hard to predict, as they often involve re-purposing these models in different environments or for unintended purposes [10]. Although ChatGPT is restricted to have the ability to create violent content, access up-to-date information, or encourage illegal activity, AI-based plagiarism becoming increasingly serious after the release of ChatGPT.
#### Iii-B2 Misinformation
With the Internet being a primary source of information, the challenge now lies not just in obtaining relevant content, but also in filtering out incorrect information from the vast amount available. Prior to ChatGPT, people relied on various methods to filter information, such as verifying the content directly, assessing the knowledge level of the creator, and evaluating language rigor, format correctness, and text length as indicators of reliability. However, ChatGPT's content generation capabilities excel in these aspects, creating a false sense of reliability. Users may fall into the trap of blindly trusting the content generated by ChatGPT after running simple tests. This blind trust can lead to wrong judgments due to ingrained habits. In critical fields like medical research papers, large-scale experiment background information, and policy content, referencing erroneous information from ChatGPT and drawing incorrect conclusions can have unpredictable consequences. These hazards arise not from intentional deceit, but rather from ChatGPT's factual errors despite its initially credible appearance.
#### Iii-B3 Overreliance on LLM-generated Content
The impact of MMLs like ChatGPT on people's access to information cannot
be overlooked. Currently, mainstream search engines serve as the primary source of information, where users enter keywords and receive related website links through web crawlers. Users must then sift through a large amount of information, considering factors like internal logic, information sources, and comments, to determine its veracity and usefulness until they are satisfied. ChatGPT can significantly save time in obtaining satisfactory information. Its few-shot strategy considers user satisfaction as a training standard rather than relying solely on strictly demonstrated factual data. However, this convenience may inadvertently lead users to become complacent, gradually giving up their critical thinking skills in evaluating information. As a result, MMLs such as ChatGPT could become the primary source of information for the general public. This shift in reliance on MMLs may create a situation where the public becomes more vulnerable to the influence and agendas of a limited number of individuals or organizations. Currently, Microsoft has announced the launch of New Bing, and Google and Baidu has followed suit with Bard and ERNIE Bot. Even individuals does not solely rely on models like ChatGPT to make decisions, their views may still be influenced by the specially curated information these models provide. While ChatGPT implements measures to prevent the generation of strongly biased views, it is possible for users to easily bypass these safeguards. For example, if you ask "the best restaurant," ChatGPT refuses to give a direct answer, but if you ask "1 best restaurant," it provides you with a direct response.
#### Iii-B4 Prompt Injections and Evasion
Prompt injections [20] involve bypassing LLM filters or manipulating LLMs to ignore previous instructions or perform unintended actions through carefully crafted prompts. By using such prompts, attackers can manipulate the LLM into unintended consequences, such as revealing sensitive information, obtaining responses that are restricted by the LLM (e.g., instructions on hacking an enterprise's server), or misleading the LLM into performing unintended actions with misleading context attacks. A similar attack in conventional attack is the evasion attack [21], which is the most common type of attack directed at machine learning models during inference. These attacks aim to deceive the model by introducing carefully crafted input data that leads it to make incorrect or unexpected predictions. What makes evasion attacks particularly concerning is that they can cause the model to behave incorrectly without needing access to its internal parameters or architecture. In the context of language models, evasion attacks involve deliberately constructing input text that exploits the model's weaknesses to produce unintended or biased responses. Given that ChatGPT is a LLM, it carries a higher risk of vulnerability to evasion attacks, even though no practical evasion attacks have been released yet.
#### Iii-B5 Training Data Poisoning
Training data poisoning attacks [22] pose a significant threat to the field of AI as they involve contaminating the training data used to train machine learning models, leading to erroneous outputs and unreliable decision-making. This manipulation of training data is a serious concern, particularly in the context of LLM training, as it can result in models behaving maliciously during inference. In LLMs, attackers can exploit vulnerabilities by manipulating the training data or fine-tuning procedures, introducing backdoors or vulnerabilities that compromise the security and effectiveness of the models. Despite being black-box models, LLMs are still susceptible to attacks, where an attacker can infiltrate the training data pipeline and inject malicious data. Unfortunately, LLMs lack robust data sanitization methodologies and do not integrate training data integrity checks or audits. This makes them vulnerable to potential issues and malicious manipulations in the training data. As a result, malicious insiders can compromise the fine-tuning process, introducing backdoors or vulnerabilities into the LLM to compromise its security and effectiveness.
## IV ChatGPT vs. Privacy
In this section, we investigate privacy violation of ChatGPT that was trained from the Internet data, including personal information.
### _Privacy Policy and Privacy Laws_
A privacy policy is a crucial legal document that provides users with detailed information about how their personal data is collected, processed, shared, and deleted. Personal data encompasses any information related to an identified or identifiable individual. For instance, social insurance numbers are widely recognized as personal data and serve as an indicator for assessing privacy protection.
In OpenAI's privacy policy, it informs users that various forms of personal information, including account details, user content, communication information, and social media data, are collected when users create accounts to access ChatGPT services. Additionally, data such as log data, usage data, device information, cookies, and analytics are automatically obtained by OpenAI through the usage of its services. Moreover, the privacy policy indicates that certain personal information may be shared with third-party entities, such as cloud vendors, web analytics service providers, government authorities, and industry peers. This sharing may be necessary for business operations and legal compliance, and data owners may not be notified of such disclosures. It is essential to acknowledge that the protection of users' personal information is entirely reliant on OpenAI's actions. As the custodian of all personal data, OpenAI makes decisions regarding the management, handling, and sharing of such information. Users, however, are granted certain rights, including access to their personal information, the ability to update, correct, or delete it, the option to restrict how OpenAI processes this data, and the right to withdraw consent for data collection and processing.
However, the regulation of OpenAI's handling of personal information is solely dependent on the privacy laws of different countries. For instance, the General Data Protection Regulation (GDPR) in Europe has strengthened data protection rules for individuals within the European Union (EU). It mandates that organizations must obtain explicit and informed consent from individuals for the collection and processing of their personal data and implement appropriate technical measures to protect this data. Moreover, GDPR grants individuals certain rights, including the right to access and delete their personal data, as well as the right to transfer their data from one service provider to another. While OpenAI claims to comply with GDPR and other relevant laws, such as the California Consumer Privacy Act (CCPA), as detailed in its privacy policies, these measures may not fully address individuals' privacy concerns regarding ChatGPT. For instance, OpenAI's flagship chatbot allows users to disable the chat history feature, but this alone may not suffice to alleviate all privacy concerns related to ChatGPT. Users may still feel uneasy about the potential risks associated with the storage and handling of their personal information by OpenAI.
### _Privacy Risks in ChatGPT_
It is noticeable that ChatGPT does not provide sufficient methods to preserve personal data according to GDPR. For example, ChatGPT may share users' data with third-party entities without explicit permissions of users. Here, we discuss the privacy risks in ChatGPT in details.
1) Privacy Leakage Due to Public Data Exploitation: ChatGPT's training process systematically involves scraping data from various sources such as websites, posts, books, and articles, which may include personal data. The size of the training dataset is growing exponentially, with ChatGPT's dataset exceeding 570 GB, necessitating a significant amount of real-world data for training. This raises concerns as it is possible that comments, blog posts, or product reviews authored by individuals might have been utilized in training ChatGPT without proper consent from data owners. This raises significant privacy concerns and may constitute a violation of privacy laws, such as GDPR and CCPA. Despite ChatGPT having a cutoff date in September 2021, the model's performance benefits from using the most recent data for training to avoid presenting users with outdated or inaccurate information. Consequently, as LLMs proliferate, the privacy violations arising from such data collection practices become increasingly serious, impacting a larger number of individuals.
2) Privacy Leakage Due to Personal Input Exploitation: ChatGPT's unique aspect lies in its reinforcement learning component, which allows it to train from users' prompts to minimize harmful, untruthful, or biased outputs. By leveraging users' prompts, ChatGPT aims to provide better solutions that align with users' expectations. However, the management of users' data by OpenAI has sparked significant privacy concerns. This resulted in Italy's decision to ban ChatGPT due to its violation of GDPR regulations. Although ChatGPT returned to Italy with added user controls over chat history and an age confirmation service for users below 18 years old, privacy concerns have prompted other countries, such as Canada, Germany, Sweden, and France, to launch their own investigations into this language model. Furthermore, ensuring the absolute security of personal data stored on OpenAI's cloud or third-party servers is challenging. Despite their efforts to protect data centers and machines, the frequent occurrence of cybersecurity incidents raises the risk of privacy leaks. Even though ChatGPT does not directly output personal information in response to inquiries, inference tasks can potentially reveal that ChatGPT has stored and recorded such data. Fig.3 illustrates an example where ChatGPT inferred a user's birth information from a Chinese identity number provided in the past, despite claiming not to have the ability to record personal information. Fig.2 highlights the method of avoiding sensitive information in ChatGPT and exposes the loopholes in this approach.
3) Emerging New Privacy Attacks on LLMs: In addition to the privacy violation stemming from the usage of public
Fig. 3: ChatGPT refuses to admit that he recorded my information
data and user inputs, the issue of privacy leakage from LLMs is currently under investigation. Traditional attacks on deep learning models, including language models, such as inference attacks, reconstruction attacks, and model extraction attacks, are not directly applicable to LLMs due to the limited accessibility of model parameters and the utilization of application programming interfaces (APIs) in most LLMs. Moreover, these attacks are typically studied on publicly available datasets, while MMLs like ChatGPT are employed in specific applications such as New Bing, TeleportHQ, and Wordtune. However, some vulnerabilities in ChatGPT's privacy have been identified. For instance, New Bing is susceptible to multi-step jailbreaking privacy attacks, allowing malicious actors to accurately extract personal information from the research results obtained through New Bing. Additionally, probing attacks can be employed by users to effectively ascertain if their personal data is being leaked from the language model. It is crucial to identify such privacy vulnerabilities in LLMs to gain a comprehensive understanding of the potential risks. This understanding will pave the way for the development of robust privacy preservation solutions that can effectively mitigate privacy risks in LLMs, including ChatGPT, and ensure the protection of user data.
4) Lack of Transparency: OpenAI bears the responsibility of storing, managing, and processing user data, granting them the authority to share this information with third parties, as explicitly stated in their privacy policy. However, ensuring that OpenAI adheres to stringent data protection measures and avoids any deliberate or unintentional compromises in the confidentiality of personal data poses significant challenges. There exists the possibility that personal data could be stored on unsecured data centers or shared with potentially unreliable industry partners. The lack of strict regulations or laws mandating transparency in data management exacerbates individuals' concerns regarding potential privacy violations. The fact that OpenAI operates as a black box to users further compounds the issue, making it difficult to conduct audits or verify how personal data is handled. The absence of transparency hinders the identification and prevention of potential privacy threats, leaving users unable to assess the privacy risks fully. When users opt for ChatGPT, their decision is primarily based on reading the privacy policy, but they may not be aware of the true extent of their personal data exposure until it is too late and the data has already been disclosed to the public. This lack of transparency and delayed awareness heighten users' apprehensions about their privacy and reinforce the need for stronger data protection measures and regulatory oversight to safeguard individuals' personal information effectively.
## V ChatGPT vs. Ethics
In addition to serious security and privacy issues, the ethical problem raised by ChatGPT has been recognized.
### _AI Ethics_
AI technology is a double-edged sword, with both positive and negative effects on human security, privacy, and dignity. On one hand, AI has been harnessed to protect people's privacy through techniques like federated learning [23, 24] and machine unlearning methods [25, 26]. It has also enhanced various aspects of people's lives, such as in automobile technology. Conversely, adversarial attack methods have been proposed to exploit vulnerabilities in machine learning models. These attacks, including poisoning, backdoor, membership inference, and model inversion attacks [27, 28, 29, 30], pose significant risks of information leakage when used by malicious individuals. Furthermore, accidents involving AI-controlled systems, such as automobiles and robots, can jeopardize human physical security and well-being. While AI was initially intended to assist and improve individuals' lives, the potential safety hazards and risks to privacy highlight the need for continued research and measures to mitigate and address these challenges. Balancing the benefits and risks of AI technology is crucial to ensure its responsible and ethical deployment.
In addition to its impact on individuals, AI technology has significant implications for society as a whole. As an artificial tool with reasoning and knowledge similar to humans, AI development has introduced a host of new challenges and complexities. These issues have sparked numerous discussions surrounding the fairness, impartiality, accountability, and transparency of AI. One key concern is the fairness issue, where AI models trained on biased or discriminatory data may perpetuate and even amplify these biases in their outputs. For instance, a machine learning model trained on data containing bias could inadvertently spread harmful behaviors and discriminatory practices among its users, leading to serious societal problems. Furthermore, the opacity of AI models poses another risk. The complex inner workings of these models make it challenging for humans to fully understand how they arrive at certain decisions or predictions. This lack of transparency hampers our ability to effectively control the behavior of AI systems and ensure they adhere to human-defined ethical principles.
Besides, AI has the potential to indirectly impact the environment, an aspect that has not yet received enough attention globally. Nowadays, numerous companies, educational institutions, and individuals are utilizing AI algorithms to train models for specific tasks. However, it is important to recognize that the training and application processes of these models consume significant amounts of electricity, leading to increased demand for electricity generation. This heightened demand for electricity generation, in turn, results in elevated carbon emissions, contributing to environmental pollution. Moreover, the trend of using increasingly large training datasets to train complex AI models further amplifies the need for electricity consumption. Additionally, the increased requirement for computational resources leads to a greater number of used and discarded devices, potentially contributing to electronic waste and pollution if not managed carefully.
### _Fairness and Bias_
current AI techniques exhibit unfair predictions that target certain groups of people. This bias arises because AI models are trained on data collected from human beings, who are not always objective in their actions and decisions. The widespread bias and discrimination prevalent in human society can also be found in the behavior of AI models. For instance, a study by Lahoti et al. (2019) [31] revealed a troubling case of bias in a job recommendation platform called XING. The platform gave a higher preference to a less qualified male candidate over a more qualified female candidate. This unfairness stemmed from the biases already present in the data used to train the machine learning-based recommendation system. The discrimination exhibited by human beings is transmitted to their students, i.e., AI models, which have the ability to further propagate such prejudice. The presence of bias and discrimination in AI models is a significant concern, as it can lead to harmful and unjust outcomes for individuals and society as a whole.
ChatGPT is not immune to the problem of bias, given its training on massive text data containing diverse opinions, including incorrect ones. Its impressive ability to generate long, natural sentences allows for fluent communication with users. However, this generation capability heavily relies on the vast knowledge acquired from a dataset as extensive as 50TB. As this data is extracted from the real world, it inevitably incorporates various stereotypes and discriminatory content, leading to occasional generation of inappropriate responses. Addressing bias is a common challenge in AI, and there have been efforts to tackle this issue in ChatGPT. For instance, Fig.4 illustrates ChatGPT's ability to recognize and reject biased statements when users ask questions with inherent bias. In such cases, ChatGPT provides a more impartial opinion, demonstrating its capacity to identify and counteract discrimination present in the query. Despite these attempts to mitigate bias, ethical challenges persist in the usage of ChatGPT. The need for ongoing awareness and improvement in handling bias is essential to ensure AI applications like ChatGPT uphold fairness and inclusivity.
### _Legal and Ethical Challenges_
The emergence of ChatGPT has given rise to several legal challenges, mainly due to the absence of specific regulations governing content produced by non-human entities. One contentious issue revolves around the copyright of texts generated by ChatGPT. Unlike merely copying data from its training set, ChatGPT can create original and natural-sounding text, complicating the determination of copyright ownership. This raises questions about whether one can use ChatGPT's responses for academic purposes, such as homework, essays, or research papers, and whether ChatGPT should be credited as a co-author in such cases. Additionally, there is uncertainty about the accountability if content generated by ChatGPT is misused for malicious purposes, leading to potential legal implications. Given the significant impact on education, there is a growing recognition of the challenges posed by ChatGPT and other LLMs. Therefore, governments are actively collaborating to develop comprehensive regulations that address legal concerns, including copyright issues. It is possible to add watermark on the images generated by AI models, but how to mark the texts produced by LLMs is challenging. It is important to establish a balanced framework that ensures appropriate attribution, ownership, and responsible use of content generated by AI models like ChatGPT, paving the way for a more informed and regulated landscape in the realm of AI-generated content.
Moreover, the writing proficiency displayed by ChatGPT is the result of analyzing numerous well-written texts crafted by skilled writers who invested substantial time and effort in honing their craft. For human writers, it takes years of practice and competition to attain such expertise. In contrast, ChatGPT can rapidly reach a comparable level, which has raised concerns and criticisms from professionals in the writing industry, including news writers and journalists worldwide. They contend that ChatGPT's ability to produce content akin to their carefully crafted works poses a threat to their livelihoods and raises issues of fairness and acceptability. The disparities between high-income and low-income countries also come into play. Many low-income countries lack the resources and capabilities to train or effectively utilize such advanced AI techniques. Additionally, they may lack the regulatory framework necessary to govern the usage of ChatGPT-like tools. Consequently, the technology gap between these two categories of countries is likely to widen, further exacerbating existing disparities.
## VI Detection and Classification of ChatGPT
ChatGPT's remarkable performance in various AI tasks places it at the forefront of NLP, sometimes even outperforming humans in complex tasks. This proficiency often blurs the line between human and model interaction, making it difficult for users to discern if they are conversing with ChatGPT.
### _Detect ChatGPT in a Conversation_
To help users identify if they are interacting with ChatGPT, Borji et al. [32], compiled a list of ChatGPT's limitations in different question categories, including reasoning, logic, math,
Fig. 4: Unbiased response example from ChatGPT when being asked with a biased question.
factual errors, bias and discrimination, wit and humor, coding, syntactic structure, spelling and grammar, and self-awareness. Three common question types where ChatGPT might falter are real-time problem-solving, reference, and facts, as illustrated in IV. By posing questions from these categories, users can potentially discern whether they are conversing with ChatGPT or a human. Obviously, the gap between the ChatGPT's responses and the ground truth is distinguishable, so it is helpful to identify. However, it is not guaranteed. The degree to which ChatGPT understand the prompts and responses is still unknown. The failure of ChatGPT addressing users' questions should be avoided. Therefore, it is meaningful to investigate and understand the limitations of ChatGPT and find possible approaches to identifying the valid texts generated by ChatGPT.
### _AI-Written Text Detection_
AI-written text detection plays a crucial role to identify and categorize text produced by ChatGPT based on its content. Guo et al. [33], analyzed linguistic features in both English and Chinese texts. They discovered that ChatGPT uses more nouns (English: Human=18.7%, ChatGPT=21.1%; Chinese: Human=26.0%, ChatGPT=27.5%), longer sentences, more determiners, conjunctions, auxiliary relations, and neutral sentiments compared to human answers. They also evaluated the performance of the RoBERTa-based-detector [34], achieving F1 scores of 88.53-98.78% on their datasets. The study further compared various aspects, highlighting the usefulness of DL-based models, the challenges of detecting ChatGPT-generated texts in single sentences versus full texts, and the importance of fine-grained corpus data in model training. Despite these efforts, there is currently no fully reliable detection model or scheme. OpenAI, the creator of ChatGPT, has acknowledged the limitations of their AI-written detector, which requires a minimum of 1,000 characters and may not always be accurate, especially for non-English content. As of July 20, 2023, the AI-written classifier is no longer available due to its low rate of accuracy. Other detectors, such as ZeroGPT, GPTZero, and GPTKit, have low accuracy to detect the ChatGPT-written texts. They acknowledge producing false negatives and false positives, making it unsuitable for reliable detection of issues like plagiarism from ChatGPT. In conclusion, while text classification is an essential aspect of ChatGPT's development, the current detection models have certain limitations, and further research is needed to improve their reliability and effectiveness.
### _Model Adversary Promotion_
Kreps and Kriner [18] highlight the dual nature of vulnerability discovery tools, which can have both positive and negative consequences. While faster tools can enhance security, those capable of continuously discovering vulnerabilities may flood the market with potential risks, causing more harm than good. ChatGPT's development and its detectors' design can follow a similar path [35]. As ChatGPT evolves, it may optimize its responses based on the best detectors available, leading to a continuous cycle of improvement. However, this ongoing evolution could result in ChatGPT and its detectors becoming increasingly powerful, making it more challenging for humans to discern whether the author is ChatGPT or not. Figure 5 illustrates this long-term perspective, where detector upgrades could inadvertently contribute to ChatGPT's capabilities, ultimately amplifying the discussed threats. In summary, the interaction between ChatGPT and its detectors can have complex consequences, necessitating a careful approach to ensure that their development aligns with beneficial outcomes and does not exacerbate the potential risks associated with AI-generated content.
## VII Conclusion and Future Works
In this paper, we have introduced technologies, features, limitations, and applications of ChatGPT, the most popular LLMs currently, and specifically focused the security, privacy, and ethical concerns raised by ChatGPT. At present, ChatGPT remains susceptible to its inaccurate responses and brings troubles in plagiarism detection and copyright protection. It is uncertain whether these issues could be thoroughly resolved with the improvement of LLMs. The limitations would be
Fig. 5: Adversary Promotion
long-standing problems in LLMs and need great efforts to mitigate, as outlined below.
* OpenAI's efforts to implement filters for malicious prompts are commendable, but attackers can still bypass these restrictions using specific language patterns. To counter this, rigorous input validation and sanitization for user-provided prompts are necessary, as well as context-aware filtering and output encoding to prevent prompt manipulation.
* The hallucination problem in ChatGPT, derived from inaccurate answers and misinformation, poses serious risks for individuals relying on AI for medical, legal, and daily decision-making. Improving the accuracy of language models and their ability to handle various questions is imperative. Incorporating human oversight and review to ensure accuracy, appropriateness, and impartiality of AI-generated content is crucial.
* Security threats, such as prompt injection and data poisoning, can lead to erroneous decisions. Identifying new vulnerabilities in ChatGPT and other language models and finding effective resolutions is vital. Detecting and preventing malicious exploitation by attackers are equally important.
* Privacy leakage, though challenging to reason with black-box models, can be observed through analyzing simple prompts and responses. To mitigate privacy concerns, ChatGPT must comply with privacy laws, develop large-scale prompt-response analysis for leakage detection, employ customized approaches to prevent leakage, and enhance transparency for reasoning responses.
* Considering ethical and social implications, including bias and manipulation risks, is essential. Bias can harm marginalized groups, necessitating diverse teams to identify and address bias in AI systems.
* Plagiarism and copyright violations are significant issues in ChatGPT. Distinguishing AI-generated text from human-written content is key. Effective AI-written text detectors with watermarking for images or videos need development to protect intellectual property and properly attribute AI-generated texts.
|
2304.04809 | A data-driven framework for structure-property correlation in ordered
and disordered cellular metamaterials | Cellular solids and micro-lattices are a class of lightweight architected
materials that have been established for their unique mechanical, thermal, and
acoustic properties. It has been shown that by tuning material architecture, a
combination of topology and solid(s) distribution, one can design new material
systems, also known as metamaterials, with superior performance compared to
conventional monolithic solids. Despite the continuously growing complexity of
synthesized microstructures, mainly enabled by developments in additive
manufacturing, correlating their morphological characteristics to the resulting
material properties has not advanced equally. This work aims to develop a
systematic data-driven framework that is capable of identifying all key
microstructural characteristics and evaluating their effect on a target
material property. The framework relies on integrating virtual structure
generation and quantification algorithms with interpretable surrogate models.
The effectiveness of the proposed approach is demonstrated by analyzing the
effective stiffness of a broad class of two-dimensional (2D) cellular
metamaterials with varying topological disorder. The results reveal the complex
manner in which well-known stiffness contributors, including nodal
connectivity, cooperate with often-overlooked microstructural features such as
strut orientation, to determine macroscopic material behavior. We further
re-examine Maxwell's criteria regarding the rigidity of frame structures, as
they pertain to the effective stiffness of cellular solids and showcase
microstructures that violate them. This framework can be used for
structure-property correlation in different classes of metamaterials as well as
the discovery of novel architectures with tailored combinations of material
properties. | Shengzhi Luan, Enze Chen, Joel John, Stavros Gaitanaros | 2023-04-10T18:38:52Z | http://arxiv.org/abs/2304.04809v1 | A data-driven framework for structure-property correlation in ordered and disordered cellular metamaterials
###### Abstract
Cellular solids and micro-lattices are a class of lightweight architected materials that have been established for their unique mechanical, thermal, and acoustic properties. It has been shown that by tuning material architecture, a combination of topology and solid(s) distribution, one can design new material systems, also known as metamaterials, with superior performance compared to conventional monolithic solids. Despite the continuously growing complexity of synthesized microstructures, mainly enabled by developments in additive manufacturing, correlating their morphological characteristics to the resulting material properties has not advanced equally. This work aims to develop a systematic data-driven framework that is capable of identifying all key microstructural characteristics and evaluating their effect on a target material property. The framework relies on integrating virtual structure generation and quantification algorithms with interpretable surrogate models. The effectiveness of the proposed approach is demonstrated by analyzing the effective stiffness of a broad class of two-dimensional (2D) cellular metamaterials with varying topological disorder. The results reveal the complex manner in which well-known stiffness contributors, including nodal connectivity, cooperate with often-overlooked microstructural features such as strut orientation, to determine macroscopic material behavior. We further re-examine Maxwell's criteria regarding the rigidity of frame structures, as they pertain to the effective stiffness of cellular solids and showcase microstructures that violate them. This framework can be used for structure-property correlation in different classes of metamaterials as well as the discovery of novel architectures with tailored combinations of material properties.
architected metamaterials | cellular structures | microstructure quantification | structure-property correlation
## Introduction
Ongoing advances in additive manufacturing have led to a rapid growth on the synthesis of architected materials with increasingly complex microstructures across several length scales [(1, 2)]. Cellular metamaterials, consisting of polyhedral topologies with struts [(3, 4)], plates [(5, 6)], or shells [(7, 8, 9)] as building blocks, are an important class of architected materials that has been the focus of a vast number of studies due to their unique mechanical, thermal and acoustic properties and their ubiquitous occurrence in nature [(10)]. To date, a plethora of engineered architectures have been reported with desirable effective properties including high stiffness [(11, 12, 13, 14)], negative Poisson's ratio [(15, 16, 17)], phononic bandgaps [(18, 19)], energy absorption [(20, 21, 22)], heat transfer [(23)], strength and resilience [(24, 25, 26, 27, 28)], toughness [(29, 30)], and others. Rigorous design methodologies, such as topology optimization, have led to material systems with stiffness and strength that reach their individual theoretical bounds [(12)]. Recently, data-driven techniques have also been employed [(31, 32, 33)] in order to further explore the property space of architected materials.
Despite the significant developments in both design algorithms and synthesis techniques, the analysis of the resulting material mechanics, and in particular the correlation of effective properties to all critical morphological features, has not progressed equally. Experimental measurements of truss-based metamaterials are often interpreted through the lens of classical scaling formulas, derived from dimensional analysis and beam theory. In this approach, an effective material property \(\bar{y}\) is connected with the underlying microstructure through a power law i.e., \(\bar{y}=\alpha\bar{\rho}^{\beta}\), where \(\bar{\rho}\) is the relative density of the architected material, and \((\alpha,\beta)\) are constants that depend on the properties of the parent solid and all topological characteristics. Typically, the exponent \(\beta\) is determined by the axial-to-bending ratio of the internal loads within the struts, designating the metamaterial as either stretching- or bending-dominated. This classification, in turn, is related to the corresponding average nodal connectivity. Several works [(34, 35, 36)] have illustrated the limitations of these scaling laws (e.g. not accounting for shear and/or the complex effect of the strut junctions) and reported mixed results regarding their agreement with experimental measurements on different microstructures. However, to date, a framework which enables comprehensive and accurate structure-property correlation, and at the same time is sufficiently general to be applicable to broad classes of architected materials, remains an open challenge.
In this work, we demonstrate a systematic data-driven approach that exploits cellular structure generation and quantification algorithms, to yield interpretable regression-based surrogate models with the ability to not only accurately capture a target material property, but more importantly identify key morphological descriptors and evaluate their effect on macroscopic material behavior. The accuracy and effectiveness of the proposed framework are demonstrated by analyzing the effective stiffness of 2D cellular metamaterials and elucidating the complex manner on which individual microstructural descriptors contribute to it. A distinct element of the techniques developed here, is that they enable the simultaneous
study of both perfectly-ordered lattices and highly disordered microstructures. It is also important to emphasize that all aspects of individual components of this framework (i.e. virtual structure generation, microstructure quantification, property estimation, and structure-property correlation) can be extended to other classes of metamaterials and various properties of interest.
### Virtual Structure Generation
In order to generate a large, but most importantly representative, dataset of truss metamaterials we introduce a virtual structure generation framework that combines tiling patterns and power diagrams. Our objective is to create cellular architectures with a varying, and to an extent tailored, amount of disorder so that the resulting materials, in turn, attain a wide range of properties. Power diagrams, also known as Laguerre diagrams or Dirichlet cell complexes, have found applications in many scientific disciplines including solid state physics, geology, and economics (37, 38). A finite-size planar domain \(M\) is first divided into a set \(\mathbf{S}\in\mathbb{R}^{2}\) of \(n\) cells, each associated with a seed \(\mathbf{s}_{i}\in\mathbf{S}=\{\mathbf{s}_{1},\mathbf{s}_{2},\cdots\mathbf{s}_{n}\}\) that is located at \(\mathbf{c}_{i}\). The convex region \(C\) occupied by a single cell is defined by
\[C_{i}=\Big{\{}\mathbf{x}\in\mathbb{R}^{2}\mid d_{L}(\mathbf{x},\mathbf{s}_{i})<d_{L}(\mathbf{ x},\mathbf{s}_{j}),\forall\mathbf{s}_{j}\in\mathbf{S}-\{\mathbf{s}_{i}\}\Big{\}}. \tag{1}\]
where \(d_{L}(\mathbf{x},\mathbf{s}_{i})=\|\mathbf{x}-\mathbf{c}_{i}\|^{2}-\mathbf{v}_{i}^{2}\) is the Laguerre (or power) distance and \(w_{i}\) is the weight assigned to each seed. A common geometric interpretation of a seed with weight is a circle with radius \(w_{i}\geq 0\). By controlling the placement of seeds and their corresponding weights, one can design innumerable polygonal tessellations. Note that in classical Voronoi diagrams all weights are equal and \(d_{L}\) is replaced by the Euclidean distance. Both Voronoi and Laguerre tessellations have been used extensively as representations of cellular microstructures found in nature, such as plant tissue and trabecular bone (10).
The structure generation framework proposed here follows a two-step process. In the first step, we design a primitive set of cellular metamaterials that encompasses, though not exclusively, well-known 2D architectures that have been the foci of numerous research works due to their unique material properties. Since a large portion of these materials consists of periodic microstructures, we employ tilings of the 2D Euclidean space as blueprints, in order to generate seed locations \(\mathbf{c}_{i}\) that yield perfectly-ordered topologies with distinct geometric characteristics. The art of tilings i.e., filling space with unit-tiles of a specific shape, can be traced back to antiquity (39), and more recently to the work of M.C.Escher (40). Although several of their geometric properties were known to ancient Greek mathematicians, a systematic analysis and listings of tilings first appear in the works of Kepler (41). Recent studies have focused on discovering new families of polyhedral tilings with applications in condensed matter physics and crystal structures (42, 43), though to date, a complete classification of 2D tilings by polygons remains an open-challenge (44). In this work we consider solely the case of edge-to-edge tilings, that is if two polygons intersect at more than one point then they must share a common edge. If each polygonal cell is vertex-transitive i.e., all vertices are equivalent with respect to the corresponding symmetry group, then the tiling is _uniform_ (or 1-uniform). It was shown by Kepler that there are only 11 uniform tilings in \(\mathbb{R}^{2}\). These include three regular tilings and eight semi-regular tilings that are also known as Archimedean. The former group consists of only one type of regular polygons (triangles, squares, and hexagons respectively) while the semi-regular tilings are formed by two or three types of regular polygons. By increasing the tile-transitivity one can define general classes of \(k\)-uniform tilings (39). Even though \(k\)-uniform tilings have been enumerated up to \(k=6\), here we will focus on materials based on the 1- and 2-uniform tilings, since the remaining classes do not contribute appreciably towards microstructural diversity. Of course, it is impossible to represent a broad class of ordered lattice metamaterials using only combinations of regular polygons, irrespective of the range of their types. To address this issue, we take advantage of the dual tilings, which are generated by placing seeds in the vertices of the polygons, and then tessellate the same planar domain with single-valued weights (Supporting Information, Fig. S1). Subsequently, we add to this set of designs two disordered topologies: a Voronoi diagram generated by seed placement based on a uniform distribution, and its corresponding dual tessellation.
The process described above leads to 64 generated architectures (see Fig. 1A), of which 60 are distinct (4 of the dual tilings correspond to an existing topology), that will form the basis for an expanded set of microstructures with much richer morphologies. It is important to emphasize here that this primitive dataset of architectures includes most, if not all, well-studied 2D cellular solids (1, 12) including the triangular, square and hexagonal honeycombs, the Kagome lattice, a subset of the stiffest isotropic lattices, and an archetypal random material. The second step in the microstructure generation process involves the expansion of this primitive set, by modifying the seed locations and their weights, to a large database of 2D structures with an amplifying amount of disorder. Three distinct operations (Fig. 1B) are employed to generate the final expanded set of 2D structures from each one of the basic designs: (a) seed translation, (b) increase of local weight variance, and (c) modification of global weight variance. The goal of the first operation is twofold. First, it allows us to break the symmetry of the basic periodic topologies, leading to a large group of microstructures with distinct disorder characteristics than the typical Voronoi tessellations. Second, it enables assessing the effect of defects or imperfections, on the resulting macroscopic properties, which is key to the design of robust architected metamaterials. The two operations that affect the weight of the seeds lead to individual distributions of cell sizes and thus, to an additional group of disordered metamaterials that also differs from random seed placement-based disordered topologies. It is important to emphasize that not all three operations are applicable, or meaningful, for all microstructures contained in the primitive set.
Seed translation refers to the change of seed position from its corresponding value in the initial primitive design (Supporting Information, Fig. S2a). For every seed \(\mathbf{s}_{i}\), its position \(\mathbf{c}_{i}\) is modified by a radial distance \(r\) along a random direction \(\theta\), resulting in a updated seed position \(\mathbf{c}_{i}^{\prime}\) with
\[\mathbf{c}_{i}^{\prime}=\mathbf{c}_{i}+r[\cos\theta,\sin\theta]^{\mathrm{T}}. \tag{2}\]
The two uniform random variables \(r\) and \(\theta\) are generated independently within the ranges \(r\sim U(0,\xi_{\mathrm{p}}d_{\mathrm{s}})\) and \(\theta\sim U(0,2\pi)\) respectively. The coefficient \(\xi_{\mathrm{p}}\) corresponds to the maximum relative change of position, whereas \(d_{\mathrm{s}}\) denotes the minimum
distance between neighboring seeds. Seed translation is only applied to the regular microstructures of the basic set i.e., the 1- and 2-uniform tilings and their duals, since this operation has a minimal effect on the microstructural characteristics of the disordered Voronoi tessellations.
Local weight variance describes the fluctuation of the seed weight from its original value in the basic tessellations. Here, each seed with weight \(w_{i}\) is replaced by a new seed with weight \(w_{i}^{\prime}=\eta_{v}w_{i}\), where \(\eta_{v}\sim N(1,\xi_{v})\) follows a normal distribution (Supporting Information, Fig. S2b) scaled by the local weight coefficient \(\xi_{v}\). Increasing the local weight variance is applied to all designs included in the primitive set. Global weight variance refers to the macroscopic distribution of seed weights with respect to their mean value \(\widetilde{w}\) (Supporting Information, Fig. S2c). Using a global weight coefficient \(\xi_{d}\), each seed \(\mathbf{s}_{i}\) is assigned a new weight \(w_{i}^{\prime}\) according to
\[w_{i}^{\prime}=w_{i}+\xi_{d}(\widetilde{w}-w_{i}). \tag{3}\]
Modifying global weight variance is only meaningful on designs generated by power diagrams with non-uniform weights, that is, the semi-regular and 2-uniform tilings. It is trivial to show that this operation leaves all other designs of our primitive set intact.
Applying these structure modification techniques to the basic set of 2D materials leads to a broader class of cellular topologies consisting of 1646 designs. Finally, to transition from pure geometric entities to real metamaterials, one needs to assign a solid material distribution along the polygonal topologies. In general, there is an infinite number of ways to achieve this by exploiting different combinations of non-uniform density distributions with arbitrary shapes of cross-sectional areas for each strut. For simplicity and tractability, here we assign uniform strut thickness to all edges of the underlying tessellations in order to attain a target relative density of the resulting truss metamaterial. To avoid errors in the relative density calculation, material overlap at the strut-junctions is removed. For each tessellation, we assign five different values of strut thickness to obtain corresponding relative densities \(\bar{\rho}=[1\%,5\%,10\%,15\%,20\%]\). Therefore, our final set of ordered and disordered cellular metamaterials consists of 8230 distinct microstructures.
### Microstructure Quantification
An essential component in the analysis of cellular metamaterials is an accurate description of their microstructural properties. To date, this is typically based on geometric classifications (e.g. periodic, three-dimensional, etc.), underlying crystal symmetries (e.g. FCC), or the mechanical behavior at the strut level (e.g. stretching-dominated). However, none of the above approaches is complete, failing to properly capture the complexity of cellular architectures, especially non-periodic ones. Recently, image-based tools have been employed (33) to address this issue and re-explore the design space of lightweight
Figure 1: Basic cellular architectures and structure modification. (A) Primitive set of 64 architectures that includes: eleven 1-uniform tilings (three regular marked in yellow and eight semi-regular marked in green), twenty 2-uniform tilings (marked in blue), a disordered Voronoi structure (marked in black), and their corresponding dual tessellations (marked in red). (B) Three operations (seed translation, increase of local weight variance, and modification of global weight variance) are used to expand the primitive set to an expanded representative dataset of 1646 architectures with wide distributions of morphological characteristics.
architected materials. Here we aim to establish a rigorous framework, employing a set of well-established metrics of morphological features, that allows the digitalization of microstructure characterization. The main advantage of this approach is that it can facilitate comparisons between different metamaterials and more importantly enable a systematic construction of structure-property relations, as is shown in the next section.
Based on previous studies focusing on identifying key microstructural features and their effects on the mechanical properties of porous, polycrystalline, and multi-phase composites, we adopt here a vector representation of cellular metamaterials, where each component represents a microstructural feature (45). At the metamaterial (macroscopic) scale, microstructure is characterized by the relative density (or volume fraction) and number of polygonal cells. Each cell is in turn described by its edge number, area, compactness and eccentricity. At a microscopic level, features include each strut length, orientation and nodal connectivity (also known as coordination number in solid state physics). Connecting these length-scales are the distance and angle between the seeds of neighboring cells. Note that we calculate the sine/cosine of all angles for the seeds to avoid any periodicity-induced bias issues. All of the above characteristics (Supporting Information, section 2) can be classified as either deterministic i.e., denoting a property (e.g. relative density) of the entire structure by a single value, or statistical, representing a given feature (e.g. strut length) by its probability distribution function across the material domain. For each feature in the latter category, we employ four descriptors corresponding to the first four moments of their probability distribution function i.e., its mean, variance, skewness and kurtosis. This process leads to a material descriptor vector \(\mathbf{C}\) with 42 components. The microscopic level features are shown in Fig. 2 as an example, while a detailed report for all structure components is included in Supporting Information, Fig. S3. Collectively, these morphological descriptors are tailored to the specific, even though broad, class of polygonal cellular microstructures examined in this study. However, different or additional characteristics can be used for different types of architected materials.
At this point it is important, with the aid of our microstructure quantification framework, to re-examine how representative the set of generated metamaterials is, and further discuss the comprehensiveness of the microstructure vector \(\mathbf{C}\). A potential pitfall in using select physical descriptors to characterize a given microstructure is the possibility of over-simplifying the representation of the material and creating microstructural uncertainty (46). This in turn, can result in unfeasible or erroneous structure-property correlation. Using the four moments of all statistical morphological features, however, promotes specificity of the corresponding probability distributions, since higher moments typically do not offer additional information. The large number of adopted descriptors and their intercorrelation indicate that it is improbable, if not impossible, to generate two distinct materials with the exact same sets of descriptors. Furthermore, each sub-set of materials (e.g. ordered) included in our general class of metamaterials should be uniquely represented by these descriptors. This can be seen, for example, in Fig. 2, where it is obvious that a periodic lattice will result in a discrete and discontinuous distribution of strut orientations, while the corresponding distribution of a disordered metamaterial is continuous within the same range. Note that these differences are consistent across many different descriptors. Finally, we ensure that the range of values for each microstructural feature, once recorded from all materials, is sufficiently extensive. This serves as additional evidence that our material dataset constitutes a representative set of 2D cellular microstructures (Supporting Information, Fig. S4). It is important to note that this result is not coincidental but was purposefully ensured by modifying all parameters of the structure generation process described in the previous section in an iterative process. In particular, the magnitudes of the three topological variation operations induced in the primitive set of material topologies were tuned to achieve the desired
Figure 2: Microscopic level descriptors for two samples: a periodic (A) and a disordered (B) metamaterial with their corresponding distributions of nodal connectivity (C,D), strut length (E,F), and strut orientation (G,H).
amount of microstructural richness.
## Effective Stiffness Prediction
We proceed to show that the structure vector \(\mathbf{C}\) can be incorporated in machine learning algorithms to create a surrogate model that can accurately predict a material property of any 2D cellular structure. Here we focus on effective stiffness, calculated through numerical simulations due to their low computational cost and high accuracy. All material microstructures are discretized with finite elements and are subsequently compressed in silico along the vertical direction. Both ordered and disordered topologies are treated as finite-size domains i.e., there is no periodicity applied to any simulation and the transverse edges remain constraint-free. To validate this modeling framework we select two architected materials, one ordered and one disordered, with distinct microstructural vectors, synthesize them by stereolithography and test them experimentally. Fig. 3 shows the comparison between measured and predicted compressive responses for the two microstructures, demonstrating the high accuracy and efficiency of the numerical models (all details of the simulations, synthesis, and testing processes are detailed in Materials and Methods). Hence, we employ the same procedure for all metamaterials in our dataset and record their relative stiffness \(\tilde{E}=E^{*}/E\), where \(E^{*}\) is the effective stiffness of the metamaterial and \(E\) is the Young's modulus of the parent solid.
Compiling the microstructural features and effective stiffness of each metamaterial enables the construction of a machine learning-based surrogate model with the ability to predict this target property directly from the structure vector \(\mathbf{C}\). A feature selection step is applied first, using filtering techniques based on the Pearson correlation coefficient \(p\) and mutual information \(I\), to increase robustness, accuracy and interpretability of the surrogate model. Pearson correlation measures the linear dependency between components of the feature vector and is employed here to reduce the inherent multicollinearity of the microstructural descriptors. Two pairs of these descriptors are found to be highly correlated: the mean of nodal connectivity has a negative correlation with the mean of cell edge number (\(p=-0.927\)); the mean of nearest neighboring seed distance has a positive correlation with the mean of cell area (\(p=0.920\)). Proceeding, we retain only the average nodal connectivity and nearest neighboring seed distance in the structure vector \(\mathbf{C}\). Mutual information indicates the dependency between target property and each microstructural descriptor and can therefore be used to eliminate redundant components. Here, only descriptors with \(I=0\) i.e., indicating no correlation with the target variable, are removed. Through the two filtering processes, the remaining structure vector has 23 components. It is important to note that the updated, and reduced, vector \(\mathbf{C}\) only applies to the material property under focus, while the original one can be applied towards any other material property.
Subsequently, we construct a random forest regression model in which a number of decision trees are trained on different subsets of the material dataset, including corresponding data of effective stiffness and microstructural descriptors. Optimal parameters are determined by a grid-search method based on the mean-squared error criterion. The prediction of the resulting regression model is generated by aggregating all predictions by each individual tree. The material dataset is randomly split into 70% for training and the remaining 30% for testing. The predicted values of the relative effective stiffness by the surrogate model are compared with the corresponding validation data in Fig. 3D. The effectiveness and accuracy of the machine learning algorithm are reflected in the overall satisfactory agreement and the resulting variance explained (coefficient of determination) \(R^{2}=0.960\). Additionally, by checking predictions and validations for different samples we verify that the accuracy of the algorithm is independent of the specific class of microstructures (e.g. ordered, monodisperse, low-density etc.). It is important to note that despite the excellent performance of the surrogate model, there is an inherent limitation induced by the representation of the microstructure using physical descriptors (vs. an image-based technique for example) that leads to an inevitable loss, though limited, of structure information. However, the particular microstructure quantification and associated learning algorithm adopted here are chosen to favor interpretability, which will be critical in extracting accurate structure-property relations, as is shown next.
## Structure-Property Correlation
To uncover the intricate relation between effective stiffness and key microstructural descriptors, we employ a Shapley
Figure 3: (A) Synthesized specimens for two cellular metamaterials and (B) their corresponding numerical models. (C) Comparison of experimental (solid) and numerical (dashed) stress-strain responses for the same samples. (D) Prediction vs. validation of relative effective stiffness for the random forest model.
Additive exPlanations (SHAP) framework [(47)] customized for decision-tree models. This technique takes advantage of Shapley values, a game-theory based metric of the contribution by each player, that has recently been extensively utilized for the interpretation of machine-learning model predictions. In this context, the players are the microstructural descriptors and the game's outcome corresponds to the surrogate model's stiffness prediction. Running SHAP for all microstructures in our dataset results in an \(8120\times 23\) matrix of Shapley values. Fig. 4A presents a summary plot that demonstrates the effect of each descriptor, ordered according to their importance (given space constraints, only the ten most essential descriptors are shown). As expected, the effective stiffness is governed by the relative density in a positive relation, as is the case for any porous material. The average nodal connectivity, another well-established parameter, is shown to be the second most critical factor, exhibiting an overall positive impact on the effective stiffness. Strut orientation plays a vital role as well in determining stiffness but in a non-straightforward manner. According to SHAP analysis, the third and fourth most important descriptors are the variance and kurtosis of the strut orientation distribution, in a positive and negative manner respectively. Both of these results imply that the existence of a large number of struts with deviating orientations from their mean value is highly desirable to attain high stiffness, though not necessary for all microstructures as shown next.
Furthermore, one can also use the SHAP framework to interpret individual model predictions. Doing so in select cellular architectures (for details see Supporting Information, section 3), shows that: (i) isotropic metamaterials attain the highest effective stiffness mainly due to their corresponding high nodal connectivity \(Z=6\); (ii) the variance of strut orientation becomes extremely important when the average nodal connectivity is not close to its extremal values; (iii) despite the relatively low nodal connectivity, features such as low kurtosis of strut orientation distribution help the well-studied Kagome lattice to reach a high stiffness, comparable to the corresponding rigidity of the triangular lattice.
To further probe the relation between cellular microstructure and effective stiffness, we take advantage of an additional interpretable regression technique, that is, Generalized Additive Model (GAM). GAMs enable modeling of non-linear relationships, with no strong assumption of their form, between a target response variable (i.e. effective stiffness) and one or more of the covariates (i.e. the microstructural descriptors). Here we focus solely on topological features and therefore eliminate relative density as a descriptor while keeping its value constant (\(\bar{\rho}=1\%\)) for all samples. The resulting predictions have a slightly lower accuracy (\(R^{2}=0.877\)) than the corresponding ones of the random forest model, but are still sufficient to elucidate how key descriptors, as identified by SHAP analysis, affect the resulting material behavior. The partial dependence plot depicted in Fig. 4B shows the effect of the mean nodal connectivity on the predicted effective stiffness. One can easily distinguish four regimes: two plateaus (\(Z<3.7\) and \(Z>5.1\)) where change of nodal connectivity
Figure 4: (A) The ten most essential descriptors ranked by their overall impact, as denoted by their mean absolute Shapley value (left). The summary plot (right) combines descriptor importance with their overall effect on stiffness: the color represents the relative value of each descriptor from low (blue) to high (red), while the x-axis (SHAP value) denotes the impact on the stiffness. (B) GAM partial dependence plot showing the relationship between effective stiffness and mean nodal connectivity. (C) Power exponents of the scaling of stiffness with density for all samples in the dataset as a function of their mean nodal connectivity.
has a minimum effect on stiffness, separated by two regions (\(3.7<Z<4.3\) and \(4.3<Z<5.1\)) where effective stiffness increases linearly, though with different slope, with regards to nodal connectivity. To fully comprehend these trends, it is constructive to compare them with Maxwell's criteria regarding the rigidity of pin-jointed frames [(48)]. For the case of 2D structures, Maxwell's rule states that a nodal connectivity \(Z=4\) is a necessary condition for rigidity, while the necessary and sufficient condition requires \(Z=6\)[(49)]. When applied to material microstructures, where strut junctions have resistance to rotation, these criteria can be used to classify cellular solids as stretching- or bending-dominated, referring to the governing internal load within their struts. The former class of materials is known to have a high stiffness that scales linearly with relative density (i.e. \(\bar{E}\sim\bar{\rho}\)) while bending-dominated lattices and foams are more flexible, with their stiffness following a cubic power-law (i.e. \(\bar{E}\sim\bar{\rho}^{3}\)). Our regression model can be used to rigorously derive these classifications and more importantly explain the behavior of those microstructures that do not fall within these categories. To do so, we first examine the power exponent \(\beta\) of all cellular metamaterials in our dataset as a function of mean nodal connectivity (Fig. 4C). It is clear that the four regimes identified by GAM are correlated, to a certain degree, with the trend shown here. That is, microstructures with very low nodal connectivity tend to be bending-dominated while high connectivity leads to stretching-dominated materials. In between the two regimes, stiffness increases significantly with nodal connectivity but at a decreasing rate, since it is gradually converging to the theoretical limit defined by the Hashin-Shtrikman bounds. However, there are two important deviations from Maxwell's rules: (i) one can notice that there are several materials with \(Z<4\) that are nonetheless stretching-dominated, and (ii) it appears that \(Z>4.5\) is sufficient to achieve rigidity in 2D metamaterials.
To understand why some structures violate Maxwell's rule, as applied to cellular metamaterials, attention needs to be given to the descriptors corresponding to strut orientation, as revealed by SHAP analysis. The corresponding GAM results for these descriptors are shown in Fig. 5A-B. In both figures, one notices regions where the variance and/or the kurtosis of the strut orientation probability distribution, provide significant stiffness enhancement. To demonstrate this phenomenon, we choose two materials microstructures (corresponding to the outlier data shown in (Fig. 4C)) whose mechanical behavior does not correlate to their mean nodal connectivity. The two metamaterials M1 and M2, shown in Fig. 5C, have nodal connectivities \(Z=3.5\) and \(Z=4.5\) respectively. M1 is shown, however, to be much stiffer than M2. In addition, the effective stiffness of the material increases linearly with density, indicating its stretching-dominated behavior, even though it does not meet the criteria for rigidity according to Maxwell's rule i.e. \(Z<4\). This discrepancy can be explained, in this case, through the large difference of the strut orientation kurtosis for M1 and M2, as highlighted in Fig. 5B. Collectively, this analysis shows that nodal connectivity is insufficient to predict effective stiffness except for specific regimes, i.e. when \(Z<3.3\) and \(Z>4.5\). For all microstructures outside of that, one has to examine their struts' orientation, through the key descriptors, to understand the resulting material behavior.
## Discussion
To conclude, we present here a data-driven framework for architected metamaterials that integrates virtual structure generation, microstructure quantification, machine-learning models and interpretability algorithms, in order to identify key morphological characteristics and their effects to effective stiffness. The results validate the importance of nodal connectivity on achieving the stiffest possible cellular metamaterials for a given relative density. It is further shown how strut orientation, represented through the second and fourth moments of its probability distribution, can become the critical factor that governs effective stiffness when the average nodal connectivity is not approaching its extremal values. Collectively, the findings demonstrate the ability of the developed framework to reveal structure-property relations that are inaccessible by conventional experimental and numerical techniques due to the vast number of involved parameters. It is important to highlight that even though effective stiffness is chosen here as a target property of interest, this data-driven approach can be applied to analyze any material property that can be calculated with reasonable accuracy, which for certain nonlinear problems may increase substantially the computational cost associated with the training and validation of the surrogate model. Furthermore, in cases where material properties sensitive to local flaws are investigated, additional morphological features and/or processing parameters (e.g. the resolution of the 3D-printer) should be considered as structure descriptors. It would be of immense interest to assess how the individual influence of each descriptor, represented here through the corresponding SHAP values, changes when different effective properties are examined. The proposed approach can therefore be further extended for the design of multifunctional metamaterials with tailored combinations of mechanical,
Fig. 5: Effect of strut orientation and behavior of two metamaterials that do not follow Maxwell’s rules. (A) GAM partial dependence plot showing the relationship between effective stiffness and variance of strut orientation (M1 and M2 marked with red and blue dots). (B) GAM partial dependence plot showing the relationship between effective stiffness and kurtosis of strut orientation (M1 and M2 marked with red and blue dots). (C) The linear (M1) and cubic (M2) scaling of stiffness with density for the two cellular metamaterials.
thermal and/or acoustic properties. By modifying accordingly the structure vector, this framework can also be extended to other classes of metamaterials including architectures with curved members, thin shells, and/or density gradients. Finally, we envision that online databases that contain complete microstructure vectors and their corresponding material properties will greatly accelerate the design and discovery of novel architected metamaterials.
## Materials and Methods
The compressive numerical simulation of the metamaterials is conducted using Abaqus (SIMULIA). Each strut is discretized into 10 shear deformable beam elements and the parent solid material is modeled as linear elastic. A vertical displacement is applied on top while all nodes at the bottom are fixed. The effective stiffness is measured as the slope of the stress-strain response (up to 2% strain). The specimens are synthesized using a stereolithography-based printer (Form 3 by Formlabs) with \(50\mu\)m layer thickness using a Rigid-10K resin with Young's modulus \(E=10000\)MPa. The post-processing protocol involves washing in isopropyl alcohol for 15 minutes to clear off the liquid resin, followed by UV-curing for 120 minutes in 70degC. The compressive experiment is carried out using an MTS Criterion Series 40 testing stage quasi-statically, and both the force and displacement are measured by the MTS load cell.
This work was supported by Johns Hopkins University and the National Science Foundation (NSF) under Award Number 2129825.
|
2301.08635 | Supercritical colliding wind binaries | Context. Particle-accelerating colliding-wind binaries (PACWBs) are systems
that are formed by two massive and hot stars and produce nonthermal (NT)
radiation. The key elements of these systems are fast winds and the shocks that
they create when they collide. Binaries with nonaccreting young pulsars have
also been detected as NT emitters, again as a consequence of the wind-wind
interaction. Black holes (BHs) might produce NT radiation by this mechanism if
they accrete at super-Eddington rates. In such cases, the disk is expected to
launch a radiation-driven wind, and if this wind has an equatorial component,
it can collide with the companion star yielding a PACWB. These systems are
supercritical colliding wind binaries (SCWBs).
Aims. We aim to characterize the particle acceleration and NT radiation
produced by the collision of winds in binary systems composed of a
superaccreting BH and an early-type star.
Methods. We estimated the terminal velocity of the disk-driven wind by
calculating the spatial distribution of the radiation fields and their effect
on disk particles. We then found the location of the wind collision region and
calculated the timescales of energy gain and losses of relativistic particles
undergoing diffusive acceleration. With this information, we were able to
compute the associated spectral energy distribution of the radiation.
Results. We find that the interaction of winds can produce NT emission from
radio up to tens of GeV, with luminosities in the range of $\sim
10^{33}-10^{35} \, {\rm erg \, s^{-1}}$, which for the most part are
contributed by electron synchrotron and inverse Compton radiation.
Conclusions. We conclude that SCWBs, such as some ultraluminous X-ray sources
and some Galactic X-ray binaries, are capable of accelerating cosmic rays and
producing NT electromagnetic emission from radio to $\gamma$-rays, in addition
to the thermal components. | L. Abaroa, G. E. Romero, P. Sotomayor | 2023-01-20T15:35:22Z | http://arxiv.org/abs/2301.08635v1 | # Supercritical colliding wind binaries
###### Abstract
Context:Particle-accelerating colliding-wind binaries (PACWBs) are systems that are formed by two massive and hot stars and produce nonthermal radiation. The key elements of these systems are fast winds and the shocks that they create when they collide. Binaries with nonaccreting young pulsars have also been detected as nonthermal emitters, again as a consequence of the wind-wind interaction. Black holes might produce nonthermal radiation by this mechanism if they accrete at super-Eddington rates. In such cases, the disk is expected to launch a radiation-driven wind, and if this wind has an equatorial component, it can collide with the companion star yielding a PACWB. These systems are supercritical colliding wind binaries.
Aims:We aim to characterize the particle acceleration and nonthermal radiation produced by the collision of winds in binary systems composed of a superaccreting black hole and an early-type star.
Methods:We estimated the terminal velocity of the disk-driven wind by calculating the spatial distribution of the radiation fields and their effect on disk particles. We then found the location of the wind collision region and calculated the timescales of energy gain and losses of relativistic particles undergoing diffusive particle acceleration. With this information, we were able to compute the associated spectral energy distribution of the radiation. We calculated a number of specific models with different parameters to explore this scenario.
Results:We find that the interaction of winds can produce nonthermal emission from radio up to tens of GeV, with luminosities in the range of \(\sim 10^{33}\)-\(10^{35}\) erg s\({}^{-1}\), which for the most part are contributed by electron synchrotron and inverse Compton radiation.
Conclusions:We conclude that supercritical colliding wind binaries, such as some ultraluminous X-ray sources and some Galactic X-ray binaries, are capable of accelerating cosmic rays and producing nonthermal electromagnetic emission from radio to \(\gamma\)-rays, in addition to the thermal components.
## 1 Introduction
Early-type stars are very hot and their radiation fields can launch powerful particle winds (Lamers & Cassinelli 1999). Such winds quickly reach supersonic velocities and accelerate to terminal velocities in the range \((2-4)\times 10^{3}\) km s\({}^{-1}\) (Abbott 1978; Muijres et al. 2012). When two massive stars with powerful winds form a binary system, the winds collide producing shocks separated by a contact discontinuity from where matter is evacuated (e.g., Stevens et al. 1992). A reverse shock moves in the wind of each star. When such shocks are adiabatic, they can accelerate suprathermal particles up to relativistic energies (Eichler & Usov 1993; Pittard et al. 2020). These particles, in turn, cool mainly by synchrotron radiation and inverse Compton upscattering of stellar photons, emitting nonthermal radiation (Eichler & Usov 1993; Benaglia & Romero 2003; Reimer et al. 2006; De Becker 2007; Reitberger et al. 2014; del Palacio et al. 2016; Pittard et al. 2021). Proton acceleration can also lead to gamma-ray emission through \(pp\) collisions and the subsequent \(\pi^{0}\) decays (e.g., Balbo & Walter 2017; Grimaldo et al. 2019).
The actual fraction of particle-accelerating colliding-wind binaries (PACWBs) among massive colliding wind binaries (CWBs) is not well known. De Becker & Raucq (2013) list 43 confirmed cases, mostly detected at radio wavelengths. These authors mention several other candidates, and new sources have been found since the publication of this latter work (e.g., Benaglia et al. 2015; del Palacio et al. 2016). The total kinetic power of these systems ranges from \(\sim 10^{34}\) to more than \(10^{37}\) erg s\({}^{-1}\). The most extreme cases are WR89, WR98, and WR140, with powers of between 6 and 8 times \(10^{37}\) erg s\({}^{-1}\). Less than \(10^{-7}\) of this power is finally radiated through synchrotron radio emission. The most luminous nonthermal radio-emitting CWB is WR140, with a total radio luminosity of \(\sim 2.6\times 10^{30}\) erg s\({}^{-1}\).
Contrary to the radio emission, high-energy radiation has been more difficult to detect in CWBs. At X-rays, the thermal component usually dominates and hinders the detection of nonthermal components. In the gamma-ray domain, only two systems have been detected so far: \(\eta\) Carinae and WR11. The latter is the nearest known CWB. At \(d\sim 340\) pc, it shows a gamma-ray luminosity in the _Fermi_-LAT energy range of \(L_{\gamma}=(3.7\pm 0.7)\times 10^{31}\) erg s\({}^{-1}\). This luminosity amounts to \(\sim 6\times 10^{-6}\) of the total wind kinetic power (Pshirkov 2016). Similar fractions for other, more distant PACWBs yield fluxes that are undetectable with the currently available instrumentation. The notable exception is the mentioned \(\eta\) Carinae.
\(\eta\) Carinae is a heavily obscured and peculiar object. The system includes a luminous blue variable (LBV) star of about 90 solar masses and a secondary Wolf-Rayet (WR) star of \(\sim 30\) solar masses. \(\eta\) Carinae is the most luminous binary in the Galaxy, with a bolometric luminosity of about \(5\times 10^{6}\)\(L_{\odot}\). The mass-loss rate of the primary is extremely high, reaching up to \(10^{-3}\)\(M_{\odot}\) yr\({}^{-1}\). The binary was detected in hard X-rays by _INTEGRAL_(Leyder et al., 2008) and _Suzaku_(Okazaki et al., 2008), suggesting the presence of relativistic electrons in the system. _AGILE_ detected gamma rays from \(\eta\) Carinae for the first time (Tavani et al., 2009). The system was subsequently detected by _Fermi_(Abdo et al., 2010) with a luminosity of \(\sim 10^{34}\) erg s\({}^{-1}\). The observations reveal the presence of a hard component in the spectrum around periastron, which disappears near apastron. Such a component has been explained through the decay of \(\pi^{0}\) produced by relativistic protons interacting with the dense stellar wind (Farnier et al., 2011). There is a clear variability with the orbital phase. Different behaviors are observed at low (\(0.3-10\) GeV) and high (\(>10\) GeV) gamma-ray energies. The low-energy component is likely produced by inverse Compton scattering of stellar photons (Balbo & Walter, 2017).
The case of \(\eta\) Carinae suggests that super-Eddington systems might be particularly powerful PACWBs. When a compact object such as a black hole accretes with rates that exceed the Eddington rate, the radiation pressure on the surface of the disk will overcome the gravitational attraction and matter will be expelled from the surface of the disk in the form of a strong wind. Such winds can rival and even surpass those of the most luminous CWBs in terms of kinetic power. When the donor star is a hot early-type star also endowed with a wind, a supercritical colliding wind binary (SCWB) can be formed. Such systems should have strong shocks and are potential particle accelerators and nonthermal emitters.
In our Galaxy, there are some examples of black hole X-ray binaries with disks that launch strong outflows. Two examples are GRS 1915+105 (Mirabel & Rodriguez, 1994; Neilsen & Lee, 2009) and V404 Cygni (Munoz-Darias et al., 2016; Tetarenko et al., 2017). However, the donor star in both of these systems is a low-mass star. Another well-known supercritical source is the Galactic microquasar SS433, which is a confirmed nonthermal emitter and might be a possible example of a SCWB in our Galaxy (see Fabrika, 2004, for an extensive review). Many ultraluminous X-ray sources (ULXs) detected in nearby galaxies might also belong to this category of sources.
In this paper, we explore the CWB scenario where one of the winds is launched by a supercritical disk around a black hole. We start by characterizing the disk model and the radiation fields it produces (Sections 2.1 and 2.2). We then investigate the motion of particles under the radiation pressure in such fields (Section 2.3). This allows us to get reasonable estimates of the terminal velocities expected for the matter ejected in the direction of the companion star. We then proceed to study the wind interactions, shock adiabaticity, and other relevant issues for particle acceleration in Sect. 3. This is followed by estimates of energy losses for accelerated particles, particle distributions, and calculations of the nonthermal output (Sect. 4). In Section 5 we present results for some specific models, with different choices of the accretor mass and the accretion power. The donor star is supposed to be a hot O.5V with a temperature of 41500 K and a kinetic power of a few times \(10^{37}\) erg s\({}^{-1}\). We finally apply our model to the extragalactic binary system NGC 4190 ULX 1. After a discussion (Sect. 7), we close with a summary and our conclusions.
## 2 The accretion disk and its wind
We assume that the X-ray binary is composed of a Population I star and a nonrotating stellar mass black hole (BH) in a close orbit.
The orbital semi-axis \(a\), the stellar radius, and the mass ratio of the system, \(q=M_{*}/M_{\rm BH}\), satisfy (Eggleton, 1983):
\[R_{\rm bob}^{*}=\frac{a\ 0.49\ q^{2/3}}{0.6\ q^{2/3}+\ln{(1+q^{1/3})}}, \tag{1}\]
where \(M_{*}\) is the mass of the star and \(M_{\rm BH}\) the mass of the BH. Hence, the star overflows its Roche lobe \(R_{\rm bob}^{*}\), transfers mass to the BH through the Lagrange point, and an accretion disk is formed due to the angular momentum of the system.
In this section, we describe the semi-analytical models we use to study the accretion disk, the spatial distribution of the radiation fields produced by the disk, and the wind ejected from its surface. We assume a Newtonian potential for the gravity field, because we are interested in weak-field processes.
### Accretion disk
We adopt cylindrical coordinates with axial symmetry along the \(z\)-axis, neglect the self-gravity of the disk gas, and consider a nonmagnetized disk with a super-Eddington accretion rate at the outer part of the disk, \(\dot{m}_{\rm input}=\dot{M}_{\rm input}/\dot{M}_{\rm Edd}\gg 1\), where \(\dot{M}_{\rm input}\) is the input of mass per time unit in the accretion disk. The Eddington rate is given by
\[\dot{M}_{\rm Edd}=\frac{L_{\rm Edd}}{\eta c^{2}}\approx 2.2\times 10^{-8}M_{\rm BH }\ {\rm yr}^{-1}=1.4\times 10^{18}\frac{M_{\rm BH}}{M_{\odot}}\ {\rm g \,s^{-1}}, \tag{2}\]
with \(L_{\rm Edd}\) the Eddington luminosity1, \(\eta\approx 0.1\) the accretion efficiency, and \(c\) the speed of light.
Footnote 1: The Eddington luminosity is defined as the luminosity required to balance the attractive gravitational pull of the accreting object by radiation pressure.
The critical or spherization radius, given by
\[r_{\rm crit}\sim 40\dot{m}_{\rm input}r_{\rm g}, \tag{3}\]
separates the disk in two regions: a standard outer disk (Shakura & Sunyaev, 1973) and a radiation-dominated inner disk with advection (Fukue, 2004). In relation (3), \(r_{\rm g}=GM_{\rm BH}/c^{2}\) is the gravitational radius of the BH, with \(G\) the gravitational constant. In the disk model, the advection is parameterized as a fraction \(f\) of the viscous heating, \(Q_{\rm adv}=fQ_{\rm vis}\), and the disk becomes geometrically thick in the inner region, where the ejection of winds by the radiation force helps to regulate the mass-accretion rate onto the BH (\(\dot{M}_{\rm acc}\)) at the Eddington rate2.
Footnote 2: \(\dot{M}_{\rm acc}=\dot{M}_{\rm input}\) in the outer region of the disk and \(\dot{M}_{\rm acc}=\dot{M}_{\rm input}r_{\rm d}/r_{\rm crit}\) in the inner region (Fukue, 2004).
As the disk is optically thick, we assume that it radiates locally as a blackbody. The radiation intensity of a plasma element in the comoving frame of the outer and inner disk, at a radius \(r_{\rm d}\) measured on the equatorial plane, is
\[I_{0}=\frac{1}{\pi}\sigma T_{\rm eff}^{4}=\left\{\begin{array}{ll} \frac{1}{\pi}\frac{3GM_{\rm BH}\dot{M}_{\rm input}}{8\pi r_{\rm d}^{3}}\,f_{ \rm in},&r_{\rm d}>r_{\rm crit}\\ \frac{1}{\pi}\frac{3}{4}\sqrt{c_{3}}\frac{L_{\rm Edd}}{4\pi r_{\rm d}^{2}},&r_{ \rm d}\leq r_{\rm crit},\end{array}\right. \tag{4}\]
where \(\sqrt{c_{3}}=H/r_{\rm d}=\tan\delta\), with \(H\) the scale height of the disk, \(\delta\) the disk opening angle, and \(f_{\rm in}=1-r_{\rm in}/r_{\rm d}\approx 1\) (as \(r_{\rm d}>r_{\rm crit}\), then \(r_{\rm d}\gg r_{\rm in}\)). Here, \(c_{3}\) (along with \(c_{1}\) and \(c_{2}\) used in the following section) is a coefficient that depends on the advection parameter, the adiabatic index of the gas \(\gamma\), and the viscosity \(\alpha\) (see Appendix in Fukue 2004). We adopt a disk with \(f=0.5\) and \(\alpha=0.5\); that is, we assume equipartition between advection and viscous heating. The index \(\gamma=4/3\) corresponds to a radiation-dominated gas in the inner disk. These values lead to a disk-opening angle of \(\delta=30^{\circ}\).
### Radiation fields
The wind launched from the radiation-dominated region of the disk will be determined by the radiation forces acting upon the particles on the disk surface and along their subsequent trajectories. These forces will have contributions from different parts of the disk in relative motion with respect to the particles. Some radiation will be blueshifted and some will be redshifted, resulting in differential azimuthal forces onto the particles and then transferring angular momentum from the disk to the wind.
In order to obtain the radiative contribution of each plasma element \(\mathcal{Q}=(r_{d},\phi_{d},H)\) of the disk surface, at any point \(\mathcal{P}=(r,\phi,z)\) above or below the disk, we make a transformation of the intensity between the inertial and comoving reference frames (see Fig. 1). Azimuthal symmetry allows us to perform the calculations for any constant value of \(\phi\); therefore, we do it in the \(r_{2}\) plane (\(\phi=0\)). The relativistic Doppler factor \(\mathcal{D}\) provides the transformation between the reference frames (McKinley 1980):
\[I=\mathcal{D}^{4}I_{0}=\frac{I_{0}}{(1+z_{\rm red})^{4}}, \tag{5}\]
where \(z_{\rm red}\) is the redshift factor given by (Watarai & Fukue 1999)
\[z_{\rm red}=-\frac{(r\cos\phi_{\rm d}-r_{\rm d})v_{r}-(r\sin\phi_{\rm d})v_{ \phi}+(z-H)v_{r}c_{3}}{cD}. \tag{6}\]
Here, \(D\) is the distance between \(\mathcal{P}\) and \(\mathcal{Q}\), \(v_{\phi}=c_{2}v_{\rm K}\) is the azimuthal velocity and \(v_{r}=-c_{1}\alpha v_{\rm K}\) is the radial velocity, with \(v_{\rm K}=\sqrt{GM_{\rm BH}/r_{\rm d}}\) the Keplerian velocity. We note that we only consider the inner part of the disk for these calculations, because the intensity decays with \(r_{\rm d}^{-3}\).
The radiation-field tensor is given by (Rybicki & Lightman 1986)
\[R^{\mu\nu}=\left(\begin{matrix}E&\frac{1}{c}F^{\alpha}\\ \frac{1}{c}F^{\alpha}&P^{\alpha\beta}\end{matrix}\right)=\frac{1}{c}\int I \vec{p}\,\vec{r}\,\vec{r}\,\mathcal{Q}\Omega. \tag{7}\]
This is a symmetric tensor of rank 2 and therefore we calculate ten elements in total: one for the energy density \(E\), three for the flux vector \(F^{\alpha}\), and six for the stress tensor \(P^{\alpha\beta}\). In Eq. 7, \(\vec{p}\,\vec{r}\) and \(\vec{j}\,^{\nu}\) are the direction cosines in Cartesian coordinates, and \(\Omega\) is the solid angle subtended by \(\mathcal{Q}\):
\[\vec{j}\,^{\mu}=\left(\frac{r-r_{\rm d}\cos\phi_{\rm d}}{D},\frac{-r_{\rm d} \sin\phi_{\rm d}}{D},\frac{z-H}{D}\right), \tag{8}\]
\[\mathcal{\rm\rm\rm\rm\rm\rm\,d}\Omega=\frac{-(r\cos\phi_{\rm d}-r_{\rm d}) \sin\delta+(z-H)\cos\delta}{D^{3}}\,\mathcal{\rm\rm\rm\,d}S, \tag{9}\]
where \(\mathcal{\rm\rm\,d}S=\sqrt{1+c_{3}}\,r_{\rm d}\,\mathcal{\rm\rm\,d}r_{\rm d} \,\mathcal{\rm\rm\,d}\phi_{\rm d}\).
### Particles in the photon field
We now calculate the trajectory and velocity of the particles ejected from the disk when they interact with photons of the ambient radiation field.
The equation of motion under a relativistic, radiation treatment, is given by (Kato & Fukue 2020)
\[f_{\mu}=-\frac{\partial\Phi_{\rm e}}{\partial x^{\nu}}+R^{\nu}_{\mu\nu}, \tag{10}\]
where \(f_{\mu}\) is the four-force per unit volume. The effective potential \(\Phi_{\rm e}\) is the sum of gravitational (\(\Phi_{\rm g}\)) and centrifugal (\(\Phi_{\rm c}\)) potentials. The semicolon (; ) in the second term refers to the covariant differentiation of the energy-momentum tensor.
As we consider a disk with axial symmetry, the gravitational potential cancels out in the azimuthal coordinate: \(\partial\Phi_{\rm g}/\partial x^{\alpha}=(\partial\Phi_{\rm g}/\partial r,0, \partial\Phi_{\rm g}/\partial z)\). Furthermore, the centrifugal potential acts only in the radial direction: \(\partial\Phi_{\rm c}/\partial x^{\alpha}=(\vec{l}^{2}/r^{3},0,0)\), with \(l=r_{\rm d}^{2}\omega_{\rm K}\) being the specific angular momentum of the disk, and \(\omega_{\rm K}\) the angular velocity.
The equations of motion of the ejected particles can be found working with Eq. 10. In terms of the nondimensional form of the radiation-field tensor elements \(\epsilon\), \(f^{\alpha}\), and \(p^{\alpha\beta}\), the system of differential, tensorial, and coupled equations is as follows (equations originally derived by Watarai & Fukue 1999, Eq. 42-44, but now extended to second order in velocity):
Radial coordinate:
\[\frac{\mathcal{\rm\rm\,d}\mu^{\prime}}{\mathcal{\rm\,d}\tau}= -\frac{\partial\Phi_{\rm g}}{\partial r}+\frac{l^{2}}{r^{3}}+ \tag{11}\] \[+\frac{1}{2}[\gamma f^{\prime}-p^{\prime\beta}u_{\beta}-\gamma^{2 }\epsilon u^{\prime}+u^{\prime}(2\gamma f^{\beta}u_{\beta}-p^{\prime\beta}u_{ \beta}u_{\delta})].\]
Azimuthal coordinate:
\[\frac{1}{r}\frac{\mathcal{\rm\,d}l}{\mathcal{\rm\,d}\tau} =\frac{1}{2}[\gamma f^{\phi}-p^{\prime\beta}u_{\beta}-\gamma^{2 }\epsilon(l/r)+ \tag{12}\] \[+(l/r)(2\gamma f^{\beta}u_{\beta}-p^{\prime\beta}u_{\beta}u_{ \delta})].\]
Figure 1: Geometry of the present disk model. The radiation fields are calculated in the \(rz\) plane, where \(\phi=0\). Here, \(\mathcal{Q}\) is the position of the plasma element of the disk and \(\mathcal{P}\) the point of calculation on the \(rz\) plane. The scale height of the disk is \(H\), and \(D\) is the distance between \(\mathcal{Q}\) and \(\mathcal{P}\). The short arrow is the direction cosine \(\vec{p}\,^{\prime}\). This figure is adapted from Watarai & Fukue (1999).
Height coordinate:
\[\frac{\mathrm{d}u^{z}}{\mathrm{d}r}= -\frac{\partial\Phi_{\mathrm{g}}}{\partial z}+ \tag{13}\] \[+\frac{1}{2}[\gamma f^{z}-p^{\beta}u_{\beta}-\gamma^{2}\epsilon u ^{z}+u^{z}(2\gamma f^{\beta}u_{\beta}-p^{\beta\delta}u_{\beta}u_{\delta})],\]
where \(u^{\mu}\) denotes the four-velocity of the particles and \(\gamma\) the Lorentz factor, which is given by
\[\gamma=\sqrt{1+u^{r}u^{r}+l^{2}/r^{2}+u^{z}u^{z}}. \tag{14}\]
The free parameter of these equations of motion is the launching radius of the particles, \(r_{0}\), and we assume as initial condition that the particles co-rotate with the disk at this radius, \(u_{0}^{\alpha}=(0,l_{0}/r_{0},0)\).
We solve this system of equations numerically and assume that the kinematics of the disk-driven wind is roughly described by the trajectory and terminal velocities obtained for the test particles. As the accretion rate in the inner region of the disk is regulated at the Eddington rate, the mass loss in the wind is of the order of the super-Eddington accretion rate, \(\dot{M}_{dw}\sim\dot{M}_{\mathrm{input}}\).
## 3 Collision of winds
The wind ejected from the disk collides with the stellar wind at the interaction region, where shocks are generated giving rise to particle acceleration. An important quantity that characterizes the wind is the kinetic luminosity, \(L_{\mathrm{K}}=\dot{M}v^{2}/2\), where \(\dot{M}\) is the mass-loss rate and \(v\) the velocity of the fluid. A small fraction of the total kinetic power of the wind is transferred to relativistic particles, \(L_{\mathrm{rel}}\sim 0.1L_{\mathrm{K}}\), where we assume equipartition between relativistic protons and electrons (\(L_{\mathrm{e}}=L_{\mathrm{p}}\)). The mass-loss rate and velocity of the stellar wind are set according to the parameters found in the literature for the type of star we have chosen (e.g., Kobulnicky et al., 2019). In the case of the disk-driven wind, the velocity is obtained following the procedures described in the previous section. Given the orbital separation, the disk inclination, and the stellar size, we estimate that \(\sim 10\%\) of the original kinetic power reaches the acceleration region. We assume a circular orbit, that is, the geometry associated with the collision of winds does not depend on the orbital phase.
In this section, we describe the models for the collision region, the magnetic ambient field, and the shocks. We adopt a one-zone approximation for these calculations.
### Contact discontinuity
The winds collide at a surface called the contact discontinuity (CD). The stagnation point (SP) is the closest position of the CD to the star, and is located where the ram pressures of the winds are in equilibrium,
\[P_{\mathrm{ram}}(r_{\mathrm{BH}})=\rho_{\mathrm{dw}}v_{\mathrm{dw}}^{2}=\rho _{\mathrm{*w}}v_{\mathrm{*w}}^{2}=P_{\mathrm{ram}}(r_{*}). \tag{15}\]
Here, \(r_{\mathrm{BH}}\) and \(r_{*}\) are the distances to the SP from the BH and from the center of the star, respectively. The density of the spherical stellar wind at this location is given by
\[\rho_{\mathrm{*w}}=\frac{\dot{M}_{*}}{4\pi r_{*}^{2}v_{\mathrm{*w}}}, \tag{16}\]
whereas the density of the disk-driven wind reads
\[\rho_{\mathrm{dw}}=\frac{\dot{M}_{\mathrm{dw}}}{\Omega r_{\mathrm{BH}}^{2}v_ {\mathrm{dw}}}, \tag{17}\]
where \(\Omega=2\pi(1-\cos\theta)\) is the solid angle of the wind and \(\theta\) the semi-opening angle of the wind. Solving these equations we obtain the position of the SP.
### Magnetic field
The strength of the magnetic field at the CD is essentially determined by the stellar surface magnetic field \(B_{*}\). The intensity of \(B_{\mathrm{CD}}\) and its topology -dipole (i), radial (ii), or toroidal (iii)-, is given by (Eichler and Usov, 1993):
\[B_{\mathrm{CD}}\approx B_{*}\times\left\{\begin{array}{ll}R_{*}^{3}/r_{*}^{ 3},&R_{*}<r_{*}<r_{\mathrm{A}},&\mathrm{(i)}\\ \\ R_{*}^{2}/r_{*}r_{*}^{2},&r_{\mathrm{A}}<r_{*}<R_{*}(v_{\mathrm{*w}}/v_{*}^{ \mathrm{tot}}),&\mathrm{(ii)}\\ \\ R_{*}^{2}v_{*}^{\mathrm{rot}}/r_{*}r_{*}v_{\mathrm{*w}},&R_{*}(v_{\mathrm{*w}}/v _{*}^{\mathrm{tot}})<r_{*},&\mathrm{(iii)},\end{array}\right. \tag{18}\]
where \(R_{*}\) is the stellar radius, \(r_{\mathrm{A}}\) the Alfven radius, and \(v_{*}^{\mathrm{tot}}\sim 0.1v_{\mathrm{*w}}\) the surface rotation velocity.
### Particle acceleration and shock
Particles are accelerated up to relativistic energies in the collision region through a first-order diffusive shock mechanism. Two shock fronts are generated: a forward shock (FS) that propagates through the stellar wind, and a reverse shock (RS) that propagates through the wind of the disk. The diffusive acceleration rate of the particles is given by (e.g., Protheroe, 1999):
\[t_{\mathrm{se}}^{-1}=\eta_{\mathrm{se}}\,\frac{e\,Z\,c\,\,B_{\mathrm{CD}}}{E}, \tag{19}\]
where \(e\) is the electric charge, \(Z\) the atomic number, and \(E\) is the energy of the particle. The acceleration efficiency, \(\eta_{\mathrm{se}}\), depends on the diffusion coefficient of the particles, the shock velocity, and the angle between the magnetic field and the normal to the shock plane. We assume that the shock propagates perpendicular to the magnetic field and that diffusion occurs in the Bohm regime. Thus, the acceleration efficiency is
\[\eta_{\mathrm{se}}\approx\frac{3}{8}\left(\frac{v_{\mathrm{sh}}}{c}\right)^{2}, \tag{20}\]
where the shock velocities in the reference frame where one of the fluids is at rest, \(v_{\mathrm{*w}}=0\), and the other one moves with a velocity \(v_{\mathrm{dw}}\), are given by (Lee et al., 1996):
\[v_{\mathrm{RS}}=-\frac{4}{3}\frac{1}{1+\sqrt{n_{\mathrm{*w}}/n_{\mathrm{dw}}}}v _{\mathrm{dw}}, \tag{21}\]
Figure 2: Scheme of the wind collision seen in the \(rz\) plane (not to scale), adapted from Abaroa et al. (2021).
\[v_{\rm FS}=\frac{4}{3}\frac{1}{1+\sqrt{n_{\rm dw}/n_{\rm sw}}}v_{\rm dw}. \tag{22}\]
Here, \(n_{\rm sw}\) and \(n_{\rm dw}\) are the numerical densities of the winds (\(n_{\rm w}=\rho_{\rm w}/m_{\rm p}\), with \(m_{\rm p}\) the mass of the proton). The pressure and density of the shocked medium are calculated following the Rankine-Hugoniot relations (e.g., Lamers & Cassinelli 1999).
As we are interested in the nonthermal particle distribution, we investigate only adiabatic shocks; that is, where radiative losses are negligible. This is because in radiative shocks the gas in the shocked region emits large amounts of thermal radiation; the system therefore loses energy, the entropy increases, and the medium becomes increasingly homogeneous. If magnetic-inhomogeneities disappear, the acceleration efficiency decays abruptly, aborting the formation of nonthermal distributions.
The shock is adiabatic if the thermal cooling length \(R_{\rm A}\) is larger than the size of the acceleration region \(\Delta x_{\rm ac}\) (McCray & Snow 1979). The cooling length reads
\[R_{\rm A}=\frac{5.9\times 10^{11}\mu(v_{\rm sh}/{\rm km~{}s^{-1}})^{3}}{(n_{ \rm w}/{\rm cm}^{-3})[\Lambda(T_{\rm sh})/{\rm erg~{}s^{-1}~{}cm^{-3}}]}~{}{\rm cm}. \tag{23}\]
Here, \(n_{\rm sw}\) is the number density of the undisturbed medium, \(\mu\) is the average molecular weight (\(\mu=0.6\) for a fully ionized plasma), and \(\Lambda(T_{\rm sh})\) is the cooling function, which depends on the shock temperature (Raymond et al. 1976; Myasnikov et al. 1998; Wolfire et al. 2003). This latter function can be written as
\[\Lambda(T_{\rm sh})=\left\{\begin{array}{ll}4\times 10^{-29}T_{\rm sh}^{0.8 },&55~{}{\rm K}\leq T_{\rm sh}<10^{4}~{}{\rm K}\\ 7\times 10^{-27}T_{\rm sh},&10^{4}~{}{\rm K}\leq T_{\rm sh}<10^{5}~{}{\rm K}\\ 7\times 10^{-19}T_{\rm sh}^{-0.6},&10^{5}~{}{\rm K}\leq T_{\rm sh}<4\times 10^{7}~ {}{\rm K}\\ 3\times 10^{-27}T_{\rm sh}^{0.5},&T_{\rm sh}\geq 4\times 10^{7}~{}{\rm K}, \end{array}\right. \tag{24}\]
where \(T_{\rm sh}\) is given by
\[T_{\rm sh}=18.21\mu\left(\frac{v_{\rm sh}}{{\rm km~{}s^{-1}}}\right)^{2}{\rm K}. \tag{25}\]
We note that this temperature has a maximum value in a collisional plasma: it is self-regulated by the pair-creation, satisfying in any case \(k_{\rm B}T_{\rm sh}<1\) MeV (\(k_{\rm B}\) is the Boltzmann constant).
We assume that the size of the acceleration region is a fraction of the distance from the BH to the SP, \(\Delta x_{\rm ac}\sim 0.1r_{\rm BH}\). As we consider a one-zone model, the acceleration region must be narrow enough to generate near-homogeneous conditions.
## 4 Radiative processes
Particles accelerated at the shock can cool through different processes and produce nonthermal radiation. The timescales associated to this cooling are related to the total energy-loss of the particles:
\[\frac{dE}{dt}\approx\frac{-E}{t_{\rm cool}}, \tag{26}\]
where the total cooling rate is
\[t_{\rm cool}^{-1}=\sum_{i}t_{i}^{-1}, \tag{27}\]
where \(t_{i}\) corresponds to each timescale of the involved cooling processes.
We assume advective escape; that is, particles are removed from the acceleration region by the bulk motion of the fluid. If the timescales of cooling are shorter than those of escape, particles radiate before they escape from the acceleration region. The maximum energy for each kind of particle can be inferred by looking at the point where the acceleration rate is equal to the total cooling or escape rate. This energy cannot exceed the maximum energy imposed by the Hillas criterion, \(E_{\rm e,p}^{\rm max}<E_{\rm Hillas}^{\rm max}\).
As we are interested in nonthermal processes, we work at scales smaller than the size of the binary system and assume that rotation effects are negligible there. Effects caused by the orbital motion, such as Coriolis or centrifugal forces, could be relevant on larger scales and lead to strong disturbances in the flow and thermal processes. The analysis of such effects usually requires numerical simulations and is beyond the scope of this work.
### Energy losses
We consider adiabatic and radiative losses. Adiabatic cooling is related to the work done by the particles of the wind to expand the shocked gas. Radiative cooling is caused by nonthermal processes as a consequence of the interaction of the wind particles with ambient fields and matter.
Our model is lepto-hadronic, and so we calculate the following radiative processes numerically:
-Synchrotron: interaction of protons and electrons with the ambient magnetic field, which will be amplified by a factor of 4 in the shocked region due to Rankine-Hugoniot relations.
-Inverse Compton (IC): collision of relativistic electrons with photons of the ambient radiation field.
-Bremmstrahlung: Coulombian interactions between relativistic electrons and cold matter.
-Photo-hadronic interactions: interaction of highly relativistic protons with photons of the ambient radiation field.
-Proton-proton: collision of relativistic protons with cold matter.
In addition, we take into account inelastic collision of particles with atoms of the dense medium; that is, ionization losses, which can be relevant in the 1-100 MeV range. We note that in this energy range, ionization losses largely dominate over Coulomb scatterings (see e.g., Fig. 7 from O'C Drury et al. 1996), and so the latter are not included in our analysis. The reader is referred to Romero & Paredes (2011), Romero & Vila (2014), and Muller & Romero (2020) plus references therein for additional details on radiative processes.
### Particle distribution
We investigate the evolution of particles that are accelerated at the shock and injected into the surrounding medium. The medium around the shock is the shocked gas of the winds. In this paper, we restrict our analysis to this region. Beyond the binary, the surrounding medium has been affected by the effects of the stellar winds, and so the system is expected to be located inside a bubble inflated by the winds and surrounded by a shell formed with the swept-up material at distances of a few to several parsecs, depending on the mass of the black hole progenitor. Inside the bubble, where the advected protons will be injected, the density is expected to be lower than that of the standard interstellar medium (e.g., around 0.01 cm\({}^{-3}\) or less). In the shell, there should be sufficient material for hadronic interactions with the protons diffused or transported from the central source3.
Footnote 2: The \(\alpha\)-dependence of the \
turnover of the synchrotron spectrum in SCWBs, which is expected to be at \(\sim\)GHz frequencies (see e.g., Rybicki & Lightman 1986; del Palacio et al. 2016).
Other absorption processes, such as the photoelectric effect, direct Compton, or \(\gamma\)-nucleon pair creation, are not taken into account in this paper. Their cross-sections are not high enough to become relevant in the calculation of opacity given the ambient densities that we consider here (see Fig. 1 from Reynoso et al. 2011).
## 5 Results
In this section, we apply our model to a generic super-Eddington X-ray binary. We consider a star of spectral type O.5V (Table 1) and investigate four scenarios: in scenarios S1 and S2 we regard a BH with mass \(M_{\rm BH}=5M_{\odot}\) and mass-accretion rates of \(10^{2}\dot{M}_{\rm Edd}\) and \(10^{3}\dot{M}_{\rm Edd}\), respectively; in scenarios S3 and S4 we consider a BH with mass \(M_{\rm BH}=20M_{\odot}\) and again accretion rates of \(10^{2}\dot{M}_{\rm Edd}\) and \(10^{3}\dot{M}_{\rm Edd}\), respectively. The complete set of parameters is summarized in Table 2.
### Wind
We calculate the radiation-field tensor (Eq. 7) and in Fig. 3 we show the distribution of the energy density (\(\epsilon\)) on the \(rz\) plane, where the black zone is the inflated inner disk. We obtain a strong azimuthal flux component of the radiation-field tensor. This distribution is the same in all four scenarios, because in the critical disk the radiation-field tensor depends on advection, viscosity, and adiabatic parameters, which remain the same in all cases.
We solve Eqs. 11-13 to find the trajectory and velocity of the particles. Both quantities are determined by \(R^{\alpha r}\) and therefore we obtain the same trajectories and terminal velocities in S1-S4. As an example, in Fig. 4 we show the normalized velocity of a test particle, with a launching radius of \(40r_{\rm g}\) (\(\equiv 20r_{\rm s}\)), which reaches a terminal velocity of \(\approx 0.16c\). This result does not vary much if we vary the launching radius (\(\pm 0.02c\) for \(\pm 20r_{\rm g}\)).
The particles describe a helical trajectory in the vicinity of the BH for two main reasons (Fig. 5). The first is the presence of the strong azimuthal components of the radiation field, which help to maintain the spiral geometry of the particles in the inner disk. The second reason is the condition imposed for the particle ejection, namely that the particles initially have only azimuthal velocity. The intensity of the radiation field decays rapidly with distance from the BH, and therefore the ejected particles follow a spiral trajectory near the BH, but beyond a certain radius (\(\sim r_{\rm crit}\)) they follow a free path with a strong component of the radial velocity.
The overall result is an equatorial wind with terminal velocities of the order of \(0.15c\). The kinetic power of these winds is in the range \(10^{39-41}\) erg s\({}^{-1}\), which is well above the power of the winds of typical WR or OB stars. Therefore, in general, the disk wind is expected to overwhelm the stellar wind.
### Energy gain and losses
We follow the calculations in Sect. 3.1 and find that, in all four scenarios, the SP is located near the stellar surface and the wind of the disk completely sweeps up the stellar wind, as expected. Hence, the forward shock is in the stellar atmosphere, fully ra
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{3}{c}{Type O.5V Star} \\ \hline Parameter & Value & Units \\ \hline \(M_{\star}\) & 37 & \(M_{\odot}\) \\ \(R_{\star}\) & 11 & \(R_{\odot}\) \\ \(T_{\rm eff}\) & 41500 & K \\ \(M_{\star}\) & \(1.2\times 10^{-5}\) & \(M_{\odot}\) yr\({}^{-1}\) \\ \(v_{\rm sw}\) & \(2.9\times 10^{8}\) & cm s\({}^{-1}\) \\ \(v_{\star}^{\rm rot}\) & \(2.9\times 10^{7}\) & cm s\({}^{-1}\) \\ \(L_{\star}^{\star}\) & \(3.2\times 10^{37}\) & erg s\({}^{-1}\) \\ \(B_{\star}\) & 750 & G \\ \hline \end{tabular}
\end{table}
Table 1: Parameters adopted in the model for the star of type O.5V. All parameters from Kobulnicky et al. (2019), with the exception for the magnetic field (from Wade & MiMeS Collaboration 2015).
Figure 4: Normalized velocity of a wind test particle as a function of the Schwarzschild radius. The particle reaches a terminal velocity of \(\sim 0.16c\) for a launching radius of \(r_{0}=20r_{\rm s}\) (coincident with the vertical axis).
Figure 3: Contour maps of the spatial distribution of the normalized radiation energy density \(\epsilon\) in the \(rz\) plane above the accretion disk. Both axes are in units of Schwarzschild radius. The color bar is the intensity of \(\epsilon\) and the black zone is the inflated disk (\(f=0.5\), \(\alpha=0.5\), \(\gamma=4/3\)).
diative, and completely unable to accelerate relativistic particles. Only the reverse shock (RS) is suitable for the task. As \(r_{*}\approx R_{*}\), the magnetic field at the CD is \(B_{\rm CD}\approx B_{*}\).
The cooling length of the RS is greater than the size of the acceleration region in all cases (see Table 2); this is why the shock is adiabatic and the acceleration efficiency of the process is relatively high: \(\eta_{\rm ac}\sim 10^{-2}\) (see Sect. 3.3). The shock velocity is \(\approx 4.4\times 10^{9}\) cm s\({}^{-1}\) and the temperature of the shocked gas reaches \(\approx 4.8\times 10^{10}\) K.
We calculate the energy gain and losses of the shock-accelerated particles following Sect. 4. Highly relativistic protons escape from the acceleration region without cooling in all scenarios considered here (with energies up to \(E_{\rm p}\approx 1\) PeV) and are injected into the interstellar medium (ISM). Protons are advected, that is, they are removed from the collision region by the bulk motion of the fluid. They therefore do not interact with ambient material at scales similar to that of the system. Electrons cool mainly through IC and synchrotron mechanisms, and reach a maximum energy of \(E_{\rm e}\approx 100\) GeV. To obtain the electron distribution, we solve the transport equation considering only the dominant IC and synchrotron losses, and a power-law injection function with a spectral index of 2.2 and an exponential cutoff (see Eq. 29).
### Spectral energy distribution
Figure 6 shows the SEDs of the four scenarios. The only thermal component of the spectrum is the photosphere of the optically thick disk-driven wind. The emission peak of the wind for S1 and S2 is \(\approx 10^{37}\) erg s\({}^{-1}\), whereas for S3 and S4 the peak is \(\approx 10^{38}\) erg s\({}^{-1}\). This occurs at energies of \(\sim 100\) eV for S1 and S3, and \(\sim 30\) eV for S2 and S4. Therefore, if \(M_{\rm BH}\) increases, the luminosity is higher and, if the mass-accretion rate increases, the luminosity peak occurs at lower energies.
In the case of the nonthermal spectrum, we calculate the emission due to synchrotron and IC losses. In the latter case, we consider the photon fields of the star and of the wind photosphere as targets. In all cases, the dominant IC contribution is that of the star. The luminosity in S3 and S4 is an order of magnitude greater than that in S1 and S2. This is because of the modification of the orbital parameters when the BH mass varies: to guarantee the overflow of the Roche lobe, the orbital semi-axis varies with \(M_{\rm BH}\), which results in variation in the size of the acceleration re
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & & \multicolumn{4}{c}{Scenario} \\ \cline{3-6} Parameter & Symbol [units] & S1 & S2 & S3 & S4 \\ \hline Black hole mass\({}^{(1)}\) & \(M_{\rm BH}\) [\(M_{\odot}\)] & 5 & 5 & 20 & 20 \\ Mass accretion rate\({}^{(1)}\) & \(\dot{M}_{\rm input}\) [\(M_{\odot}\) yr\({}^{-1}\)] & \(1.1\times 10^{-5}\) & \(1.1\times 10^{-4}\) & \(4.4\times 10^{-5}\) & \(4.4\times 10^{-4}\) \\ \hline Orbital semi-axis\({}^{(1)}\) & \(a\) [\(R_{\odot}\)] & 15 & 15 & 22 & 22 \\ Gravitational radius\({}^{(2)}\) & \(r_{\rm g}\) [cm] & \(7.4\times 10^{5}\) & \(7.4\times 10^{5}\) & \(2.9\times 10^{6}\) & \(2.9\times 10^{6}\) \\ Critical radius\({}^{(2)}\) & \(r_{\rm crit}\) [\(r_{\rm g}\)] & 4000 & 40000 & 40000 & 40000 \\ Mass loss in disk winds\({}^{(1)}\) & \(\dot{M}_{\rm dw}\) [\(M_{\odot}\) yr\({}^{-1}\)] & \(10^{-5}\) & \(10^{-4}\) & \(4.3\times 10^{-5}\) & \(4.3\times 10^{-4}\) \\ Kinetic power of the disk-driven wind\({}^{(2)}\) & \(L_{\rm K}^{\rm dw}\) [erg s\({}^{-1}\)] & \(7.8\times 10^{39}\) & \(7.8\times 10^{40}\) & \(3.4\times 10^{40}\) & \(3.4\times 10^{41}\) \\ Cold matter density at SP\({}^{(2)}\) & \(n_{\rm dw}\) [cm\({}^{-3}\)] & \(5.1\times 10^{12}\) & \(5.1\times 10^{13}\) & \(2.9\times 10^{12}\) & \(2.9\times 10^{13}\) \\ Distance to SP from BH\({}^{(2)}\) & \(r_{\rm BH}\) [cm] & \(2.7\times 10^{11}\) & \(2.7\times 10^{11}\) & \(7.6\times 10^{11}\) & \(7.6\times 10^{11}\) \\ Size of acceleration region\({}^{(1)}\) & \(\Delta x_{\rm ac}\) [cm] & \(2.7\times 10^{10}\) & \(2.7\times 10^{10}\) & \(7.6\times 10^{10}\) & \(7.6\times 10^{10}\) \\ Shock cold matter density\({}^{(2)}\) & \(n_{\rm RS}\) [cm\({}^{-3}\)] & \(2\times 10^{13}\) & \(2\times 10^{14}\) & \(1.2\times 10^{13}\) & \(1.2\times 10^{14}\) \\ Shock cooling length\({}^{(2)}\) & \(R_{\rm A}\) [cm] & \(7.6\times 10^{11}\) & \(7.6\times 10^{10}\) & \(1.3\times 10^{12}\) & \(1.3\times 10^{11}\) \\ Maximum energy of electrons\({}^{(2)}\) & \(E_{\rm e}^{\rm max}\) [eV] & \(10^{11}\) & \(1.6\times 10^{11}\) & \(10^{11}\) & \(10^{11}\) \\ Maximum energy of protons\({}^{(2)}\) & \(E_{\rm p}^{\rm max}\) [eV] & \(10^{15}\) & \(10^{15}\) & \(3\times 10^{15}\) & \(3.1\times 10^{15}\) \\ Emission peak (low energy)\({}^{(2)}\) & \(L_{\rm 0.01mm}\) [erg s\({}^{-1}\)] & \(3.2\times 10^{33}\) & \(3.2\times 10^{33}\) & \(8\times 10^{34}\) & \(8\times 10^{34}\) \\ Emission peak (high energy)\({}^{(2)}\) & \(L_{\rm 10MeV}\) [erg s\({}^{-1}\)] & \(4\times 10^{32}\) & \(4\times 10^{32}\) & \(10^{34}\) & \(10^{34}\) \\ \hline \end{tabular}
\end{table}
Table 2: Parameters of the different scenarios calculated for the model. We indicate with superscript (1) those parameters that are assumed and with (2) those that are derived. In all models, the system is supposed to be oriented face-on to the observer, that is, the inclination of the normal to the orbital plane \(i\) with respect to the line of the sight is \(\sim 0^{\circ}\).
Figure 5: Trajectory of a test particle in the Cartesian 3D-space in units of Schwarzschild radius. The particles describe a helical trajectory above the inner disk because of the strong azimuthal radiation fields. The launching radius of this test particle is \(r_{0}=20r_{*}\).
gion and the photon density at SP, among other parameters. The emission peak at low energies is \(\sim 10^{33}\) erg s\({}^{-1}\) for S1 and S2, and \(\sim 10^{35}\) erg s\({}^{-1}\) for S3 and S4. At high energies, the emission peak is \(\sim 10^{52}\) erg s\({}^{-1}\) (S1 and S2) and \(\sim 10^{34}\) erg s\({}^{-1}\) (S3 and S4). The gamma-ray absorption due to \(\gamma\gamma\) annihilation is total for energies \(>10\) GeV in all scenarios4.
Footnote 4: We note that, since we assume a nearly face-on inclination of the system, there are no significant variations of the radiative output associated with the orbital phase. If the system were oriented nearly edge-on, the emission would be modulated by the orbital phase due to absorption (for details see Romero et al. 2010).
Attenuation due to material between the source and the observer, that is, absorption by external cold gas, is mainly in the optical-to-UV range and at soft X-rays. At radio wavelengths, refractive scintillation on free electrons of the ISM occurs at lower frequencies than predicted here. For high-energy gamma rays, the main absorbers are infrared (IR) fields and the cosmic microwave background (CMB), but their effects are only relevant for cosmological distances.
## 6 Application to NGC 4190 ULX 1
Ultraluminous X-ray sources (ULXs) are extragalactic point-like objects where the luminosity in the X-ray band appears to be higher than the Eddington luminosity (Bacheti 2016). ULXs are thought to be X-ray binaries with a stellar-mass compact object accreting at super-Eddington rates, where a beaming effect could be responsible for the luminosity observed in the X-ray band: the radiation emitted from the inner part of the accretion disk is geometrically collimated by the ejected wind, which is optically thick except in a narrow region around the black-hole axis and forms a cone-shaped funnel (King et al. 2001; King 2009; Kaaret et al. 2017; Fabrika et al. 2021).
We apply our model to estimate the radiation emitted by the ultraluminous X-ray source NGC 4190 ULX 1 (also known as CXO J121345.2+363754). Although many characteristics of this ULX remain poorly understood, several authors have explored the system and have provided constraints on some of its parameters (see e.g., Liu & Bregman 2005; Gladstone et al. 2013; Koliopanos et al. 2017; Koscec et al. 2018; Ghosh & Rana 2021).
In what follows, we describe the parameterization of the system and its components, and investigate the expected collision of winds. The complete set of parameters used in this section is detailed in Table 3.
### System parameterization
The source is located in the nearby Galaxy NGC 4190 at a distance of \(d\approx 3\) Mpc (Tully et al. 2013). Observations made in 2010 using the _XMM-Newton_ telescope reveal a long-term spectral variability in the 0.3-10.0 keV energy range: \(L_{\rm X}\sim 3-8\times 10^{39}\) erg s\({}^{-1}\).
The angle \(i\) between the line of sight and the \(z\)-axis at which the disk of a ULX is observed determines the components of its spectrum: blackbody disk (BB) or Comptonization. If \(i\) is small, the observer is able to look into the funnel and see the innermost part of the disk: the spectrum shows only the BB component, which corresponds to thermal emission of the disk. This type of spectrum is called broadened disk (BD). If \(i\) is sufficiently large, another effect is observed: the interaction between photons and wind particles near the disk surface induces a Comptonization that produces a hardening in the spectrum. Most ULXs exhibit a combination of both phenomena in their X-ray spectrum.
Ghosh & Rana (2021) investigated the spectral properties of NGC 4190 ULX 1 and suggested that the ULX is in a BD state, and that the compact object is a BH with mass \(\sim 10-30M_{\odot}\) accreting at super-Eddington rates. We fit the _XMM-Newton_ observations (Epoch 3) with the supercritical advection-dominated disk model detailed in Sect. 2.1, assuming a mass-accretion rate of \(\dot{M}_{\rm input}=10\dot{M}_{\rm Edd}\). We also assume a face-on inclination \(i\approx 0^{\circ}\), a BH mass \(10M_{\odot}\) and a geometrical beaming factor \(b=0.07\). This factor is given by,
\[b=\Omega/4\pi=0.5(1-\cos\theta), \tag{39}\]
where \(\Omega\) is the solid angle of the emission. The angle \(\vartheta\) is related to the opening angles of the disk (\(\delta\)) and its wind (\(\theta\)): \(\vartheta+\delta+2\theta=90^{\circ}\). Both angles, \(i\) and \(\vartheta\), can change over time, causing the spectral variability of the object (Fabrika et al. 2021).
On the other hand, Gladstone et al. (2013) provided constraints on the characteristics of the optical counterpart of the system. They suggested that, if \(M_{\rm BH}=10M_{\odot}\), the mass of the star could be \(<50M_{\odot}\) and its radius \(<86R_{\odot}\). We choose a star of type B2V for our model in light of one of the fittings these latter authors made from _Hubble Space Telescope_ observations. If we apply Eq. 1 and consider the mass ratio \(M_{*}/M_{\rm BH}\), and the stellar radius involved (see Table 3), the transfer of mass in the binary system occurs for an orbital semi-axis \(a\leq 15.2\,R_{\odot}\), which results in a period \(\leq 38\) h.
### Collision of winds
The terminal velocity of the disk-driven wind is \(v_{\rm dw}=4.95\times 10^{9}\) cm s\({}^{-1}\), and therefore \(L_{\rm K}^{\rm dw}=1.5\times 10^{39}\) erg s\({}^{-1}\), while \(L_{\rm K}^{*}=2.17\times 10^{34}\) erg s\({}^{-1}\). The SP is located near the stellar surface and the wind of the disk completely suppresses the stellar wind. We therefore only take into account the reverse shock (RS). As \(r_{*}\approx R_{*}\), the magnetic field at the CD is \(B_{\rm CD}\approx B_{*}\).
The cooling length of the RS is \(R_{\rm A}=2.2\times 10^{13}\) cm and the size of the acceleration region is \(\Delta x_{\rm ac}=6.68\times 10^{10}\) cm; therefore, the shock is adiabatic and the acceleration efficiency of the process is \(\eta_{\rm ac}=10^{-2}\), as in our general models. We calculate the energy gain and losses of the shock particles following Sect. 4. Highly relativistic protons escape from the acceleration region without cooling, as in our previous scenarios (with energies up to \(E_{\rm p}\approx 1\) PeV), and are injected into the ISM. Electrons cool mainly through IC and synchrotron mechanisms. Figure 7 shows the timescales of electrons, which reach a maximum energy of \(E_{\rm e}\approx 0.32\) TeV. To obtain the electron distribution, we solve the transport equation taking into account only IC and synchrotron losses, and a power-law injection function with a spectral index of 2.2 and an exponential cutoff.
### Total SED
The SED of the ULX spans a broadband energy range. Figure 9 shows the thermal (wind and accretion disk) and nonthermal (colliding-winds shock) contributions of the system. We also show the sensitivity of the instruments ALMA, VLA (sub-mm waves), _Fermi_, and CTA (gamma rays), and observational data from _XMM-Newton_.
The luminosity in the IR band is \(\sim 10^{34}\) erg s\({}^{-1}\), which is relatively strong, though still undetectable at megaparsec distances. The luminosity in gamma-rays also reaches \(\sim 10^{34}\) erg s\({}^{-1}\). The attenuation factor (Fig. 8) has an effect on photons with energies \(\gtrsim 1\) GeV. Most of the radiation above 1 GeV and all above 50 GeV is suppressed by the annihilation of the \(\gamma\) rays with the photon fields of the disk-driven wind and the star.
## 7 Discussion
Our analysis of supercritical colliding wind binaries shows that these systems should exhibit broadband emission from radio to gamma rays. In this sense, they are similar to CWBs formed by two hot stars, such as O+WR binaries. However, there are important differences as well. If we compare our models with recent models of O+WR CWBs (Pittard et al., 2021), we find that (i) in SCWBs, the wind of the disk is far more powerful than the wind of the star. This results in stagnation points that are very close to the surface of the star. Efficient particle acceleration then can only occur in reverse shocks. (ii) We also see that the disk wind advects protons from the acceleration region before they have time to cool. Only electrons can cool locally. The resulting SED is consequently dominated by synchrotron and IC radiation. (iii) As the acceleration region is close to the star, the local magnetic field is relatively strong. Synchrotron emission reaches energies of hundreds of keV. As the medium is far more dense than in stellar CWBs, free-free absorption causes this radiation to turnover below \(\sim 24\,\mathrm{GHz}\). The total power at millimeter (mm) and submm wavelengths can be between three and five orders of magnitude higher in SCWBs than in stellar CWBs. (iv) IC is the dominant radiation mechanism at high energies. The stronger thermal fields of SCWBs (wind photosphere and star) provide the seed photons, but also impose a high-energy cutoff at \(\sim 1\) GeV through \(\gamma-\gamma\) attenuation. Instead, stellar CWBs can reach energies close to 1 TeV. (v) The strong magnetic fields in the acceleration region cut electromagnetic cas
Figure 8: Attenuation factors due to \(\gamma\gamma\)-annihilation between high-energy nonthermal radiation and photon fields from the star and from the photosphere of the disk-driven wind in NGC 4190 ULX 1. The total attenuation is plotted with a black line.
Figure 6: Thermal and nonthermal SEDs of the four scenarios considered, S1–S4, in logarithmic scale, where a face-on inclination is assumed. S1 and S3 are shown in the left plot, whereas S2 and S4 are shown in the right plot. Dashed lines correspond to S1 (left) and S2 (right), solid lines correspond to S3 (left) and S4 (right). We plot the nonattenuated inverse Compton contributions in gray. The emission peak at high energies is \(\sim 10^{33}\) erg s\({}^{-1}\) for S1 and S2, and \(\sim 10^{34}\) erg s\({}^{-1}\) for S3 and S4. The gamma-ray absorption due to \(\gamma\gamma\) annihilation is total for energies \(>10\) GeV.
Figure 7: Timescales in logarithmic scale of the electron acceleration, escape, and cooling at the reverse shock in NGC 4190 ULX 1. Electrons reach a maximum energy of \(\approx 0.32\) TeV. The acceleration efficiency is \(10^{-2}\).
cades in SCWBs. (vi) The SED is always dominated by the X-ray component associated with the disk or its wind in SCWBs. Finally, (vii) stellar CWBs have wider orbits and a variable separation between the components of the system. This produces variability related to the orbital period. On the contrary, the orbits of SCWBs should be mostly circularized. In general, CWBs are weaker than SCWBs, although span a broader energy range.
An interesting feature of SCWBs is their potential as cosmic ray sources. As mentioned, the strong wind of the disk drags away the relativistic protons before they cool. These protons, with maximum energies of the order of 1 PeV, are then injected into the ISM where they diffuse. Even if a fraction of just \(\sim 1\) % of the wind kinetic power goes to relativistic protons, the cosmic ray output of a SCWB would be in the range \(10^{37}\)[39; 38] erg s\({}^{-1}\). These protons might interact with ambient clouds at some distance from the system, producing gamma rays through \(pp\rightarrow\pi^{0}+pp\) interactions and the subsequent pion decays \(\pi^{0}\rightarrow\gamma\gamma\). The gamma-ray emission from the illuminated clouds can be even stronger than the emission from the binary itself. However, the spectrum should be softer because of propagation effects (Aharonian & Atoyan 1996). Recent modeling by Pittard et al. (2021) of particle acceleration in colliding wind binaries with wind velocities of a few \(10^{3}\) km s\({}^{-1}\) and mG magnetic fields in the acceleration region demonstrate that up to \(\sim 30\) % of the wind power can be transferred to nonthermal particles. This means that, in some extreme cases, a SCWB might inject up to \(\sim 10^{40}\) erg s\({}^{-1}\) in cosmic rays.
Another type of CWB is the so-called gamma-ray binary (GRB; e.g., LS 5039, PSR B1259-63, LS1 +61\({}^{\circ}\) 303, PSR J2032+4127, and others; see, e.g., Dubus 2013; Chernyakova & Malyshev 2020). These sources are formed by a massive star (usually a Be star with a dense equatorial decretion disk and a fast wind) and a young pulsar in an eccentric orbit. The pulsar ejecta a relativistic pair wind. The wind collision produces a broadband spectrum from electrons accelerated at the shock that cool by synchrotron and IC radiation. The two-peak SEDs are similar to those we estimate for SCWBs, but some differences are also clearly seen: (i) GRBs are less energetic because the spin-down luminosity of the pulsar is much smaller than the power of a supercritical wind. (ii) GRBs are highly variable. This variability is modulated with the orbital period. The orbital modulation of the different components of the broadband spectrum is a consequence of the orbital variability of geometrical parameters, such as the geometry of the contact surface of the stellar and pulsar winds. Absorption effects are also strongly variable. (iii) Hadronic interactions are likely when the pulsar crosses the equatorial disk of the star (e.g., Bykov et al. 2021). (iv) GeV flares have been observed after the periastron passage in sources such as PSR B1259-63 (Abdo et al. 2011; Chernyakova et al. 2014). These flares are attributed to the effects of the unshocked pulsar wind interaction with photons from the stellar disk (e.g., Khangulyan et al. 2012).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Parameter & Symbol & Value & Units \\ \hline System & & & \\ \hline Inclination\({}^{(1)}\) & \(i\) & 0 & \({}^{\circ}\) \\ Orbital semi-axis\({}^{(2)}\) & \(a\) & 15 & \(R_{\odot}\) \\ Distance to the source\({}^{(3)}\) & \(d\) & 3 & Mpc \\ \hline Black hole & & & \\ \hline Mass\({}^{(1)}\) & \(M_{\rm BH}\) & 10 & \(M_{\odot}\) \\ Gravitational radius\({}^{(2)}\) & \(r_{\rm g}\) & \(1.48\times 10^{6}\) & cm \\ \hline Accretion disk & & & \\ \hline Disk semi opening angle\({}^{(1)}\) & \(\delta\) & 30 & \({}^{\circ}\) \\ Critical radius\({}^{(2)}\) & \(r_{\rm crit}\) & \(3.5\times 10^{9}\) & cm \\ Eddington accretion rate & \(\dot{M}_{\rm tidal}\) & \(2.2\times 10^{-7}\) & \(M_{\odot}\) yr\({}^{-1}\) \\ Mass accretion rate\({}^{(1)}\) & \(\dot{M}_{\rm upper}\) & \(2.2\times 10^{-6}\) & \(M_{\odot}\) yr\({}^{-1}\) \\ Mass loss in winds\({}^{(1)}\) & \(\dot{M}_{\rm low}\) & \(1.98\times 10^{-6}\) & \(M_{\odot}\) yr\({}^{-1}\) \\ Wind velocity\({}^{(2)}\) & \(v_{\rm low}\) & \(4.95\times 10^{9}\) & cm s\({}^{-1}\) \\ Wind semi opening angle\({}^{(2)}\) & \(\theta\) & 14.5 & \({}^{\circ}\) \\ Beaming factor\({}^{(2)}\) & \(b\) & 0.07 & — \\ \hline B2V Star & & & \\ \hline Mass\({}^{(4)}\) & \(M_{*}\) & 8 & \(M_{\odot}\) \\ Radius\({}^{(4)}\) & \(R_{*}\) & 5.4 & \(R_{\odot}\) \\ Temperature\({}^{(4)}\) & \(T_{\rm eff}\) & 20600 & K \\ Mass loss in winds\({}^{(4)}\) & \(M_{*}\) & \(1.4\times 10^{-7}\) & \(M_{\odot}\) yr\({}^{-1}\) \\ Wind velocity\({}^{(4)}\) & \(v_{\rm esc}\) & \(7\times 10^{7}\) & cm s\({}^{-1}\) \\ Rotation velocity\({}^{(1)}\) & \(v_{\rm esc}^{\rm rec}\) & \(7\times 10^{6}\) & cm s\({}^{-1}\) \\ Magnetic field\({}^{(5)}\) & \(B_{*}\) & 200 & G \\ \hline Colliding winds & & & \\ \hline Kinetic power of disk-driven wind\({}^{(2)}\) & \(L_{\rm K}^{\rm dw}\) & \(1.5\times 10^{39}\) & erg s\({}^{-1}\) \\ Kinetic power of stellar wind\({}^{(2)}\) & \(L_{\rm K}^{\rm w}\) & \(2.17\times 10^{34}\) & erg s\({}^{-1}\) \\ Distance from BH to SP\({}^{(2)}\) & \(r_{\rm BH}\) & \(6.68\times 10^{11}\) & cm \\ Size of acceleration region\({}^{(1)}\) & \(\Delta x_{\rm acc}\) & \(6.68\times 10^{10}\) & cm \\ Magnetic field at SP\({}^{(2)}\) & \(B_{\rm Bw}\) & 200 & G \\ Injection spectral index\({}^{(1)}\) & \(p\) & 2.2 & – \\ Acceleration efficiency\({}^{(2)}\) & \(\eta_{\rm ex}\) & \(10^{-2}\) & – \\ Molecular mean weight\({}^{(1)}\) & \(\mu\) & 0.6 & – \\ \hline Reverse shock & & & \\ \hline Velocity\({}^{(2)}\) & \(v_{\rm RS}\) & \(4.4\times 10^{9}\) & cm s\({}^{-1}\) \\ Temperature\({}^{(2)}\) & \(T_{\rm RS}\) & \(10^{10}\) & K \\ Cold matter density\({}^{(2)}\) & \(n_{\rm RS}\) & \(6.9\times 10^{11}\) & cm\({}^{-3}\) \\ Cooling length\({}^{(2)}\) & \(R_{\rm A}\) & \(2.2\times 10^{13}\) & cm \\ \hline \end{tabular} 1
\end{table}
Table 3: Parameters of NGC 4190 ULX 1.
Figure 9: Thermal and nonthermal SEDs of NGC 4190 ULX 1 in logarithmic scale (dashed lines). The nonthermal SED is partially attenuated for energies \(>\) 1 GeV and totally attenuated for energies \(>\) 50 GeV due to annihilation of \(\gamma\)-rays with the photon fields of the star and the photosphere of the disk-driven wind. The gray dashed lines are the nonattenuated IC contributions. The total SED is plotted with a solid black line. Data from _XMM-Newton_ (Epoch 3), and the sensitivity of ALMA, _Fermi_, VLA, and CTA are also shown (instrument sensitivities were taken from Sotomayor & Romero 2022).
We finally mention that some black holes accreting at supercritical rates seem to be capable of launching mildly relativistic jets. A remarkable case in our Galaxy is the notorious microquasar SS433 (Fabrika, 2004). This object resembles a ULX source seen edge on (Begelman et al., 2006). The accretion rate should be extremely high in order to explain the large jet power \(L_{\rm K}\sim~{}10^{40}\) erg s\({}^{-1}\). Begelman et al. (2006) suggest rates of \(\sim 5\times 10^{3}~{}M_{\rm Edd}\sim 5\times 10^{-4}~{}M_{\odot}\,{\rm yr}^{-1}\), which are consistent with estimates of equatorial mass outflows inferred from radio observations (Blundell et al., 2001). These outflows, ejected toward either side of the jets, present a thermal spectrum and might well correspond to the radiation-driven wind of the hypercritical disk. The contamination from the jet base makes it impossible to disentangle contributions from colliding winds from those coming from the jet. However, the equatorial outflow might propagate well beyond the system and reveal itself if it collides with any clouds. The shock generated in the collision would convert the kinetic energy of the plasmoids into internal energy and relativistic particles, which might then cool by \(pp\) interactions with the cloud material. Such a scenario might explain the detection of a GeV source by the \(Fermi\) satellite on the side of SS433 (Bordas, 2020; Li et al., 2020). We will explore the details of this hypothesis elsewhere.
## 8 Summary and conclusions
We explored the consequences of supercritical accretion in binary systems consisting of a hot star and a black hole. We find that a fraction of the kinetic power of the radiation-driven wind released by the accretion disk is transformed into relativistic particles in the region of the wind that collides with the star. Electrons are cooled locally, mainly through synchrotron and inverse Compton radiation. The radiation fields of the star and wind photosphere provide abundant thermal photons for the latter process; they also absorb high-energy radiation above a few GeV. Free-free absorption imposes a high-frequency turnover in the radio regime, suppressing centimeter radio waves, unlike the case of colliding wind binaries. The relativistic protons are blown away by the wind before they can cool down significantly. Once trapped by the outflow, these protons are transported to outer regions where they can interact with ambient gas away from the binary system, producing hadronic gamma-rays. Our most important finding is that, in addition to being strong thermal UV and X-ray sources, supercritical colliding wind binaries can be significant nonthermal sources at mm wavelengths and GeV energies.
###### Acknowledgements.
The authors thank the anonymous referee for a careful and constructive review, and for hisher comments that improved this work. We thank also Daniela Perez and Jiri Horki for fruitful discussions. This work was supported by grant PIP 0554 (CONICET). LA acknowledges the Universidad Nacional de La Plata for the education received. GER acknowledges the support from the Spanish Ministry de Ciencia e Innovacion (MICINN) under grant PID2019-105510GBC31 and through the Center of Excellence Mara de Macerau 2020-2023 award to the ICCUB (CEX2019-000918-M).
|
2309.02248 | Encoding Seasonal Climate Predictions for Demand Forecasting with
Modular Neural Network | Current time-series forecasting problems use short-term weather attributes as
exogenous inputs. However, in specific time-series forecasting solutions (e.g.,
demand prediction in the supply chain), seasonal climate predictions are
crucial to improve its resilience. Representing mid to long-term seasonal
climate forecasts is challenging as seasonal climate predictions are uncertain,
and encoding spatio-temporal relationship of climate forecasts with demand is
complex.
We propose a novel modeling framework that efficiently encodes seasonal
climate predictions to provide robust and reliable time-series forecasting for
supply chain functions. The encoding framework enables effective learning of
latent representations -- be it uncertain seasonal climate prediction or other
time-series data (e.g., buyer patterns) -- via a modular neural network
architecture. Our extensive experiments indicate that learning such
representations to model seasonal climate forecast results in an error
reduction of approximately 13\% to 17\% across multiple real-world data sets
compared to existing demand forecasting methods. | Smit Marvaniya, Jitendra Singh, Nicolas Galichet, Fred Ochieng Otieno, Geeth De Mel, Kommy Weldemariam | 2023-09-05T13:58:59Z | http://arxiv.org/abs/2309.02248v1 | # Encoding Seasonal Climate Predictions for Demand Forecasting with Modular Neural Network
###### Abstract
Current time-series forecasting problems use short-term weather attributes as exogenous inputs. However, in specific time-series forecasting solutions (e.g., demand prediction in the supply chain), seasonal climate predictions are crucial to improve its resilience. Representing mid to long-term seasonal climate forecasts is challenging as seasonal climate predictions are uncertain, and encoding spatio-temporal relationship of climate forecasts with demand is complex. We propose a novel modeling framework that efficiently encodes seasonal climate predictions to provide robust and reliable time-series forecasting for supply chain functions. The encoding framework enables effective learning of latent representations--be it uncertain seasonal climate prediction or other time-series data (e.g., buyer patterns)--via a modular neural network architecture. Our extensive experiments indicate that learning such representations to model seasonal climate forecast results in an error reduction of approximately 13% to 17% across multiple real-world data sets compared to existing demand forecasting methods.
## Introduction
The significant disruption caused by climate variability--be it seasonal (e.g., warmer winters) or extreme events (e.g., heatwaves)--within a supply chain affects its resilience: from demand management to inventory planning [1], [2]. The literature on climate-aware forecasting has highlighted many impactful real-world applications: from creating option plans for pre-season planning [3], [4], energy and utility industries [5], [6], to the manufacturing industry [7].
Today, most retailers recognize the impact of weather in their demand forecast [8], [9] and use short-term weather forecasts (e.g., a week ahead) while predicting demand or employ de-weatherization techniques to understand weather-driven demand patterns [10]. In order to effectively perform demand management or inventory planning, decision-makers require accurate and reliable forecasting w.r.t. temporal and spatial coverage [8], [11]. This is especially critical when considering infusing seasonal-scale forecasting into decision-making workflow processes. In such situations, demand forecasting assesses the seasonal climate prediction [12] and uses them to predict demand for multiple steps in the future. However, climate-aware seasonal-scale demand forecasting is challenging for time-series machine learning since accurately encoding climate variability for demand forecasting is complex [13].
There are two key technical challenges in seasonal-scale climate-aware forecasting: (1) how to represent mid to long-term seasonal climate predictions with uncertainty, and (2) how to encode spatio-temporal relationship of climate forecasts with demand. Fig. 1 illustrates these challenges using a real-world scenario in which the goal is to predict the sales of a set of products across spatial domain using seasonal-scale climate predictions. While Figs. 1(a) and (b) highlight how product sales are spread across locations and times, the seasonal-scale climate predictions have implicit uncertainty with complex spatio-temporal dependency as shown in Fig. 1(c). Modern deep learning techniques can be adapted to address these challenges up to some extent, for example, by treating weather and climate forecasts as an added exogenous variable in the demand prediction stack [14], [15]. Based on our first-hand practical experiences in an industrial setting, due to the high degree of uncertainty embedded in climate prediction ensembles, simply considering such data as exogenous variables at the input layer makes demand predictions erroneous and unreliable for decision-making purposes. Therefore, we need robust models that account for local behaviour across spatial domains and uncertain seasonal-scale predictions.
In this paper, we present a novel modeling framework to address the challenge of encoding noisy seasonal-scale climate forecasts for demand prediction tasks. It features a compact representation of uncertain seasonal climate forecasts such that it
Figure 1: Dimensions of the spatio-temporal seasonal-scale climate-aware forecasting problem. (a) Geography, (b) Product sales across stores, and (c) Uncertain climate predictions.
helps, e.g., in enabling climate resilience in the supply chain by improving demand management, inventory planning, and so forth. As a first step, we extract a set of derived use case-inspired climate features that capture future seasonal climate conditions and the uncertainty associated with these forecasts. We then learn a set of temporal encoders to represent these uncertain climate forecasts with a compact latent representation that captures their uncertainties. We accomplish this by jointly learning a time-series forecasting model and a latent representation using a set of temporal encoders. We summarize our contributions as below:
* We design a modular neural network structure that accommodates different feature types, uncertainty associated with seasonal climate forecasts, and variable-length temporal window sizes based on the availability of the data (e.g. three months of temperature forecast at weekly frequency, one month of precipitation at daily frequency, and so forth).
* We propose a novel technique that learns the latent representations of uncertain seasonal climate forecasts, historical observations, and known inputs (e.g., holidays) for seasonal-scale climate-aware forecasting.
* We show the effectiveness of the climate-aware demand predictions using two different types of climate encoding techniques (Sec. and Sec. ) on real-world datasets from the supply chain domain: a public retail dataset and two large-scale retail industry datasets.
## Motivation and Related Work
Seasonal retail demand is affected by many factors: climate conditions (e.g., temperature, precipitation, humidity), promotional schemes, seasonal events, and so forth. In climate, a range of forecasts for each climate variable is produced by varying initial conditions of climate models that perform multiple simulations, making predictions uncertain. For example, seasonal-scale forecasts from The European Centre for Medium-Range Weather Forecasts (ECMWF) [16] contain 50 ensembles for each climate attribute up to six months in the future, which gets updated every month.
The complexity of climate data can be reduced by conceptualizing the data into trend and noise components [17]. However, modeling such climate data in time-series forecasting is challenging as latent representations need to deal with different types of noise present in seasonal-scale climate predictions. Several approaches have been considered in the past for time-series forecasting in the presence of noise. These approaches can be broadly classified into two categories: classical time-series forecasting and deep learning-based time-series forecasting.
**Classical Time-series Forecasting:** This consists of more classical approaches for modeling time series by including components for modeling level, trend and seasonality. Example of these classical approaches are support vector regression [18], ensemble models [19, 20, 21], exponential smoothing [22], and the Box-Jenkins family of ARIMA [23, 24, 22]. These perform the prediction by using a weighted linear sum of recent historical lags or observations. There are methods such as [25, 26] which
decompose time-series data into a seasonal, trend, and noise components and model them separately to improve forecast accuracy. However, these traditional approaches do not specifically investigate the latent representation learning of seasonal climate predictions for climate-aware forecasting, nor are they suitable for encoding ensemble data representing different levels of uncertainty.
**Deep Learning (DL) based Time-series Forecasting:** In the recent past, DL based approaches have dominated those traditional approaches by providing superiority in terms of modeling complex structures and interdependence between groups of series [27]. Recent works have focused on various deep neural networks such as Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) for multivariate time-series forecasting [28], [29], including temporal attention [30, 31, 32], dilated CNNs [33], [34], temporal CNNs [35, 36], multivariate attention LSTM-FCN [36, 37], and transformer models [38, 14, 39].
Lim et al. [14] proposed a sequence to sequence temporal fusion technique for representing historical data and known input into a latent space and combining them with static covariate encoders and variable selection networks to identify the relevant features for multi-horizon forecasting. In [30] a temporal attention technique is used to extract time-invariant temporal patterns using CNN for multivariate time-series forecasting. DeepAR [15] performs probabilistic forecasting by training an auto-regressive recurrent neural network model that incorporates negative binomial likelihood to deal with significant variation in time-series data. Ekambaram et al. [40] propose an attention-based multi-modal encoder-decoder model for retail time-series forecasting. In [41] a multifactor attention model is considered for capturing external factors such as short-term historical weather, social media trends, and so forth for predicting retail demand. However, none of these approaches provides a systematic way to model uncertainty associated (due to noise and spatio-temporal variability) with seasonal-scale climate predictions.
While some DL techniques (e.g., [14, 15]) and classical statistical methods (such as [18, 26]) can be repurposed to model seasonal-scale climate forecasts as added exogenous features in the demand prediction stack, we did not come across any work that focuses on learning latent representations based on these features with their associated uncertainties.
## Appendix A Framework for Climate-aware Forecasting
In this section, we describe the proposed framework for encoding multiple different types of geospatial-temporal data such as seasonal climate predictions, historical observation data, prediction of extreme events, etc. for demand forecasting. Our goal is to develop compact representations of uncertain seasonal climate predictions along with other data sources which may have different availability of the data such as three months of temperature forecast at daily frequency, one month of precipitation at weekly frequency, etc. and use them in time series forecasting. Fig. 2 (a) shows the high-level overview of the proposed framework that models various types of time-series data such as historical observations, seasonal climate predictions, non-climate exogenous data, etc. for climate-aware demand forecasting. A set of different temporal encoders are shown in Fig. 2 (b)
that learns the latent representation from each individual time-series data which may require varying model complexity levels.
The problem of seasonal-scale time-series forecasting is defined in terms of a cost function that minimizes the error in the multi-horizon forecasts at each _product_ (\(\mathrm{p_{m}}\)) and store combination. In this paper, _store_ (\(\mathrm{s_{n}}\)) designates any node in a supply chain; for instance, it can be a warehouse or distribution center. The model's forecast is given by:
\[\begin{split}\hat{\mathbf{y}}_{\mathrm{t+\tau}}(\mathrm{s_{n}}, \mathrm{p_{m}},\mathrm{t},\tau,q)=\\ \mathbf{f}^{\mathrm{pred}}(q,\mathbf{y}_{\mathrm{t-k:t}},\mathbf{ X}_{\mathrm{i,t-k:t}}^{\mathrm{o}},\mathbf{X}_{\mathrm{i,t-k:t+\tau}}^{\mathrm{k}}, \mathbf{X}_{\mathrm{i,t-k:t+\tau}}^{\mathrm{c}})\end{split} \tag{1}\]
where \(\mathbf{X}_{\mathrm{i}}^{\mathrm{o}}\) is a set of historical observations (e.g. sales data), \(\mathbf{X}_{\mathrm{i}}^{\mathrm{k}}\) is a set of known inputs (e.g. holidays), \(\mathbf{X}_{\mathrm{i}}^{\mathrm{c}}\) is a set of climate predictions (e.g., min, avg, max temperature), q is the quantile, and \(\hat{\mathbf{y}}_{\mathrm{t+\tau}}(\mathrm{s_{n}},\mathrm{p_{m}},\mathrm{t},\tau)\) is the prediction of \(\tau\)-step ahead forecast at time \(\mathrm{t}\).
Importantly, in Eq. 1, time series data for historical, known and climate data are treated differently. Historical data are only available up to time step \(\mathrm{t}\), but up to \(\mathrm{t+\tau}\) for known and climate forecast data. \(\mathbf{f}^{\mathrm{pred}}\) is a prediction model that includes a set of climate and non-climatic encoders for learning the latent representations. We introduce two such prediction models each associated with a specific latent representation for climatic and non-climatic time series. Next, we present these two prediction models: sub-neural network latent representation and a transformer-based latent representation.
Figure 2: (a) The overall diagram of the proposed climate-aware demand forecasting framework. (b) An example of different types of temporal encoders to deal with different types of input features.
### Latent representation learning using sub-neural networks (LRL-SNN)
We consider a set of time series defined by \(\mathbf{X}=[\mathbf{X}^{\mathrm{o}},\mathbf{X}^{\mathrm{c}},\mathbf{X}^{\mathrm{k}}]\) with \(\mathrm{o}\) being the time series of historical observations, \(\mathrm{c}\) the time series of climate predictions and \(\mathrm{k}\) the known time series. For \(l\in\{\mathrm{o},\mathrm{c},\mathrm{k}\}\), we denote \(\mathbf{I}^{\mathrm{l}}\) the index set such that \(\mathbf{X}^{\mathrm{l}}=\bigcup_{i\in I^{\mathrm{l}}}\mathbf{X}^{\mathrm{l}}_ {\mathrm{i}}\). Finally, we define for each \(l\in\{\mathrm{o},\mathrm{c},\mathrm{k}\}\), an offset \(\tau_{l}\in\mathbb{N}^{\star}\).
At time \(t\) and for label \(l\in\{\mathrm{o},\mathrm{c},\mathrm{k}\}\), we define for \(i\in\mathbf{I}^{\mathrm{l}}\) the window \(w^{l}_{i}(t)\) as:
\[\mathrm{w}^{l}_{i}(t)=(x^{l}_{i,1},x^{l}_{i,2},...,x^{l}_{i,t+\tau_{l}}) \tag{2}\]
with \(\mathbf{X}^{l}_{i}=\{x^{l}_{i,1},\ldots,x^{l}_{i,T}\}\)
For historical observations, climate and known data, \(\mathbf{I}^{\mathrm{l}}\) index a time series of interest--e.g., for historical time series, \(\mathbf{I}^{\mathrm{o}}\) = \(\{\mathbf{P}_{\mathrm{sales}}\), \(\mathbf{P}_{\mathrm{price}}\}\) represents the historical sales or product prices. For climate windows, \(i\) can be included in \(\mathbf{I}^{\mathrm{c}}=\{\mathbf{T}_{\mathrm{min}}\), \(\mathbf{T}_{\mathrm{avg}}\), \(\mathbf{T}_{\mathrm{max}}\), \(\sigma(\mathbf{T}_{\mathrm{min}})\), \(\sigma(\mathbf{T}_{\mathrm{avg}})\ldots\}\) with the times series representing minimum, average or maximum temperatures or their standard deviation respectively over a given ensemble climate time window. Finally, for known input, \(\mathbf{I}^{\mathrm{k}}\) can be \(\{\mathbf{W}_{\mathrm{nbr}},\mathbf{M}_{\mathrm{nbr}}\}\) and represents the week and month numbers, respectively.
We introduce Differencing and Normalizing layers to efficiently represent numerical features such as seasonal-scale temperature (min, max, avg) forecasts to enable transfer across time series. The Differencing layer captures relative trend within a time-series window, whereas the Normalizing layer helps transform each data point such that it window-normalized, so each input window is of comparable scale across multiple inputs. This way of transforming each time-series data helps in improving the learnability of the forecasting model such that it deals with weather variation across stores within geography. However, this step is optional for a certain type of time-series data that captures the uncertainty of the seasonal forecasts such as standard deviation of temperature min.
**Differencing Layer** For all time window \(\mathrm{w}=(x_{1},\ldots,x_{n})\), differenciated window \(\mathbf{w}_{\mathrm{diff}}\) is defined as \(\mathbf{w}_{\mathrm{diff}}=(x_{2}-x_{1},\ldots,x_{i}-x_{i-1},\ldots,x_{n}-x_{ n-1})\). The procedure can easily be inverted by saving \(x_{1}\).
**Normalizing Layer** For all time window \(\mathbf{w}=(x_{1},\ldots,x_{n})\), we denote \(\mu_{w}\) (resp. \(\sigma_{w}\)) its empirical average (resp. its empirical standard deviation, without Bessel's correction). The normalized window \(\mathbf{w}_{\mathrm{norm}}\) is defined \(\mathbf{w}_{\mathrm{norm}}=\{\frac{x_{1}-\mu_{w}}{\sigma_{w}},\ldots,\frac{x_ {i}-\mu_{w}}{\sigma_{w}},\ldots,\frac{x_{n}-\mu_{w}}{\sigma_{w}}\}\). Normalization can be inverted by transmitting \(\mu_{w}\) and \(\sigma_{w}\).
For all temporal data window \(\mathbf{w}(\mathrm{l},\mathrm{i},\tau_{l})\), we have the following successions, from time series data to prediction:
\[\forall l\in\{\mathrm{o},\mathrm{c},\mathrm{k}\},\forall i\in \mathbf{I}^{\mathrm{l}},\mathrm{d}(\mathrm{l},\mathrm{i},\tau_{l}) = \mathbf{w}_{\mathrm{diff}}(\mathbf{w}(\mathrm{l},\mathrm{i},\tau_{ l}))\] (3) \[\forall l\in\{\mathrm{o},\mathrm{c},\mathrm{k}\},\forall i\in \mathbf{I}^{\mathrm{l}},\mathbf{V}(\mathrm{l},\mathrm{i},\tau_{l}) = \mathbf{w}_{\mathrm{norm}}(\mathrm{d}(\mathrm{l},\mathrm{i},\tau_{ }))\] (4) \[\forall l\in\{\mathrm{o},\mathrm{c},\mathrm{k}\},\forall i\in \mathbf{I}^{\mathrm{l}},\mathbf{h}^{\mathrm{l}}_{\mathrm{i}} = \mathbf{TE}^{\mathrm{l}}_{\mathrm{i}}(\mathbf{V}(\mathrm{l},\mathrm{i },\tau_{l}))\] (5) \[\mathbf{H} = \underset{i\in\mathbf{I}^{\mathrm{o}}}{\overset{\mathrm{l}}{ \underset{i\in\mathbf{I}^{\mathrm{o}}}{\underset{i\in\mathbf{I}^{\mathrm{c}}}{ \underset{i\in\mathbf{I}^{\mathrm{c}}}{
with \(\rm{\mbox{$+\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
for seasonal-scale forecasting. The variable section network and temporal attention mechanism help in modeling the noise present in seasonal-scale climate prediction. Compared to TFT, however, LRL-SNN dedicates a separate temporal encoder for each type of time series (historical, climate, and known). This adds a degree of flexibility in the design of the encoders (e.g., configuration of neural networks) such that it supports different feature types and uncertainty associated with them.
## Experiments
This section first demonstrates the effectiveness of seasonal-scale climate-aware demand forecast using a publicly available grocery retail dataset and two large-scale proprietary retail datasets. We then discuss the ablation study to evaluate the effectiveness of our proposed model. Below we provide a brief description of the datasets used.
**Favorita Grocery Retail Dataset (Ecuador)**: The Corporcion Favorita is a retail chain with stores located throughout Ecuador. This publicly available dataset consists primarily of grocery items and various (non-apparel) consumer goods such as automotive and household accessories [43]. Most of the dataset consists of perishable food items strongly affected by temperature (and humidity).
**Gear Apparel Retail Dataset (USA)**: The outdoor gear and apparel retail (Gear Apparel Retail) dataset consists of a chain of stores and distribution centers (DCs) distributed across the USA. Between 50% - 70% of the dataset contains apparel items with a strong seasonal dependence in the USA. Both the chain of stores and DCs also served as order fulfillment nodes for online purchases.
**Apparel Retail Dataset (India)**: The apparel retail dataset consists of daily sales data from a chain of stores distributed across India. Similar to the Gear Apparel Retail dataset, a sizeable portion of products are seasonal--i.e., over 30% of the products in the dataset consist of items that have demand cycles that have a substantial variance with seasonal changes in India.
Table 1 shows the different characteristics of the datasets such as geography, feature attributes, availability of data period, and so forth. In our experiments, the task is a time-series forecasting task and requires predicting future sales of a product for a given region/store weekly. We use features that are aggregated at a week level while forecasting
\begin{table}
\begin{tabular}{|l|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & **Gear Apparel Retail** & **Apparel Retail** & **Favorita Grocery Retail** \\ \hline
**Target** & Unit Sales & Unit Sales & \(\log\)( Unit Sales ) \\ \hline
**Geography** & USA & India & Ecuador \\ \hline
**\#Features** & 6 & 14 & 23 \\ \hline
**\#Unique Time-series** & 180 & 594 & 645 \\ \hline
**\#Train Samples** & \(\sim\)18k & \(\sim\)52k & \(\sim\)79k \\ \hline
**\#Dev Samples** & \(\sim\)4.5k & \(\sim\)12k & \(\sim\)19k \\ \hline
**\#Test Samples** & \(\sim\)14k & \(\sim\)42k & \(\sim\)61k \\ \hline
**Temperature max (\(\lx@math@degree C\))** & \(\mu=18.4\), \(\sigma=8.3\) & \(\mu=33.4\), \(\sigma=8.2\) & \(\mu=21.0\), \(\sigma=5.3\) \\ \hline
**Temperature avg (\(\lx@math@degree C\))** & \(\mu=13.7\), \(\sigma=7.7\) & \(\mu=27.8\), \(\sigma=7.6\) & \(\mu=17.8\), \(\sigma=5.9\) \\ \hline
**Temperature min (\(\lx@math@degree C\))** & \(\mu=8.8\), \(\sigma=7.5\) & \(\mu=22.3\), \(\sigma=7.6\) & \(\mu=14.6\), \(\sigma=6.6\) \\ \hline
**Time Period** & Sept-2016 to April-2020 & Jan-2017 to May-2020 & Jan-2014 to Aug-2017 \\ \hline \end{tabular}
\end{table}
Table 1: Characteristics of large-scale retail datasets.
demand for 12 weeks. In our experiments, we use seasonal climate predictions from ECMWF S5 seasonal forecast system that contains 50 ensembles for each climate attribute such as temperature (min, max, avg.) and precipitation up to six months in the future [16].
The results are reported in terms of average mean absolute percentage error (MAPE), and average root mean squared error (RMSE). Results are generated on 12-week prediction intervals. The error metrics are computed at a finer granularity of 4 weeks and on the whole 12-week prediction intervals. Decomposing the error analysis helps with understanding the influence of the seasonal-scale climate predictions. While reporting comparative results for these three datasets, we show experiments with (1) latent representation learning using sub-neural networks (LRL-SNN), and (2) Temporal Fusion Transformer [14] with and without climate predictions.
**Experimental Settings:** Table 2 shows model parameters and algorithm settings for TFT and LRL-SNN used in our experiments. As mentioned above, ECMWF provides, for each location, 50 measures: temperatures (min, max, and average) and precipitation. For each of these ensembles, we extract a set of derived features such as mean and standard deviation to represent the uncertainty. For LRL-SNN, the last layers of the climate encoders for mean (\(\mu_{d}\)) and standard deviation (\(\sigma_{d}\)) differ depending on the dataset. For Favorita, the values of \(\mu_{d}\) are [32, 16, 16, 16] for \(\mathbf{T}_{\mathrm{avg}}\), \(\mathbf{T}_{\mathrm{min}}\), \(\mathbf{T}_{\mathrm{max}}\) and \(\mathbf{P}_{\mathrm{avg}}\) (precipitation) respectively. Values of \(\sigma_{d}\) are [16, 8, 8, 8] for \(\sigma(\mathbf{T}_{\mathrm{avg}})\), \(\sigma(\mathbf{T}_{\mathrm{min}})\), \(\sigma(\mathbf{T}_{\mathrm{max}})\) and \(\sigma(\mathbf{P}_{\mathrm{avg}})\). Similarly, for Apparel Retail dataset, values of \(\mu_{d}\) are [32, 16, 16] for \(\mathbf{T}_{\mathrm{avg}}\), \(\mathbf{T}_{\mathrm{min}}\), \(\mathbf{T}_{\mathrm{max}}\) and \(\sigma_{d}\) are [16, 8, 8] for \(\sigma(\mathbf{T}_{\mathrm{avg}})\), \(\sigma(\mathbf{T}_{\mathrm{min}})\), \(\sigma(\mathbf{T}_{\mathrm{max}})\). Finally, for Agar Apparel retail dataset, values of \(\mu_{d}\) are [250, 100, 100] for \(\mathbf{T}_{\mathrm{avg}}\), \(\mathbf{T}_{\mathrm{min}}\), \(\mathbf{T}_{\mathrm{max}}\).
### Favorita Grocery Retail Dataset
We evaluate our models and compare them against one of the state-of-the-art approaches--Temporal Fusion Transformer (TFT) [14]. The comparative results are reported in Table 3. We use log-transformed of sales quantity (cf. Table 1) as a target variable similar to [14]. Irrespective of the models, we see a substantial improvement when we incorporate seasonal climate predictions for retail demand forecasting. We show overall error reduction of 12.85% and 6.47% in MAPE for (LRL-SNN + Climate) and (TFT + Climate) over the non-climate models respectively. Furthermore, compared to TFT, our
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Parameters**} & \multicolumn{2}{c|}{**Favorita Grocery Retail**} & \multicolumn{2}{c|}{**Apparel Retail**} & \multicolumn{2}{c|}{**Gear Apparel Retail**} \\ \cline{2-7} & **TFT** & **LRL-SNN** & **TFT** & **LRL-SNN** & **TFT** & **LRL-SNN** \\ \hline \hline
**Dropout Rate** & 0.1 & 0.2 & 0.1 & 0.2 & 0.1 & 0.2 \\ \hline
**Concatenated FFN** & [240] & [200, 1000] & [160] & [2000, 1000] & [160] & [2000, 1000] \\
**TSFN** & - & [2000, 1000, 240] & - & [2000, 1000, 200] & - & [5000, 2500, 1000, 500] \\
**Climate Encoder (mean)** & - & [512, 256, 128, 64, X] & - & [512, 256, 128, 64, X] & - & [5000, 2500, 1000, X] \\
**Climate Encoder (std)** & - & [512, 256, 128, 64, Y] & - & [512, 256, 128, 64, Y] & - & - \\ \hline
**Millbatch Size** & 128 & 32 & 64 & 16 & 64 & 16 \\ \hline
**Learning Rate** & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.01 \\ \hline
**Window size (\(k\))** & 12 & 12 & 12 & 12 & 12 & 12 \\ \hline
**Prediction interval (\(\tau\))** & 12 & 12 & 12 & 12 & 12 & 12 \\ \hline
**Number of epochs** & 100 & 100 & 100 & 100 & 100 & 100 \\ \hline \end{tabular}
\end{table}
Table 2: Model Configuration Parameters. X and Y denote the last layer size of Temporal Encoders
proposed approach (LRL-SNN + Climate) provides a significant improvement over both the error metrics such as averaged RMSE and averaged MAPE. We can infer from these experimental results that learning a set of temporal encoders based on the levels of data difficulty can consistently outperform transformer-based climate encoding architectures such as TFT.
### Large-scale Industry Datasets (India and USA)
We next evaluate our approach of seasonal-scale climate encoding using temporal encoders by performing a set of experiments on first-of-its-kind large-scale retail industry datasets--i.e., Apparel Retail and Gear Apparel Retail datasets from India and USA, respectively. These datasets have a wide range of retail stores distributed across the geographies with high spatio-temporal climate variability.
#### Apparel Retail Dataset (India)
The Apparel Retail dataset contains mostly retail products such as jacket, sweater, jeans, and so forth across multiple cities in India. Table 4 compares climate-aware demand forecasting error metrics with those of TFT. As can be seen, incorporating climate forecasts as a part of latent representation improves the results significantly for both the error metrics and for TFT and LRL-SNN. Results for LRL-SNN remain competitive, and the use of climate brings the errors lower than the level achieved by TFT. Moreover, the TFT architecture appears to be able to model spatio-temporal climate variability better with the help of LSTM-based encoder-decoder architecture and temporal attention mechanism; such a foreknowledge can especially be helpful in contexts (e.g., India) where the wide variability in time and space occur.
The Gear Apparel Retail dataset includes a large portion of items used for seasonal outdoor sports (e.g., winter jackets and ruggedized bottles for hiking) which are sold across the United States of America (including Alaska). As such, there are significant climate variability from region to region. The goal is to determine if encoded forecast
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c||c|c|} \hline \multirow{2}{*}{**Algorithms**} & \multicolumn{2}{c|}{**week 1-4**} & \multicolumn{2}{c|}{**week 5-8**} & \multicolumn{2}{c||}{**week 9-12**} & \multicolumn{2}{c|}{**Overall**} \\ \cline{2-9} & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** \\ \hline \hline
**TFT** & 18.40 & 1.44 & 21.93 & 4.39 & 24.36 & 6.57 & 23.82 & 4.01 \\ \hline
**TFT + Climate** & **17.00** & 1.29 & **20.07** & **4.05** & **22.24** & **6.04** & **21.70** & **3.69** \\ \hline
**LRL-SNN** & 20.28 & 1.26 & 31.55 & 4.56 & 39.17 & 8.37 & 33.17 & 4.58 \\ \hline
**LRL-SNN + Climate** & 17.15 & **1.11** & 25.37 & 4.09 & 30.87 & 6.66 & 26.62 & 3.83 \\ \hline \end{tabular}
\end{table}
Table 4: Results on large-scale retail industry dataset - Apparel Retail.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c||c|c|} \hline \multirow{2}{*}{**Algorithms**} & \multicolumn{2}{c|}{**week 1-4**} & \multicolumn{2}{c|}{**week 5-8**} & \multicolumn{2}{c||}{**week 9-12**} & \multicolumn{2}{c|}{**Overall**} \\ \cline{2-9} & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** \\ \hline \hline
**TFT** & 0.98 & 0.35 & 1.03 & 0.18 & 0.89 & 0.17 & 1.07 & 0.25 \\ \hline
**TFT + Climate** & 0.87 & 0.32 & 0.92 & **0.17** & **0.74** & **0.16** & 0.93 & 0.23 \\ \hline
**LRL-SNN** & 0.70 & 0.22 & 0.91 & 0.19 & 0.89 & 0.21 & 0.89 & 0.22 \\ \hline
**LRL-SNN + Climate** & **0.64** & **0.19** & **0.87** & **0.17** & 0.85 & 0.18 & **0.85** & **0.19** \\ \hline \end{tabular}
\end{table}
Table 3: Results of the proposed approach on Grocery retail dataset - Favorita.
data in demand models are able to capture the impacts that climate has on sales-impacts like early winters, a late summer, or an extended autumn period. Tables 5, 6 and 7 show the comparative error metrics for distribution centers (DC), stores and both combined respectively across all of the regions in the USA. Overall, climate-aware models such as TFT + Climate and LRL-SNN + Climate tend to have lower errors than climate-agnostic models across DCs and stores. Climate-encoding using LRL-SNN shows significant improvements compared to TFT and TFT + Climate for store-level retail demand forecasting. Whereas TFT + Climate outperforms as compared to other models for DCs.
### Ablation Study
In any climate-aware demand forecasting, we believe that efficiently encoding seasonal climate prediction can further reduce errors. In Table 8, we show that adding seasonal climate predictions in TFT [14] and LRL-SNN helps in reducing, on average, MAPE by about 17% to 21% and RMSE by about 8% and 14%. One can note that Mean Absolute Error (MAE) for Gear Apparel Retail (Store and Combined) has a negative percentage reduction for TFT + Climate. However, this is expected as gear apparel sales are affected by seasonality, and MAE is significantly affected by low numbers. Thus, in this context, RMSE would be the appropriate error metric. This observation evidenced across three geographically distinct datasets attest that explicitly encoding forecasted seasonal climate leads to improved predictions for regional store purchases.
Furthermore, we compare quantitative and qualitative metrics on the Favorita dataset to show the effectiveness of climate-aware forecasting. Fig. 4(a) compares the quantitative errors product-category wise obtained by our framework, with those of [14]
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c||c|c|} \hline \multirow{2}{*}{**Algorithms**} & \multicolumn{2}{c|}{**week 1-4**} & \multicolumn{2}{c|}{**week 5-8**} & \multicolumn{2}{c||}{**week 9-12**} & \multicolumn{2}{c|}{**Overall**} \\ \cline{2-9} & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** \\ \hline \hline
**TFT** & 6.51 & 1.09 & 7.77 & 1.49 & 8.68 & 1.91 & 9.10 & 1.48 \\ \hline
**TFT + Climate** & 6.99 & 1.02 & 7.36 & 1.19 & **7.48** & **1.30** & 7.93 & 1.16 \\ \hline
**LRL-SNN** & **5.41** & **0.68** & **6.95** & 1.06 & 8.07 & 1.43 & 7.63 & 1.04 \\ \hline
**LRL-SNN + Climate** & 5.55 & **0.68** & 6.97 & **1.02** & 7.86 & 1.34 & **7.61** & **1.00** \\ \hline \end{tabular}
\end{table}
Table 6: Results on Gear Apparel Retail dataset for Stores.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c||c|c|} \hline \multirow{2}{*}{**Algorithms**} & \multicolumn{2}{c|}{**week 1-4**} & \multicolumn{2}{c|}{**week 5-8**} & \multicolumn{2}{c||}{**week 9-12**} & \multicolumn{2}{c|}{**Overall**} \\ \cline{2-9} & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** \\ \hline \hline
**TFT** & 15.39 & 1.13 & 16.02 & 1.55 & 16.80 & 2.01 & 20.89 & 1.55 \\ \hline
**TFT + Climate** & 15.28 & 1.02 & **15.38** & 1.23 & **15.63** & **1.37** & **19.50** & 1.20 \\ \hline
**LRL-SNN** & 14.58 & 0.78 & 17.88 & 1.59 & 19.93 & 2.22 & 21.62 & 1.50 \\ \hline
**LRL-SNN + Climate** & **14.71** & **0.76** & 16.51 & **1.20** & 17.75 & 1.62 & 20.68 & **1.18** \\ \hline \end{tabular}
\end{table}
Table 7: Results on Gear Apparel Retail dataset for stores and DCs both combined.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c||c|c|} \hline \multirow{2}{*}{**Algorithms**} & \multicolumn{2}{c|}{**week 1-4**} & \multicolumn{2}{c|}{**week 5-8**} & \multicolumn{2}{c||}{**week 9-12**} & \multicolumn{2}{c|}{**Overall**} \\ \cline{2-9} & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** & **RMSE** & **MAPE** \\ \hline \hline
**TFT** & 94.19 & 1.43 & 89.20 & 2.12 & 88.87 & 2.89 & 125.56 & 2.12 \\ \hline
**TFT + Climate** & **88.88** & **1.05** & **86.52** & **1.59** & **87.94** & **2.01** & **122.12** & **1.52** \\ \hline
**LRL-SNN** & 95.91 & 1.63 & 114.82 & 6.32 & 125.12 & 9.30 & 145.71 & 5.62 \\ \hline
**LRL-SNN + Climate** & 95.94 & 1.52 & 101.14 & 2.79 & 105.53 & 4.11 & 136.63 & 2.74 \\ \hline \end{tabular}
\end{table}
Table 5: Results on Gear Apparel Retail dataset for DCs.
whereas Fig. 4 qualitatively compares the product-category wise effectiveness of climate-aware forecasting with seasonal climate prediction as compared to non-climate model using LRL-SNN. The qualitative metric measures the percentage of scenarios (i.e., product and region combinations) in which the climate-aware model performs better or equal to the climate-agnostic one. We label _Tie_ to show that the climate-aware model outperforms for one error metric but not the other.
## Conclusion
Demand forecasting is a well-studied problem in the time-series domain. In climate-aware demand forecasting scenarios, existing methods do not consider the seasonal climate predictions due to the complexities such as noise, time-series data with different temporal frequencies, and spatio-temporal correlations associated with the climate predictions. In this work, we addressed the problem of seasonal climate-aware demand forecasting by effectively learning joint latent representations of climate predictions, historical observations (e.g., sales figures), and known inputs (e.g., holidays) using a sub-neural network architecture. This way of modeling different types of time-series
Figure 4: (a) Comparative quantitative evaluation (MAPE) on Favorita dataset. (b) Qualitative analysis using LRL-SSN (refer to Sec. ) for details.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Datasets**} & \multicolumn{2}{c|}{**LRL-SNN + Climate**} & \multicolumn{2}{c|}{**TFT + Climate**} \\ \cline{2-7} & **MAPE** & **RMSE** & **MAE** & **MAPE** & **RMSE** & **MAE** \\ \hline \hline
**Grocery Retail - Favorita** & 12.85 & 4.71 & 7.04 & 6.47 & 13.34 & 12.00 \\ \hline
**Apparel Retail** & 16.54 & 19.76 & 20.01 & 8.07 & 8.91 & 9.01 \\ \hline
**Gear Apparel Retail - Store** & 4.03 & 0.24 & 0.33 & 21.5 & 12.81 & -3.5 \\ \hline
**Gear Apparel Retail - DC** & 51.14 & 6.23 & 12.97 & 28.14 & 2.74 & 1.91 \\ \hline
**Gear Apparel Retail - Combined** & 21.86 & 4.33 & 8.25 & 22.42 & 6.68 & -0.57 \\ \hline \end{tabular}
\end{table}
Table 8: Comparative error reduction (%) using climate-aware models.
data and learning joint latent representation enables a higher degree of flexibility in climate-aware demand prediction tasks. The extensive experiments we have performed indicate that the latent representation of seasonal climate predictions leads to enhanced demand forecasting, thus paving the way for improvement in pre-season planning and demand management for supply chain functions.
Given that we have only considered seasonal climate predictions in our current work, we aim to enrich relevant data sources for predictions in our future work; such sources will include incorporating high-impact lag and derived climate forecast features. Moreover, we will design methods to propagate uncertainty from ensemble forecasts to demand predictions and quantify the associated uncertainty at various granularities.
|
2301.01163 | The study of Kantowski-Sachs perfect fluid cosmological model in
modified gravity | Kantowski-Sachs perfect fluid cosmological model is explored in modified
gravity with functional form $f(R, T)$=$f_1(R)$+$f_2(T)$ where $R$ is Ricci
scalar, and $T$ is the trace of the energy-momentum tensor. With this
functional form, three different cases have been formulated, namely negative
and positive powers of curvature, logarithmic curvature, and exponential
curvature given by $f_1(R)=R+\gamma R^2-\frac{\mu^4}{R}$, $f_1(R)=R+\nu ln(\tau
R)$ and $f_1(R)=R+\kappa e^{-\iota R}$ respectively. For all these three cases,
$f_2(T)=\lambda T$, here $\gamma$, $\lambda$, $\mu$, $\nu$, $\tau$, $\kappa$
and $\iota$ are constants. While solving the field equations, two constraints
i) the Expansion scalar is proportional to shear scalar ii) the Hyperbolic
scale factor is used. By using these conditions, the required optimum solutions
are obtained. The physical parameters are calculated, and the geometrical
parameters of three cases are analyzed against redshift($z$) with the help of
pictorial representation. In the context of $f(R, T)$ gravity, energy
conditions are discussed with the help of pressure and energy density. If a
strong energy condition is positive, gravity should be attractive but in our
model, it shows negative, which means that cosmic acceleration is due to
antigravity, whereas NEC and DEC are fulfilled. The perturbation technique is
used to test the stability of the background solutions of the obtained models.
The inferences obtained from this paper are persistent with the present
cosmological observations, and the model represents an accelerating universe. | T. Vinutha, K. Niharika, K. Sri Kavya | 2023-01-03T15:48:05Z | http://arxiv.org/abs/2301.01163v3 | ###### Abstract
###### Abstract
Kantowski-Sachs perfect fluid cosmological model is explored in modified gravity with functional form \(f(R,T)\)=\(f_{1}(R)+f_{2}(T)\) where \(R\) is Ricci scalar and \(T\) is the trace of energy-momentum tensor. With this functional form, three different cases have been formulated, namely negative and positive powers of curvature, logarithmic curvature and exponential curvature given by \(f_{1}(R)=R+\gamma R^{2}-\frac{\mu^{4}}{R}\), \(f_{1}(R)=R+\nu ln(\tau R)\) and \(f_{1}(R)=R+\kappa e^{-\iota R}\) respectively, and for all these three cases, \(f_{2}(T)=\lambda T\), here \(\gamma\), \(\lambda\), \(\mu\), \(\nu\), \(\tau\), \(\kappa\) and \(\iota\) are constants. While solving the field equations, two constraints i) Expansion scalar is proportional to shear scalar ii) Hyperbolic scale factor are used. By using these conditions the required optimum solutions are obtained. The physical parameters are calculated and geometrical parameters of three cases are analysed against redshift(\(z\)) with the help of pictorial representation. In the context of \(f(R,T)\) gravity energy conditions are discussed with the help of pressure and energy density. If strong energy condition is positive the gravity should be attractive but in our model it shows negative it means that cosmic acceleration is due to antigravity, whereas NEC and DEC are fulfilled. The perturbation technique is used to test the stability of the background solutions of the obtained models. The inferences obtained from this paper are in persistent with the present cosmological observations and the model represents an accelerating universe.
**The study of Kantowski-Sachs perfect fluid cosmological model in modified gravity**
_T. Vinutha\({}^{1}\), K. Niharika\({}^{1}\) and K. Sri Kavya\({}^{2}\)_
\({}^{1}\)Dept. of Applied Mathematics, AUCST, Andhra University, Visakhapatnam, India.
\({}^{2}\) Dept. of Mathematics, Maharaj Vijayaram Gajapathi Raj College of Engineering,
Vizianagaram-535005, India.
\({}^{\star}\)[email protected]
**Keywords**: Kantowski-Sachs spacetime, \(f(R,T)\) theory, perfect fluid.
## 1 Introduction
Einstein's theory of general relativity is the foundation of modern physics and it describes black holes and gravitational phenomena but it break down to give an explanation of cosmic acceleration. In recent scenario it is well known that our universe is accelerating [1, 2] and it is one of the trending topics in cosmology. To understand this mysterious concept, we focused on dark energy and modified theories of gravity. The universe is going through an accelerated
period of expansion and it is revealed by the experiments such as CMBR and SN\(Ia\). Dark energy can be inspected in many ways and reforming the geometric part of the Einstein-Hilbert action is regarded as the most efficient possible way and these changes lead to so many alternative theories of gravity. There are different classes of modified gravity such as \(f(R)\) gravity, \(f(T)\) gravity, \(f(G)\) gravity, \(f(R,G)\) gravity. Among them \(f(R)\) gravity has attracted many researchers because it provides a natural gravitation alternative to dark energy. During the universe expansion \(f(R)\) theory elucidate the change from deceleration phase to acceleration phase. \(f(R)\) theory is presumed to be beneficial for resolution of the hierarchy problem or unification of grand unified theories with gravity in high energy physics. Nojiri and odintsov [3], Nojiri et al. [4], Chatterjee and Jaryal [5], Sotiriou and Faraoni [6] and De Felice and Tsujikawa [7] are some of the authors who worked on various cosmological models in \(f(R)\) theories of gravity. A new class of \(f(R,T)\) gravity presented by Harko et al. [8], by including trace \(T\) in \(f(R)\) theory. The \(T\)-dependence in \(f(R,T)\) gravity may appear from the presence of imperfect fluids or quantum effects. Among all the modified theories of gravitation, the \(f(R,T)\) theory is a generalized theory because there is an energy transfer relation between matter and geometry. The existence of this relationship is the cause of the rapid expansion of the universe. The authors who worked on \(f(R,T)\) gravity are included in references [9, 10, 11, 12, 13, 14, 15, 16].
In this paper, we examine three specific cases one of them is combination of \(\frac{1}{R^{x}}\) and \(R^{y}\) i.e. \(f(R)=R+\gamma R^{y}-\frac{\mu^{4}}{R^{x}}\) where \(\gamma\) and \(\mu\) are constants. In this functional form, it has both positive and negative curvature powers. At low curvature it leads to gravitational alternative for dark energy which helps in speed up of cosmic expansion where as high curvature describes the inflationary stage of early universe [17]. By considering \(R^{y}\) term for \(1<y<2\) power law inflation happen at early stage. If \(y=2\), Starobinsky inflation takes place [18], the term \(R^{2}\) indicates natural correction to general relativity. According to Nojiri and Odinstov [19]\(R^{2}\) term is necessary to get rid of instabilities, linear growth of the gravitational force, produce early time inflation and appear to pass the solar system tests. The state of no linear growth for gravitational force makes it very much fascinating. Higher derivative terms like \(R^{2}\), \(R^{3}\) can be used to put down the instabilities significantly. For equivalent scalar tensor theory the solar system test may be passed as scalar has large mass originated again by higher derivative terms. The standard Einstein's gravity may be modified by considering a \(\frac{1}{R}\) term in the Einstein Hilbert action [20] which represents the present acceleration of the universe. But the insertion of \(\frac{1}{R}\) term generates instabilities which can be overcome by addition of \(R^{2}\) term to the Einstein's gravitational action. Besides the advantages of this functional form, have well acceptable Newtonian limit, no instabilities and no Brans Dicke problem in scalar tensor version. When we put \(y=2\) and \(x=1\) in the above functional form \(f(R)=R+\gamma R^{y}-\frac{\mu^{4}}{R^{x}}\) it reduces to \(f(R)=R+\gamma R^{2}-\frac{\mu^{4}}{R}\) and the obtained results are very efficient. In addition to this functional form by using the linear
function of \(f(T)=\lambda T\), we get the final form of \(f(R,T)=R+\gamma R^{2}-\frac{\mu^{4}}{R}+\lambda T\) where \(\gamma\), \(\mu\) and \(\lambda\) are constants. Vinutha et al. [21] have worked on Kantowski-Sachs perfect fluid cosmological model in \(R^{2}\)- Gravity. Vinutha and Sri Kavya [22] have studied Bianchi type cosmological models in \(f(R,T)\) theory with quadratic functional form. Brookfield [23] have worked on viability of \(f(R)\) theories with additional powers of curvature. Godani and Samanta [24] have studied traversable warmholes on \(f(R)\) gravity where \(f(R)=R+\alpha R^{n}\). Banik et al. [25] have discussed Bianchi-I cosmological model in \(f(R)=R-\frac{\beta}{R^{n}}\) gravity.
Next, we consider logarithmic curvature i.e. \(f(R,T)=R+\nu ln(\tau R)+\lambda T\) where \(\tau\), \(\nu\) and \(\lambda\) are constants. As this modified gravity has put forward a gravitational alternative for dark energy, it is quite interesting to work on this particular functional form. In this model logarithmic terms are produced by quantam effects in curved space time. The need for dark energy may be eradicated by this modified gravity and may aid for the fusion of the early time inflation and cosmic acceleration. Nojiri and Odinstov have studied about modified gravity and proposed some functional forms such as \(ln(R)\) or \(R^{-n}(lnR)^{m}\) and \(R+\gamma R^{-n}(ln\frac{R}{\mu^{2}})^{m}\)[26, 27]. Fayyaz and Shamir [28] have analysed wormhole structures in logarithmic-corrected \(R^{2}\) gravity. Kourosh and Tahereh [29] have discussed phantom-like behavior in \(f(R)=R+\beta log(\frac{R}{\mu^{2}})+\gamma R^{m}\) gravity.
By appending the torsion scalar component to the exponential \(f(R)\) theory [30, 31, 32, 33, 34], the functional form is \(f(R,T)=R+\kappa e^{-\iota R}+\lambda T\) where \(\kappa\), \(\iota\) and \(\lambda\) are constants. The reason behind choosing this functional form it comes up with the best way of exploring cosmic acceleration. In contrast to the \(\Lambda CDM\) model the exponential gravity model has one more parameter included in it and it also permits the relaxation of fine tuning. Vinutha et al. [35] have studied on Bianchi type cosmological models in modified theory with exponential functional form. Paul et al. [36] have worked on accelerating universe in modified theories of gravity. Sahoo et al. [37] have studied on \(f(R,T)=f(R)+\lambda T\) gravity models as alternatives to cosmic acceleration. Moreas and Sahoo [38] have discussed traversable wormholes by using functional form \(f(R,T)=R+\gamma e^{\chi T}\) and also with this functional form Moreas et al. [39] studied FRW cosmological model.
When compared to other anisotropic metrics, Kantowski-Sachs model is very simple and easy to analyze. The cosmologies of Kantowski-Sachs metric possess two properties of symmetry such as spherical symmetry and invariance under spatial translation. It describes spatially homogeneous, anisotropic universe and interior of black holes that does not allow a simply transitive group of motions. It is also used to analyze the behavior of the added degrees of freedom in quantum cosmological model. This metric represents three different anisotropic \(3+1\) dimensional space time and positive curvature models. The study of anisotropic models were nourished by the theoretical studies and observations of CMB which also been extended to modified theories of gravity. Thus this model with an anisotropic nature appeared most
appropriate in describing the early stage of the cosmos. some of the authors who worked on Kantowski Sachs model are [40, 41, 42, 43, 44, 45, 46].
This article is organized as follows: In section 2, \(f(R,T)\) gravity field equations are obtained and in section 3 the field equations of power-law, logarithmic and exponential functional forms are solved. Section 4 discusses the physical and geometrical properties of three cases using graphs and section 5 concludes our results.
## 2 A brief review of \(f(R,T)=f_{1}(R)+f_{2}(T)\) model
The final action principle of \(f(R,T)\) gravity which is a function of matter Lagrangian \(L_{m}\) is read as
\[S=\int\left[\frac{1}{16\pi}f(R,T)+L_{m}\right]\sqrt{-g}d^{4}x, \tag{1}\]
where \(g\) is the metric determinant of the fundamental tensor \(g_{ij}\), \(f(R,T)\) is an arbitrary function of \(R\) and \(T\) which is mentioned in the abstract, \(L_{m}\) is the usual matter Lagrangian density and we consider \(G=c=1\).
By varying the above equation (1) with respect to \(g_{ij}\),we obtain the field equations of \(f(R,T)\) gravity in covariant tensor form as
\[f_{R}(R,T)R_{ij}-\frac{1}{2}f(R,T)g_{ij}+(g_{ij}\Box-\nabla_{i}\nabla_{j})f_{ R}(R,T)=8\pi T_{ij}-f_{T}(R,T)\theta_{ij}-f_{T}(R,T)T_{ij}, \tag{2}\]
here, \(\nabla_{i}\) is the covariant derivative and \(\Box=\nabla_{i}\nabla_{j}\) is the D'Alemberts operator. \(f_{R}=\frac{\partial f(R,T)}{\partial R}\), \(f_{T}=\frac{\partial f(R,T)}{\partial T}\) and \(R_{ij}\) is the Ricci tensor, where
\[\theta_{ij}=-2T_{ij}+g_{ij}L_{m}-2g^{lk}\frac{\partial^{2}L_{m}}{\partial g^ {ij}g^{lk}}. \tag{3}\]
Here the energy-momentum tensor is considered to be a perfect fluid which is defined as
\[T_{ij}=(p+\rho)u_{i}u_{j}-pg_{ij}, \tag{4}\]
where \(u_{i}\) denotes four velocity vector in co-moving coordinates i.e. \(u_{i}=(1,0,0,0)\) and \(u_{i}u^{j}=1\). Hence, the components of energy-momentum tensor become \(T_{ij}\)=diag\((\rho,-p,-p,-p)\), where \(p\) is the pressure and \(\rho\) is the energy density of perfect fluid. Several authors have studied by choosing energy-momentum tensor as perfect fluid which are included in the references [47, 48, 49, 50, 51, 52, 53, 54]. It takes the form by replacing matter Lagrangian as \(L_{m}=-p\)[55, 56, 57] in equation (3).
\[\theta_{ij}=-2T_{ij}-pg_{ij} \tag{5}\]
Consequently the field equations for \(f(R,T)\) gravity are procured with the aid of \(T=\rho-3p\) in equation (2) as
\[\begin{split} G_{ij}=&\frac{1}{f_{R}(R,T)}\Big{[}[8\pi+ f_{T}(R,T)]T_{ij}+pf_{T}(R,T)g_{ij}+\frac{1}{2}[f(R,T)-Rf_{R}(R,T)]g_{ij}\\ &-(g_{ij}\Box-\nabla_{i}\nabla_{j})f_{R}(R,T)\Big{]},\end{split} \tag{6}\]
where \(G_{ij}\) is the Einstein tensor which is expressed as \(R_{ij}-\frac{1}{2}Rg_{ij}\).
Here, we consider the functional form \(f(R,T)=f_{1}(R)+f_{2}(T)\) i.e.
\[f(R,T)=R+\gamma R^{2}-\frac{\mu^{4}}{R}+\lambda T \tag{7}\]
\[f(R,T)=R+\nu ln(\tau R)+\lambda T \tag{8}\]
\[f(R,T)=R+\kappa e^{-\iota R}+\lambda T \tag{9}\]
as case I, II and III respectively.
## 3 Metric and solutions of the field equations
Now the metric takes the form,
\[ds^{2}=dt^{2}-M^{2}(t)dr^{2}-N^{2}(t)(d\theta^{2}+\sin^{2}\theta d\psi^{2},) \tag{10}\]
where \(M\) and \(N\) are metric potentials and functions of cosmic time \(t\) only and co-moving coordinates are \((r,\theta,\psi)\).
### Case I - (negative and positive powers of curvature)
The functional form \(f(R,T)=R+\gamma R^{2}-\frac{\mu^{4}}{R}+\lambda T\) field equations are as follows
\[\left.\begin{split}\frac{1}{N^{2}}+\frac{2\ddot{N}}{N}+\frac{\dot{ N}^{2}}{N^{2}}=-\frac{(8\pi+\frac{3\lambda}{2})p}{1+2R\gamma+\frac{\mu^{4}}{R^{2}}}+ \frac{\lambda\rho}{2(1+2R\gamma+\frac{\mu^{4}}{R^{2}})}-\frac{(\frac{\gamma R ^{2}}{2}+\frac{\mu^{4}}{R})}{1+2R\gamma+\frac{\mu^{4}}{R^{2}}}\\ -\frac{(2\gamma-\frac{2\mu^{4}}{R^{3}})}{1+2R\gamma+\frac{\mu^{4} }{R^{2}}}\Big{[}\frac{2\dot{N}}{N}\dot{R}+\ddot{R}\Big{]}-\frac{\frac{6\mu^{4 }\dot{R}^{2}}{R^{4}}}{1+2R\gamma+\frac{\mu^{4}}{R^{2}}}.\end{split} \right\} \tag{11}\]
\[\left.\begin{split}\frac{\ddot{M}}{M}+\frac{\ddot{N}}{N}+\frac{ \dot{M}\dot{N}}{MN}=&-\frac{(8\pi+\frac{3\lambda}{2})p}{1+2R \gamma+\frac{\mu^{4}}{R^{2}}}+\frac{\lambda\rho}{2(1+2R\gamma+\frac{\mu^{4}} {R^{2}})}-\frac{(\frac{\gamma R^{2}}{2}+\frac{\mu^{4}}{R})}{1+2R\gamma+\frac{ \mu^{4}}{R^{2}}}\\ -\frac{(2\gamma-2\frac{\mu^{4}}{R^{3}})}{1+2R\gamma+\frac{\mu^{4 }}{R^{2}}}\Big{[}(\frac{\dot{M}}{M}+\frac{\dot{N}}{N})\dot{R}+\ddot{R}\Big{]}- \frac{6\frac{\mu^{4}\dot{R}^{2}}{R^{4}}}{1+2R\gamma+\frac{\mu^{4}}{R^{2}}}. \end{split}\right\} \tag{12}\]
\[2\frac{\dot{M}\dot{N}}{MN}+\frac{\dot{N}^{2}}{N^{2}}+\frac{1}{N^{2}}= \frac{(8\pi+\frac{3\lambda}{2})\rho}{1+2R\gamma+\frac{\mu^{4}}{R^{2}}}-\frac{ \lambda p}{2(1+2R\gamma+\frac{\mu^{4}}{R^{2}})}-\frac{(\frac{\gamma R^{2}}{2}+ \frac{\mu^{4}}{R})}{1+2R\gamma+\frac{\mu^{4}}{R^{2}}} \tag{13}\] \[-\frac{(2\gamma-2\frac{\mu^{4}}{R^{3}})}{1+2R\gamma+\frac{\mu^{4} }{R^{2}}}\Big{[}\frac{\dot{M}}{M}+\frac{2\dot{N}}{N}\Big{]}\dot{R}.\Bigg{\}}\]
here dot denotes derivate with respect to \(t\).
### Case - II (logarithmic curvature)
Field equations corresponding to the \(f(R,T)=R+\nu ln(\tau R)+\lambda T\) are
\[\left.\begin{array}{c}\frac{1}{N^{2}}+\frac{2\ddot{N}}{N}+ \frac{\dot{N}^{2}}{N^{2}}=\frac{-(8\pi+\frac{3\lambda}{2})p}{1+\frac{\nu}{R}} +\frac{\lambda\rho}{2(1+\frac{\nu}{R})}-\frac{\nu(1-ln(\tau R))}{2(1+\frac{ \nu}{R})}\\ +\Big{[}\frac{2\dot{N}}{N}\dot{R}+\ddot{R}\Big{]}\frac{\frac{\nu}{R^{2}}}{1+ \frac{\nu}{R}}-\frac{\frac{2\nu\dot{R}^{2}}{R^{3}}}{1+\frac{\nu}{R}}.\Bigg{\}} \\ \frac{\ddot{M}}{M}+\frac{\ddot{N}}{N}+\frac{\dot{M}\dot{N}}{MN}=- \frac{(8\pi+\frac{3\lambda}{2})p}{1+\frac{\nu}{R}}+\frac{\lambda\rho}{2(1+\frac {\nu}{R})}-\frac{\nu(1-ln(\tau R))}{2(1+\frac{\nu}{R})}\\ +\frac{\frac{\nu}{R^{2}}}{1+\frac{\nu}{R}}\Big{[}(\frac{\dot{M}}{M}+\frac{ \dot{N}}{N})\dot{R}+\ddot{R}\Big{]}-\frac{\frac{2\nu\dot{R}^{2}}{R^{3}}}{1+ \frac{\nu}{R}}.\Bigg{\}}\\ 2\frac{\dot{M}\dot{N}}{MN}+\frac{\dot{N}^{2}}{N^{2}}+ \frac{1}{N^{2}}=\frac{(8\pi+\frac{3\lambda}{2})\rho}{1+\frac{\nu}{R}}-\frac{ \lambda p}{2(1+\frac{\nu}{R})}-\frac{\nu(1-ln(\tau R))}{2(1+\frac{\nu}{R})}\\ +\frac{\frac{\nu}{R^{2}}}{1+\frac{\nu}{R}}\Big{[}\frac{\dot{M}}{M}+ \frac{2\dot{N}}{N}\Big{]}\dot{R}.\Bigg{\}}\end{array}\right\} \tag{15}\]
### Case - III (exponential curvature)
Field equations corresponding to the \(f(R,T)=R+\kappa e^{-\iota R}+\lambda T\) are given as follows:
\[\left.\begin{array}{c}\frac{1}{N^{2}}+\frac{2\ddot{N}}{N}+ \frac{\dot{N}^{2}}{N^{2}}=-\frac{(8\pi+\frac{3\lambda}{2})p}{1-\kappa e^{- \iota R}}+\frac{\lambda\rho}{2(1-\kappa e^{-\iota R})}+\frac{\kappa e^{- \iota R}(1+\iota R)}{2(1-\kappa e^{-\iota R})}\\ -\frac{\kappa\iota^{2}e^{-\iota R}}{1-\kappa e^{-\iota R}}\Big{[} \big{(}\frac{2\dot{N}}{N}\big{)}\dot{R}+\ddot{R}\Big{]}+\frac{\kappa\iota^{3} e^{-\iota R}\dot{R}^{2}}{1-\kappa e^{-\iota R}}.\end{array}\right\} \tag{17}\]
\[\left.\begin{array}{c}\frac{\ddot{M}}{M}+\frac{\ddot{N}}{N}+ \frac{\dot{M}\dot{N}}{MN}=-\frac{(8\pi+\frac{3\lambda}{2})p}{1-\kappa e^{- \iota R}}+\frac{\lambda\rho}{2(1-\kappa e^{-\iota R})}+\frac{\kappa e^{-\iota R }(1+\iota R)}{2(1-\kappa e^{-\iota R})}\\ -\frac{\kappa\iota^{2}e^{-\iota R}}{1-\kappa e^{-\iota R}}\Big{[} \big{(}\frac{\dot{M}}{M}+\frac{\dot{N}}{N}\big{)}\dot{R}+\ddot{R}\Big{]}+\frac{ \kappa\iota^{3}e^{-\iota R}\dot{R}^{2}}{1-\kappa e^{-\iota R}}.\end{array}\right\} \tag{18}\]
\[2\frac{\dot{M}\dot{N}}{MN}+\frac{\dot{N}^{2}}{N^{2}}+\frac{1}{N^{2}}=\frac{(8\pi+ \frac{3\lambda}{2})\rho}{1-\kappa\epsilon e^{-\iota R}}-\frac{\lambda p}{2(1- \kappa\iota e^{-\iota R})}+\frac{\kappa e^{-\iota R}(1+\iota R)}{2(1-\kappa \iota e^{-\iota R})}\]
\[-\frac{\kappa\iota^{2}e^{-\iota R}\dot{R}}{1-\kappa\iota e^{-\iota R}}\Big{[} \frac{\dot{M}}{M}+\frac{2\dot{N}}{N}\Big{]}.\]
To obtain solutions for highly non-linear equations is very strenuous and in order to remove such complications we require some constraints.
(i) We consider \(\sigma\) is proportional to \(\theta\)(where \(\sigma\) is the shear scalar and \(\theta\) is the expansion scalar) and it generate linear relationship between two metric potentials in terms of \(M\) and \(N\) as
\[M=N^{n}, \tag{20}\]
\(n\neq 0,1\) is constant. The physical motivation for assuming this condition is that Hubble expansion of the universe may attain isotropy by the observations of the velocity redshift relation for extra galatic sources if the value of \(\frac{\sigma}{\theta}\) is constant [58].
(ii) The average scale factor is assumed as a hyperbolic expansion
\[a(t)=\sinh(\alpha t)^{\frac{1}{\beta}}, \tag{21}\]
where \(\alpha>0\), \(\beta>0\) are constants. The consequence of using this scale factor is time dependent deceleration parameter \(q\)[59]. This average scale factor tends to zero if \(t\to 0\) and if \(t\rightarrow\infty\) then \(a(t)\) becomes infinity.
The directional Hubble parameters are
\[H_{1}=\frac{\dot{M}}{M}\hskip 28.452756ptH_{2}=H_{3}=\frac{\dot{N}}{N}. \tag{22}\]
The average Hubble parameter is,
\[H=\frac{1}{3}(H_{1}+2H_{2}). \tag{23}\]
By substituting equation (23) in equation (22), we get
\[H=\frac{\dot{a}}{a}=\frac{1}{3}\Big{(}\frac{\dot{M}}{M}+\frac{\dot{N}}{N}\Big{)}. \tag{24}\]
From equations (20) - (24), we obtain metric potentials of \(M\) and \(N\) as
\[M=(\sinh(\alpha t))^{\frac{3n}{\beta(n+2)}}, \tag{25}\]
\[N=(\sinh(\alpha t))^{\frac{3}{\beta(n+2)}}. \tag{26}\]
If \(t\rightarrow\infty\) then \(M\) and \(N\) are nonzero, hence, our model is free from singularity.
Using equations (25) and (26), the Kantowski sachs metric obtained as
\[ds^{2}=dt^{2}-(\sinh(\alpha t))^{\frac{6n}{\beta(n+2)}}\;dr^{2}-(\sinh(\alpha t ))^{\frac{6}{\beta(n+2)}}(d\theta^{2}+\sin^{2}\theta d\psi^{2}). \tag{27}\]
The above metric represents a perfect fluid Kantowski-Sachs universe in \(f(R,T)\) theory of gravity.
### Pressure and energy density for case I
By solving the equations of (11),(12) and (13) we get the pressure of the model as
\[p=\frac{1}{4}\Big{(}\frac{\chi+\xi-2\eta-\phi_{4}+2\phi_{5}+2\phi_{6}-\phi_{7}}{ \phi_{2}-\phi_{1}}-\frac{\chi+\xi+2\eta+4\phi_{3}+7\phi_{4}+2\phi_{5}+2\phi_{6}+ 3\phi_{7}}{\phi_{1}+\phi_{2}}\Big{)}, \tag{28}\]
and the energy density of the model is obtained as
\[\rho=\frac{1}{4}\Big{(}\frac{\chi+\xi+2\eta+4\phi_{3}+7\phi_{4}+2\phi_{5}+2\phi _{6}+3\phi_{7}}{\phi_{1}+\phi_{2}}+\frac{\chi+\xi-2\eta-\phi_{4}+2\phi_{5}+2 \phi_{6}-\phi_{7}}{\phi_{2}-\phi_{1}}\Big{)}, \tag{29}\]
### Pressure and energy density for case II
By solving the equations (14),(15) and (16) we get the expression for pressure is
\[p=\frac{1}{4}\Big{(}\frac{\chi+\xi-2\eta+\phi_{4}-2\phi_{5}+4\phi_{6}+\phi_{7} }{\phi_{2}-\phi_{1}}-\frac{\chi+\xi+2\eta+4\phi_{3}-7\phi_{4}-2\phi_{5}+4\phi_{ 6}-3\phi_{7}}{\phi_{1}+\phi_{2}}\Big{)}, \tag{30}\]
and the energy density of the model is obtained as
\[\rho=\frac{1}{4}\Big{(}\frac{\chi+\xi+2\eta+4\phi_{3}-7\phi_{4}-2\phi_{5}+4 \phi_{6}-3\phi_{7}}{\phi_{1}+\phi_{2}}+\frac{\chi+\xi-2\eta+\phi_{4}-2\phi_{5}+ 4\phi_{6}+\phi_{7}}{\phi_{2}-\phi_{1}}\Big{)}, \tag{31}\]
### Pressure and energy density for case III
By solving the equations (17), (18) and(19) we get the pressure of the model as
\[p=\frac{1}{4}\Big{(}\frac{\chi+\xi-2\eta-\phi_{4}+2\phi_{5}-2\phi_{6}-\phi_{7} }{\phi_{2}-\phi_{1}}-\frac{\chi+\xi+2\eta-4\phi_{3}+7\phi_{4}+2\phi_{5}-2\phi_{ 6}+3\phi_{7}}{\phi_{1}+\phi_{2}}\Big{)}, \tag{32}\]
and the energy density of the model is obtained as
\[\rho=\frac{1}{4}\Big{(}\frac{\chi+\xi+2\eta-4\phi_{3}+7\phi_{4}+2\phi_{5}-2 \phi_{6}+3\phi_{7}}{\phi_{1}+\phi_{2}}+\frac{\chi+\xi-2\eta-\phi_{4}+2\phi_{5} -2\phi_{6}-\phi_{7}}{\phi_{2}-\phi_{1}}\Big{)}, \tag{33}\]
The values of \(\chi\), \(\xi\) and \(\eta\) are same for all the three cases whereas, the values of \(\phi_{i}\), for i=1 to 7 for the corresponding cases are clearly given in appendix section.
## 4 Physical and geometrical properties
The average Hubble parameter is
\[H=\frac{\alpha}{\beta}\coth(\alpha t). \tag{34}\]
From the figure of Hubble parameter, we trace that it decreases with the decrease of redshift i.e. decreases as time increases. By choosing the values of \(\alpha=0.21\) and \(\beta=3.10\) in the scale factor the Hubble parameter is obtained as \(0.07Gyrs^{-1}\) which is nearly equal to the present observational data [60]. For this quantity the dimension is \(\frac{1}{time}\). By using this formula, we can also measure the age of the cosmos.
(ii) The volume of the model is given by
\[V=a^{3}=(\sinh(\alpha t))^{\frac{3}{\beta}}. \tag{35}\]
In figure 2, it is clear that the spatial volume increases with the decrease of redshift i.e. it increases as the time increases and is finite at final epoch.
(iii) The expansion scalar \(\theta\) is
\[\theta=u^{i}_{;i}=3H=\frac{3\alpha\coth(\alpha t)}{\beta}. \tag{36}\]
From figure 3, it is observed that expansion scalar decreases with the decrease of redshift i.e. it decreases as time increases. Here we noticed that for \(t=0\) the expansion scalar is infinite.
(iv) We get the shear scalar as
Figure 1: Plot of Hubble parameter(\(H\)) versus redshift(\(z\))
Figure 2: Plot of volume(\(V\)) versus redshift(\(z\))
\[\sigma^{2}=\frac{3\alpha^{2}(n-1)^{2}\coth^{2}(\alpha t)}{\beta^{2}(n+2)^{2}}, \tag{37}\]
when \(t=0\), \(\sigma^{2}\) (shear scalar) tends to infinity.
(v) The mean anisotropy parameter \(A_{h}\) is obtained as
\[A_{h}=\frac{1}{3}\Big{[}\sum_{i=1}^{3}\Big{(}\frac{H_{i}-H}{H}\Big{)}^{2}\Big{]} \tag{38}\]
where \(i=1,2,3\) indicate the directional Hubble parameters for the coordinates of \(r,\theta\) and \(\psi\). The mean anisotropy parameter is defined on the basis of directional Hubble parameter and mean Hubble parameter.
\[A_{h}=\frac{2(n-1)^{2}}{(n+2)^{2}};n\neq-2. \tag{39}\]
The mean anisotropy parameter \(A_{h}\) is useful for checking if the model is anisotropic or not. In the present model \(A_{h}=0\) for \(n=1\) and \(A_{h}\neq 0\) for \(n\neq 1\) that is the model is anisotropic for \(n\neq 1\) and isotropic for \(n=1\).
In all the discussions and graphical representation of physical parameters we constraint the constants for case I as \(\alpha=0.21\), \(\beta=3.10\), \(n=7.38\), \(\lambda=-10.02\), \(\mu=0.2\), \(\gamma=0.03\), case II as \(\nu=0.001\), \(\tau=0.002\) and case III as \(\kappa=0.2\), \(\iota=0.009\). The values of parameters \(\alpha\), \(\beta\), \(n\), \(\lambda\) in cases II and III are same as that of Case I.
(iv) The deceleration parameter is
\[q=-1+\frac{d}{dt}(\frac{1}{H}). \tag{40}\]
In this model by using hyperbolic function we obtained deceleration parameter as
\[q=-1+\beta(1-\tanh^{2}(\alpha t)). \tag{41}\]
When \(t<\frac{1}{\alpha}\tanh^{-1}(1-\frac{1}{\beta})^{\frac{1}{2}}\), \(q\) has negative value which represents that the universe is accelerating whereas if \(t>\frac{1}{\alpha}\tanh^{-1}(1-\frac{1}{\beta})^{\frac{1}{2}}\), \(q\) has positive value which represents that the universe is
Figure 3: Plot of expansion scalar(\(\theta\)) versus redshift(\(z\))
decelerating. The quantities such as \(q\) and \(H\) specifies the geometric properties of the cosmos. v)Throughout the plots uniform colouring is followed by giving the colours brown for pressure, navy blue for energy density, sky blue for EoS parameter, blue for SEC, green for NEC and red for DEC. Figures 4,5 and 6 illustrate the variation of pressure against redshift in cases I, II and III respectively. The figures shows that in three cases pressure is negative and it is known that a negative pressure fluid is the correct mechanism which is capable of explaining cosmic acceleration within the standard cosmologies, despite the fact that in the latter it is necessary to bring the cosmological constant to get this exotic characteristic. In pressure graphs increase with the decrease of redshift that it is increases as the time increases is perceived which represents cosmic acceleration.
vi) Figures 7,8 and 9 shows the evolution of energy density for cases I, II and III respectively. In all the cases the density decreases with the decrease of redshift i.e. decreases as time increases.
is distinguished in three regions namely quintessence, phantom, and quintom according to its range. In quintessence region the EoS paramter lies in the range of \(-1<\omega<-\frac{1}{3}\), in phantom phase the EoS parameter is in the range of less than -1 (i.e. \(\omega<-1\)) and in quintom \(\omega=-1\). Figures 10, 11 and 12 of EoS parameter are drawn against redshift and observe that decrease with the decrease of redshift that is decrease as time increases. From the graphs we noticed that our model lies in quintessence region in three cases. According to Planck+nine years WMAP the current value of EoS parameter is approximately as \(\omega=-1.13^{+0.24}_{-0.25}\)[61], and from SNe Ia data with galaxy clustering, CMBR anisotropy statistics the EoS parameter lies in the range \(-1.33<\omega<-0.79\), \(-1.67<\omega<-0.62\)[62] respectively. From the figures of EoS parameters, it is seen that three cases are approximately coincide with observational data which is a good result.
viii)In modified theories of gravity, energy conditions [63, 64, 65] plays a crucial role in studying the behaviour of spacelike and timelike geodesics and these conditions are came from Raychaudhuri equations [66]. Energy conditions can be defined in many ways, such as geometric way and physical way. Moreover energy conditions are significant in the black hole physics, as they lay foundations of the singularity theorems. Another advantage of energy condition is that it allows basic tools to consider certain ideas about black holes and wormholes. There are four most commonly used fundamental energy conditions. The general expressions for energy conditions in regard of pressure and energy density are given below:
(i)SEC(Strong Energy condition): Gravity always has to be attractive, and in cosmology \(\rho+3p\geq 0\), \(\rho+p\geq 0\) should be observed.
(ii)DEC(Dominant Energy Condition): The energy density should always be positive when measured by any observer that is \(\rho\geq 0\), \(\rho\pm p\geq 0\), must be obeyed.
(iii)WEC(Weak Energy Condition): The energy density must always be positive when measured by any observer that is \(\rho\geq 0\), \(\rho+p\geq 0\).
(iv)NEC(Null Energy Condition): NEC is expressed in the form of \(\rho+p\geq 0\) and it ensures the validity of second law of black hole thermodynamics.
Where NEC, WEC, DEC and SEC represents null, weak, dominant and strong energy conditions. According to present cosmological data to represent the universe with cosmic expansion the SEC of that model should be violated (\(\rho+3p\leq 0\)). For the obtained models the same scenario can be clearly observed from figures 13 to 15. When compared to strong energy condition null energy condition is more beneficial, as it can be used algebraically due to its weakest pointwise energy condition which results in the strongest theorems and all these energy conditions, are met by electromagnetic field. From figures 16 to 18 it is clear that NEC(\(\rho+p\geq 0\)) is satisfied in all the three cases for the obtained model. If NEC satisfies then the parameter EoS occurs in quintessence region. Also from figures 19 to 21 it is clear that DEC (\(\rho+p\geq 0\)) is fulfilled in all the three cases for the obtained model.
### Stability analysis
Perturbations are essential for simplify a complex mathematical problems. There are several types of perturbations such as isotropic, anisotropic, homogeneous/inhomogeneous scalar, vector and tensor perturbations. The technique of perturbation is studied as a tool for finding approximate solution and comparing it to the obtained exact solution. Some of the researchers who studied on stability analysis are [67, 68, 69]. Here the stability of solutions in terms of metric perturbation as following
\[a_{i}\to a_{Bi}+\delta a_{i}=a_{Bi}(1+\delta b_{i}). \tag{42}\]
The perturbation of volume scale factor, directional Hubble factors and mean Hubble factors are
\[V\to V_{B}+V_{B}\sum_{i}\delta b_{i},\ \ \theta_{i}\rightarrow\theta_{Bi}+ \sum_{i}\delta b_{i},\ \ \theta\rightarrow\theta_{B}+\frac{1}{3}\sum_{i}\delta b_{i}. \tag{43}\]
The following equations are satisfied by the metric perturbation \(\delta b_{i}\)
\[\sum_{i}\delta\ddot{b}_{i}+2\sum_{i}\theta_{Bi}\dot{\delta b}_{i}=0, \tag{44}\]
\[\ddot{\delta b}_{i}+\frac{\dot{V}_{B}}{V_{B}}\delta\dot{b}_{i}+\sum_{j}\delta \dot{b}_{j}\theta_{Bi}=0, \tag{45}\]
\[\sum_{i}\delta\dot{b}_{i}=0. \tag{46}\]
From equations (44) - (46), we attain
\[\ddot{\delta b}_{i}+\frac{\dot{V}_{B}}{V_{B}}\delta\dot{b}_{i}=0, \tag{47}\]
where \(V_{B}\) is the background spatial volume and for our case \(V_{B}\) is
\[V_{B}=(\sinh(\alpha t))^{\frac{3}{\beta}}. \tag{48}\]
From above two equations, \(\delta b_{i}\) is procured as
\[\delta b_{i}=c_{1}-c\Big{(}\frac{\beta\sqrt{cosh^{2}(\alpha t)}\,{\rm sech}( \alpha t)\sinh^{\frac{\beta-3}{\beta}}(\alpha t)_{2}F_{1}\Big{(}\frac{1}{2}, \frac{\beta-3}{2\beta};\frac{3(\beta-1)}{2\beta};-\sinh^{2}(\alpha t)\Big{)}}{ \alpha(\beta-3)}\Big{)}, \tag{49}\]
where \(c_{1}\) and \(c\) are integrating constants.
consequently, the actual fluctuations \(\delta a_{i}=a_{Bi}\delta b_{i}\) is
\[\delta a_{i}=\Big{[}c_{1}-c\Big{(}\frac{\beta\sqrt{cosh^{2}(\alpha t)}\,{\rm sech }(\alpha t)\sinh^{\frac{\beta-3}{\beta}}(\alpha t)_{2}F_{1}\Big{(}\frac{1}{2},\frac{\beta-3}{2\beta},\frac{3(\beta-1)}{2\beta};-\sinh^{2}(\alpha t)\Big{)} }{\alpha(\beta-3)}\Big{)}\Big{]}(\sinh(\alpha t))^{\frac{-3}{\beta}}. \tag{50}\]
Figure 22 shows the behaviour of actual fluctuations versus redshift and it is noticed that it is a decreasing function with the decrease of redshift that is decreases as time increases. It is clear that \(\delta a_{i}\to 0\) as \(z\rightarrow-\infty\) and hence the background solution is shown to be stable against perturbation of gravitational field.
## 5 Conclusions
A cosmological model in \(f(R,T)\) theory with three cases namely power law, logarithmic and exponential curvature is obtained. Hyperbolic scale factor is used to solve the field equations to get the solution in each case. The solutions of these field equations represent accelerating model of the universe. The graph of all parameters are drawn against redshift. In graphs the negative region of \(z\) represents future epoch, positive region of \(z\) represents past and \(z=0\) indicates present. Obtained models are anisotropic and free from singularity all the way through the universe's evolution. By analyzing all the parameters the conclusions are as follows:
* From figures 1 and 3 and from the equations (34) and (36) it can be seen that Hubble parameter and expansion scalar decreases with the decrease of redshift, and also it is clear that the Hubble parameter and expansion scalar are close to zero when \(t\rightarrow\infty\).
* From figure 2 it is clear that volume increases with the decrease of redshift which indicates volume of the expanding universe. From equation (37), it is noticed that the shear scalar is a function of time and tends to zero when \(t\rightarrow\infty\).
* From equation (39), the anisotropic parameter is independent of time and \(A_{h}\neq 0\) for \(n\neq 1\), \(A_{h}=0\) for \(n=1\). But in this paper due to power law \(n\) is different from one. Hence the models are anisotropic throughout.
Figure 22: Plot of actual fluctuations(\(\delta a_{i}\)) versus redshift(\(z\))
* From the graphs of pressure and energy density of all the three cases, it is clear that the pressure and energy density are negative and positive respectively. Due to the negative pressure and positive energy density the universe is going through accelerating expansion.
* The behavior of EoS parameter against redshift is represented in plots 11 to 13. From these graphs it is obvious that the model is in the quintessence region in three cases that is \(-1<\omega<-\frac{1}{3}\) which matches with present observational data.
* In three cases, SEC is violated whereas NEC and DEC are fulfilled. The violation of SEC leads to cosmic acceleration which is in good agreement with the expansion of the cosmos.
* As seen in the graph of stability analysis, the actual fluctuations begin with a small positive value and decreases to zero. As a result, the background solution is stable when the gravitational field is perturbed.
A detailed discussion is provided through the obtained models for describing cosmic acceleration. Finally, through the detailed study of the models in three cases namely power law curvature \(f(R,T)=R+\gamma R^{2}-\frac{\mu^{4}}{R}+\lambda T\), logarithmic curvature\(f(R,T)=R+\nu ln(\tau R)+\lambda T\) and exponential curvature \(f(R,T)=R+\kappa e^{-\iota R}+\lambda T\) very good results which represents the universe accelerating expansion are observed. Moreover all the parameters discussed here matches with the recent observational data. At last, without existence of any exotic fluid, the current universe is accelerating is perceived in this paper which is a great outcome. As a future work, this work can be extended to other anisotropic models and can study the similarities and differences between them.
## 6 Appendix
The values of \(\chi\), \(\xi\), \(\eta\) are same for all the three cases and are given below
\(\chi=\frac{\varrho_{1}}{\varrho_{2}}\)
where
\(\varrho_{1}=\beta^{2}\sinh(\alpha t)^{2}(\cosh(\alpha t)-1)(\cosh(\alpha t)+1 )(n+2)^{2}\sinh(\alpha t)^{\frac{-6+(-2n-4)\beta}{\beta(n+2)}}\)
\(-6(\frac{-9\cosh(\alpha t)^{2}}{2}+\beta(n+2))\alpha^{2}\)
\(\varrho_{2}=\beta^{2}(n+2)^{2}\sinh(\alpha t)^{2}\)
\(\xi=\frac{\varrho_{3}}{\varrho_{2}}\)
where
\[\varrho_{3} =-3\Big{(}(-3n^{2}-3n-3)\cosh(\alpha t)^{2}+\beta(n+2)(n+1)\Big{)} \alpha^{2}\] \[\eta =\frac{\varrho_{4}}{\varrho_{2}}\] \[\varrho_{4} =\beta^{2}\sinh(\alpha t)^{2}(\cosh(\alpha t)-1)(\cosh(\alpha t)+ 1)(n+2)^{2}\sinh(\alpha t)^{\frac{-6+(-2n-4)\beta}{\beta(n+2)}}\] \[+18\alpha^{2}(n+\frac{1}{2})\cosh(\alpha t)^{2}\]
**For case(i)** The values of \(\phi_{1}\), \(\phi_{2}\), \(\phi_{3}\), \(\phi_{4}\), \(\phi_{5}\), \(\phi_{6}\) and \(\phi_{7}\) are given below
\[\phi_{1}(t)=\frac{\varrho_{5}}{\varrho_{6}}\]
where
\[\varrho_{5}(t)=288(n+2)^{2}\sinh(\alpha t)^{\frac{6+(2n+4)\beta}{ \beta(n+2)}}\beta^{2}\Big{(}-\Big{(}(n+2)^{2}\beta-3n^{2}-6n-9\Big{)}\alpha^{2 }\cosh(\alpha t)^{2}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}\] \[+\Big{(}\alpha^{2}\sinh(\alpha t)^{\frac{6+(2n+4)\beta}{\beta(n+ 2)}}+\frac{\cosh(\alpha t)^{2}\beta}{3}-\frac{\beta}{3}\Big{)}(n+2)^{2}\beta \Big{)}^{2}(\pi+\frac{3\lambda}{16})\]
\[\varrho_{6}(t)=\varrho_{7}(t)+4(\cosh(\alpha t)-1)(n+2)^{2}\beta^{2}(\cosh( \alpha t)+1)\varrho_{8}(t)\]
\[\varrho_{7}(t)=\Big{(}\Big{(}\mu^{4}(n+2)^{6}\beta^{6}+324\alpha^{4}(n+2)^{2}( n^{2}+2n+3)^{2}\beta^{2}-11664\alpha^{6}(n^{2}+2n+3)^{3}\gamma\Big{)}\]
\[\cosh(\alpha t)^{6}-3(n+2)^{2}\beta\Big{(}\mu^{4}(n+2)^{4}\beta^{5}+72\alpha^{4 }(n^{2}+2n+3)(n+2)^{2}\beta^{2}+108\alpha^{4}\]
\[(n^{2}+2n+3)^{2}\beta-3888\alpha^{6}(n^{2}+2n+3)^{2}\gamma\Big{)}\cosh( \alpha t)^{4}+3(n+2)^{4}\beta^{2}\Big{(}\mu^{4}(n+2)^{2}\beta^{4}\]
\[+12\alpha^{4}(n+2)^{2}\beta^{2}+72\alpha^{4}(n^{2}+2n+3)\beta-1296\alpha^{6} (n^{2}+2n+3)\gamma\Big{)}\cosh(\alpha t)^{2}\]
\[-\beta^{3}(n+2)^{6}(-432\alpha^{6}\gamma+\mu^{4}\beta^{3}+36\alpha^{4}\beta) \Big{)}\sinh(\alpha t)^{\frac{18}{\beta(n+2)}}\]
\[\varrho_{8}(t)=-6\Big{(}(-3n^{2}-6n-9)\cosh(\alpha t)^{2}+(n+2)^{2}\beta\Big{)} \alpha^{2}\Big{(}(-54\alpha^{2}(n^{2}+2n+3)\gamma\]
\[+\beta^{2}(n+2)^{2})\cosh(\alpha t)^{2}-\beta(n+2)^{2}(-18\alpha^{2}\gamma+ \beta)\Big{)}\sinh(\alpha t)^{\frac{12}{\beta(n+2)}}+(n+2)^{2}\beta^{2}\]
\[(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)\Big{(}((-108\alpha^{2}(n^{2}+2n+3) \gamma+\beta^{2}(n+2)^{2})\cosh(\alpha t)^{2}\]
\[-\beta(n+2)^{2}(-36\alpha^{2}\gamma+\beta))\sinh(\alpha t)^{\frac{6}{\beta(n+ 2)}}-4\beta^{2}\gamma(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+2)^{2})\Big{)}\]
\[\phi_{2}(t)=\frac{\varrho_{9}(t)}{\varrho_{6}(t)}\]
where
\[\varrho_{9}(t)=\lambda 18(n+2)^{2}\sinh(\alpha t)^{\frac{6+(2n+4) \beta}{\beta(n+2)}}\beta^{2}\Big{(}-\Big{(}(n+2)^{2}\beta-3n^{2}-6n-9\Big{)} \alpha^{2}\cosh(\alpha t)^{2}\] \[\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}+(n+2)^{2}\beta\Big{(} \alpha^{2}\sinh(\alpha t)^{\frac{6+(2n+4)\beta}{\beta(n+2)}}+\frac{\beta \sinh(\alpha t)^{2}}{3}\Big{)}\Big{)}^{2}\]
\[\phi_{3}(t)=\frac{\varrho_{10}}{\varrho_{13}}\]
where
\[\varrho_{10}(t)=-2\Big{(}\varrho_{11}-\varrho_{12}+\beta^{2}(\cosh( \alpha t)-1)(\cosh(\alpha t)+1)(n+2)^{2}\Big{)}\] \[\varrho_{11}(t)=\Big{(}\Big{(}\mu^{4}(n+2)^{6}\beta^{6}-2916 \alpha^{6}\gamma(n^{2}+2n+3)^{3}\Big{)}\cosh(\alpha t)^{6})-3\beta(n+2)^{2} \Big{(}\mu^{4}(n+2)^{4}\beta^{5}\] \[\qquad\qquad-972\alpha^{6}\gamma(n^{2}+n+3)^{2}\Big{)}\cosh( \alpha t)^{4}+3\Big{(}\mu^{4}(n+2)^{4}\beta^{4}-324\alpha^{6}\gamma(n^{2}+2n+3 )\Big{)}\beta^{2}\] \[\qquad\qquad(n+2)^{4}\cosh(\alpha t)^{2}-\beta^{3}(n+2)^{6}(-108 \alpha^{6}\gamma+\beta^{3}\mu^{4})\Big{)}\sinh(\alpha t)^{\frac{18}{\beta(n+2 )}}\] \[\varrho_{12}(t)=108\beta^{2}(n+2)^{2}\gamma(\cosh(\alpha t)+1) \Big{(}(-3n^{2}-6n-9)\cosh(\alpha t)^{2}+(n+2)^{2}\beta\Big{)}^{2}\alpha^{4}\] \[\qquad\qquad(\cosh(\alpha t)-1)\sinh(\alpha t)^{\frac{12}{\beta( n+2)}}+36\beta^{4}(n+2)^{4}\gamma(\cosh(\alpha t)+1)^{2}\Big{(}(-3n^{2}-6n-9)\] \[\qquad\qquad\cosh(\alpha t)^{2}+(n+2)^{2}\beta\Big{)}\alpha^{2}( \cosh(\alpha t)-1)^{2}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}-4\gamma\beta^{6} (\cosh(\alpha t)-1)^{3}\] \[\qquad\qquad(\cosh(\alpha t)+1)^{3}(n+2)^{6}\Big{(}(-3n^{2}-6n-9 )\cosh(\alpha t)^{2}+(n+2)^{2}\beta\Big{)}\alpha^{2}\sinh(\alpha t)^{\frac{6} {\beta(n+2)}}\] \[\varrho_{13}(t)=(\varrho_{7}-\varrho_{14})\sinh(\alpha t)^{\frac{ 6}{\beta(n+2)}}(n+2)^{2}\beta^{2}\sinh(\alpha t)^{2}\] \[\varrho_{14}(t)=-24\Big{(}\Big{(}(n+2)^{2}\beta^{2}-54\alpha^{2} \gamma(n^{2}+2n+3)\Big{)}\cosh(\alpha t)^{2}-\beta(n+2)^{2}(-18\alpha^{2} \gamma+\beta)\Big{)}\] \[\qquad\qquad\qquad(n+2)^{2}\alpha^{2}(\cosh(\alpha t)-1)\beta^{ 2}(\cosh(\alpha t)+1)((-3n^{2}-6n-9)\cosh(\alpha t)^{2}\] \[\qquad\qquad\qquad+(n+2)^{2}\beta)\sinh(\alpha t)^{\frac{12}{ \beta(n+2)}}+4(n+2)^{4}\Big{(}\Big{(}(n+2)^{2}\beta^{2}-108\alpha^{2}\gamma(n ^{2}+2n+3)\Big{)}\] \[\qquad\qquad\qquad\cosh(\alpha t)^{2}-\beta(n+2)^{2}(-36\alpha^{ 2}\gamma+\beta)\Big{)}(\cosh(\alpha t)-1)^{2}\beta^{4}(\cosh(\alpha t)+1)^{2}\] \[\qquad\qquad\qquad\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}-16 \gamma\beta^{6}(\cosh(\alpha t)-1)^{3}(\cosh(\alpha t)+1)^{3}(n+2)^{6}\] \[\phi_{4}(t)=\frac{-36\alpha^{2}\varrho_{15}\varrho_{16}}{( \varrho_{7}-\varrho_{14})\varrho_{17}}\]
where
\[\varrho_{15}(t)=\Big{(}\mu^{4}(n+2)^{6}\beta^{6}+5832\alpha^{6} \gamma(n^{2}+2n+3)^{3}\Big{)}\cosh(\alpha t)^{6}-3\beta(n+2)^{2}\Big{(}\mu^{4 }(n+2)^{4}\beta^{5}\] \[\qquad\qquad\qquad+1944\alpha^{6}\gamma(n^{2}+2n+3)^{2}\Big{)} \cosh(\alpha t)^{4}+3\beta^{2}(n+2)^{4}\Big{(}\mu^{4}(n+2)^{2}\beta^{4}+648 \alpha^{6}\gamma\] \[\qquad\qquad\qquad(n^{2}+2n+3)\Big{)}\cosh(\alpha t)^{2}-\beta^ {3}(n+2)^{6}(216\alpha^{6}\gamma+\beta^{3}\mu^{4})\Big{)}\sinh(\alpha t)^{ \frac{18}{\beta(n+2)}}+\varrho_{25}\quad\Bigg{\}}\] \[\varrho_{25}(t)=216(\cosh(\alpha t)+1)\beta^{2}(\cosh(\alpha t)-1 )(n+2)^{2}\gamma\alpha^{4}\Big{(}(-3n^{2}-6n-9)\cosh(\alpha t)^{2}\] \[\qquad\qquad\qquad+\beta(n+2)^{2}\Big{)}^{2}\sinh(\alpha t)^{ \frac{12}{\beta(n+2)}}-72(\cosh(\alpha t)+1)^{2}\beta^{4}(\cosh(\alpha t)-1)^ {2}(n+2)^{4}\gamma\alpha^{4}\] \[\qquad\qquad\qquad\Big{(}(-3n^{2}-6n-9)\cosh(\alpha t)^{2}+\beta (n+2)^{2}\Big{)}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}+8\gamma\beta^{6}( \cosh(\alpha t)-1)^{3}\] \[\qquad\qquad\qquad\qquad(\cosh(\alpha t)+1)^{3}(n+2)^{6}\]
\[\varrho_{16}(t)=\Big{(}(\beta(n+2)^{2}-3n^{2}-6n-9)\sinh(\alpha t)^{ \frac{6}{\beta(n+2)}}-\beta(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)\] \[(n+2)\Big{)}\cosh(\alpha t)^{2}\] \[\varrho_{17}(t)=(-3\alpha^{2}((-3n^{2}-6n-9)\cosh(\alpha t)^{2}+( n+2)^{2}\beta)\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}+\beta^{2}(\cosh(\alpha t)-1)\] \[(\cosh(\alpha t)+1)(n+2)^{2})\beta(n+2)\sinh(\alpha t)^{2}\] \[\phi_{5}(t)=\frac{24\varrho_{15}\varrho_{18}}{(\varrho_{7}- \varrho_{14})\varrho_{19}}\]
where
\[\varrho_{18}(t)=\Big{(}((n+2)^{2}\beta-3n^{2}-6n-9)\alpha^{2} \Big{(}\cosh(\alpha t)^{2}+\frac{1}{2}\Big{)}\sinh(\alpha t)^{\frac{6}{\beta(n+ 2)}}\] \[-\frac{(\cosh(\alpha t)-1)(6\cosh(\alpha t)^{2}+\beta(n+2))(\cosh (\alpha t)+1)}{2}\Big{)}\alpha^{2}\] \[\varrho_{19}(t)=-3\alpha^{2}((-3n^{2}-6n-9)\cosh(\alpha t)^{2}+( n+2)^{2}\beta)\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}\] \[+\beta^{2}(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+2)^{2}\] \[\phi_{6}(t)=\frac{\varrho_{20}}{\varrho_{21}\varrho_{22}}\]
where
\[\varrho_{20}(t)=-\Big{(}((n+2)^{2}\beta-n^{2}-6n-9)\alpha^{2} \sinh(\alpha t)^{\frac{6+(2n+4)\beta}{\beta(n+2)}}-((n+2)^{2})-3n^{2}-6n-9) \cosh(\alpha t)^{2}\] \[\alpha^{2}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}+\beta\sinh( \alpha t)^{2}(n+2)\Big{)}^{2}\sinh(\alpha t)^{\frac{18+(4n+8)\beta}{\beta(n+2) }}\beta^{6}\cosh(\alpha t)^{2}(n+2)^{6}\alpha^{2}\mu^{4}\] \[\varrho_{21}(t)=\Big{(}\alpha^{2}\beta(n+2)^{2}\sinh(\alpha t)^{ \frac{6+(2n+4)\beta}{\beta(n+2)}}-((n+2)^{2}\beta-3n^{2}-6n-9)\cosh(\alpha t)^ {2}\alpha^{2}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}\] \[+\frac{\beta^{2}\sinh(\alpha t)^{2}(n+2)^{2}}{3}\Big{)}^{2}\] \[\varrho_{22}(t)=\varrho_{23}+\varrho_{24}(n+2)^{2}\beta^{2}(\cosh (\alpha t)-1)(\cosh(\alpha t)+1)\] \[\varrho_{23}(t)=\Big{(}\varrho_{26}+\frac{1}{8}((n+2)^{2}\beta( \mu^{4}(n+2)^{4}\beta^{5}+72\alpha^{4}(n^{2}+2n+3)(n+2)^{2}\beta^{2}\] \[+108\alpha^{4}(n^{2}+2n+3)^{2}\beta-388\alpha^{6}(n^{2}+2n+3)^{2} \gamma)\cosh(\alpha t)^{4})-\frac{1}{8}(n+2)^{4}\beta^{2}\] \[(\mu^{4}(n+2)^{2}\beta^{4}+12\alpha^{4}(n+2)^{2}\beta^{2}+72 \alpha^{4}(n^{2}+2n+3)\beta-1296\alpha^{6}(n^{2}+2n+3)\] \[\gamma)\cosh(\alpha t)^{2}+\frac{1}{24}\beta^{3}(n+2)^{6}(432 \alpha^{6}\gamma+\mu^{4}\beta^{3}+36\alpha^{4}\beta)\Big{)}\sinh(\alpha t)^{ \frac{18}{\beta(n+2)}}\] \[\varrho_{26}(t)=\Big{(}\frac{-\mu^{4}(n+2)^{6}\beta^{6}}{24}- \frac{27\alpha^{4}(n+2)^{2}(n^{2}+2n+3)^{2}\beta^{2}}{2}+486\alpha^{6}(n^{2} +2n+3)^{3}\gamma\Big{)}\cosh(\alpha t)^{6}\]
\[\begin{array}{l}\varrho_{24}(t)=\Big{(}((-3n^{2}-6n-9)\cosh(\alpha t)^{2}+(n+2)^{2 }\beta)\alpha^{2}((-54\alpha^{2}(n^{2}+2n+3)\gamma+\beta^{2}(n+2)^{2})\\ \cosh(\alpha t)^{2}-\beta(n+2)^{2}(-18\alpha^{2}\gamma+\beta))\sinh(\alpha t)^ {\frac{12}{\beta(n+2)}}+\frac{1}{3}\Big{(}2(n+2)^{2}\beta^{2}\Big{(}\Big{(}( \frac{-\beta^{2}(n+2)^{2}}{4}\\ +27\alpha^{2}(n^{2}+2n+3)\gamma)\cosh(\alpha t)^{2}+\frac{1}{4}\beta(n+2)^{2} (-36\alpha^{2}\gamma+\beta)\Big{)}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}+\beta ^{2}\gamma\\ (\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+2)^{2}\Big{)}(\cosh(\alpha t)-1)( \cosh(\alpha t)+1)\Big{)}\end{array}\]
\[\begin{array}{l}\phi_{7}(t)=\frac{-36n\alpha^{2}\varrho_{15}\varrho_{16}}{ \varrho_{17}(\varrho_{7}-\varrho_{14})}\end{array}\]
**For case(ii)** The values of \(\phi_{1}\), \(\phi_{2}\), \(\phi_{3}\), \(\phi_{4}\), \(\phi_{5}\), \(\phi_{6}\) and \(\phi_{7}\) are given below
\[\begin{array}{l}\phi_{1}(t)=\frac{48\delta_{3}(\pi+\frac{3 \lambda}{16})}{\delta_{1}}\end{array}\]
where
\[\begin{array}{l}\delta_{1}=\Big{(}\Big{(}\Big{(}\Big{(}\nu(n+2)^{2}\beta^{2} -18\alpha^{2}(n^{2}+2n+3)\Big{)}\cosh(\alpha t)^{2}-\beta(n+2)^{2}(-6\alpha^{ 2}+\beta\nu)\Big{)}\\ \sinh(\alpha t)^{\frac{6}{\beta(n+2)}}-2\beta^{2}(\cosh(\alpha t)-1)(\cosh( \alpha t)+1)(n+2)^{2}\Big{)}\end{array}\]
\[\begin{array}{l}\delta_{3}=\Big{(}\Big{(}(-3n^{2}-6n-9)\cosh( \alpha t)^{2}+\beta(n+2)^{2}\Big{)}\alpha^{2}\sinh(\alpha t)^{\frac{6}{\beta(n +2)}}\\ -\frac{\beta^{2}\sinh(\alpha t)^{2}(\cosh(\alpha t)-1)(\cosh( \alpha t)+1)(n+2)^{2}}{3}\Big{)}\end{array}\]
\[\begin{array}{l}\phi_{2}(t)=\frac{-\delta_{2}\lambda}{\delta_{1}}\end{array}\]
where
\[\begin{array}{l}\delta_{2}=-3\Big{(}(-3n^{2}-6n-9)\cosh(\alpha t)^{2}+\beta (n+2)^{2}\Big{)}\alpha^{2}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}\\ +\beta^{2}(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+2)^{2}\end{array}\]
\[\begin{array}{l}\phi_{3}(t)=\frac{-3\nu\delta_{3}\delta_{4}}{\delta_{1}} \end{array}\]
where
\[\begin{array}{l}\delta_{4}=ln\Big{(}\frac{1}{\beta^{2}(n+2)^{2} \sinh(\alpha t)^{2}}\Big{(}\delta_{9}\Big{)}+ln(2)+ln(\tau)-1\Big{)}\end{array}\]
\[\delta_{9}=-\beta^{2}(\cosh(\alpha t)-1)^{2}(\cosh(\alpha t)+1)^{2}(n+2)^{2} \sinh(\alpha t)^{\frac{-6+(-2n-4)\beta}{\beta(n+2)}}\]
\[+3\alpha^{2}\Big{(}(-3n^{2}-6n-9)\cosh(\alpha t)^{2}+\beta(n+2)^{2}\Big{)}\]
\[\phi_{4}(t)=\frac{\delta_{5}}{\delta_{2}\delta_{1}}\]
where
\[\delta_{5}=18\Big{(}-\beta(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+2)\sinh( \alpha t)^{\frac{6}{\beta(n+2)}}+(\beta(n+2)^{2}\]
\[-3n^{2}-6n-9)\sinh(\alpha t)^{\frac{12}{\beta(n+2)}}\alpha^{2}\Big{)}(n+2)\nu \beta\cosh(\alpha t)^{2}\alpha^{2}\]
\[\phi_{5}(t)=\frac{\delta_{6}}{\delta_{2}\delta_{1}}\]
where
\[\delta_{6}=-12(n+2)^{2}\nu\beta^{2}\Big{(}\delta_{10}+(\beta(n+2)^{2}-3n^{2} -6n-9)(\cosh(\alpha t)^{2}+\frac{1}{2})\sinh(\alpha t)^{\frac{12}{\beta(n+2)} }\alpha^{2}\Big{)}\alpha^{2}\]
\[\delta_{10}=\frac{-(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(6\cosh(\alpha t)^{2} +\beta(n+2))\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}}{2}\]
\[\phi_{6}(t)=\frac{\delta_{7}}{\delta_{8}\delta_{1}}\]
where
\[\delta_{7}=4(n+2)^{2}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}\nu \beta^{2}\cosh(\alpha t)^{2}\Big{(}(\beta(n+2)^{2}-3n^{2}-6n-9)\alpha^{2}] \sinh(\alpha t)^{\frac{6}{\beta(n+2)}}\] \[-\beta(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+2)\Big{)}^{2}\alpha^ {2}\] \[\delta_{8}=\Big{(}-(\beta(n+2)^{2}-3n^{2}-6n-9)\cosh(\alpha t)^{2} \alpha^{2}\sinh(\alpha t)^{\frac{6}{\beta(n+2)}}+\delta_{11}\Big{)}^{2}\] \[\delta_{11}=(n+2)^{2}\beta\Big{(}\alpha^{2}\sinh(\alpha t)^{\frac {6+(2n+4)\beta}{\beta(n+2)}}+\frac{\beta\sinh(\alpha t)^{2}}{3}\Big{)}\] \[\phi_{7}(t)=\frac{n\delta_{5}}{\delta_{2}\delta_{1}}\]
**For case(iii)** The values of \(\phi_{1}\), \(\phi_{2}\), \(\phi_{3}\), \(\phi_{4}\), \(\phi_{5}\), \(\phi_{6}\) and \(\phi_{7}\) are given below
\[\phi_{1}(t)=\frac{-16\pi-3\lambda}{2\zeta_{1}-2}\]
where
\[\zeta_{1}=\kappa\iota e^{\frac{-2\iota\left(\zeta_{8}-3a^{2}\left(\left(-3n^{2}-6n -9\right)\cosh(\alpha t)^{2}+\beta(n+2)^{2}\right)\right)}{\beta^{2}(n+2)^{2} \sinh(\alpha t)^{2}}}\]
\[\zeta_{8}=\beta^{2}(\cosh(\alpha t)-1)^{2}(\cosh(\alpha t)+1)^{2}(n+2)^{2}\sinh (\alpha t)^{\frac{-6+(-2n-4)\beta}{\beta(n+2)}}\]
\[\phi_{2}(t)=-\frac{\lambda}{2\zeta_{1}-2}\]
\[\phi_{3}(t)=\frac{\zeta_{2}\zeta_{4}}{\zeta_{3}}\]
where
\[\zeta_{2}=e^{\frac{2\left(\beta^{2}\sinh(\alpha t)^{2}(\cosh( \alpha t)^{4}+1)(n+2)^{2}\sinh(\alpha t)\frac{-6+(-4n-8)\beta}{\beta(n+2)}+9a^ {2}\cosh(\alpha t)^{2}(n^{2}+2n+3)\right)\iota}{\beta^{2}(n+2)^{2}\sinh(\alpha t )^{2}}}\]
\[\zeta_{3}=\beta^{2}(n+2)^{2}\sinh(\alpha t)^{4}\Big{(}\iota\kappa e^{\frac{2 \iota\left(\beta^{2}(\cosh(\alpha t)^{4}+1)(n+2)^{2}\sinh(\alpha t)\frac{-6+(- 2n-4)\beta}{\beta(n+2)}+9a^{2}\cosh(\alpha t)^{2}(n^{2}+2n+3)\right)}{\beta^{2 }(n+2)^{2}\sinh(\alpha t)^{2}}}\]
\[-e^{\frac{4\left(\sinh(\alpha t)\frac{-6+(-2n-4)\beta}{\beta(n+2)}\cosh( \alpha t)^{2}\beta+\frac{3a^{2}}{2}\right)}{\beta\sinh(\alpha t)^{2}}}\Big{)}\]
\[\zeta_{4}=\kappa\Big{(}\beta^{2}\iota(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+ 2)^{2}\sinh(\alpha t)^{\frac{-6}{\beta(n+2)}}+\Big{(}\frac{-(n+2)^{2}\beta^{2 }}{2}\]
\[+9\alpha^{2}\iota(n^{2}+2n+3)\Big{)}\cosh(\alpha t)^{2}-3(n+2)^{2}\beta(\iota \alpha^{2}-\frac{\beta}{6})\Big{)}\]
\[\phi_{4}(t)=\frac{-36\alpha^{2}\zeta_{5}\zeta_{2}t^{2}\kappa}{\beta(n+2)\zeta_ {3}}\]
where
\[\zeta_{5}=\beta(\cosh(\alpha t)-1)(\cosh(\alpha t)+1)(n+2)\sinh(\alpha t)^{ \frac{-6}{\beta(n+2)}}+\alpha^{2}\Big{(}(n+2)^{2}\]
\[\beta-3n^{2}-6n-9\Big{)}\cosh(\alpha t)^{2}\]
\[\phi_{5}(t)=\frac{-24\iota^{2}\zeta_{2}\alpha^{2}\kappa\zeta_{6}}{\zeta_{3}}\]
where
\[\zeta_{6}=-\zeta_{8}+\Big{(}\cosh(\alpha t)^{2}+\frac{1}{2}\Big{)}((n+2)^{2} \beta-3n^{2}-6n-9)\alpha^{2}\]
\[\zeta_{8}=\frac{(\cosh(\alpha t)-1)(6\cosh(\alpha t)^{2}+\beta(n+2))(\cosh( \alpha t)+1)\sinh(\alpha t)^{\frac{-6}{\beta(n+2)}}}{2}\]
\[\phi_{6}(t)=\frac{-144\alpha^{2}\iota^{2}\cosh(\alpha t)^{2}\zeta_{1}\zeta_{7} }{(n+2)^{4}\beta^{4}(\zeta_{1}-1)}\]
\[\phi_{7}(t)=\frac{-36n\alpha^{2}\zeta_{5}\zeta_{2}\iota^{2}\kappa}{\beta(n+2) \zeta_{3}}\]
\[\zeta_{7}=\Big{(}\alpha^{2}((n+2)^{2}\beta-3n^{2}-6n-9)\sinh(\alpha t)^{\frac {6}{\beta(n+2)}}-\beta(\cosh(\alpha t)-1)\]
\[(\cosh(\alpha t)+1)(n+2)\Big{)}^{2}\sinh(\alpha t)^{\frac{-12+(-6n-12)\beta}{ \beta(n+2)}}\] |
2305.14927 | Scalable wavelength-multiplexing photonic reservoir computing | Photonic reservoir computing (PRC) is a special hardware recurrent neural
network, which is featured with fast training speed and low training cost. This
work shows a wavelength-multiplexing PRC architecture, taking advantage of the
numerous longitudinal modes in a Fabry-Perot semiconductor laser. These modes
construct connected physical neurons in parallel, while an optical feedback
loop provides interactive virtual neurons in series. We experimentally
demonstrate a four-channel wavelength-multiplexing PRC, which runs four times
faster than the single-channel case. It is proved that the multiplexing PRC
exhibits superior performance on the task of signal equalization in an optical
fiber communication link. Particularly, this scheme is highly scalable owing to
the rich mode resources in Fabry-Perot lasers. | Rui-Qian Li, Yi-Wei Shen, Bao-De Lin, Jingyi Yu, Xuming He, Cheng Wang | 2023-05-24T09:10:06Z | http://arxiv.org/abs/2305.14927v1 | # Scalable wavelength-multiplexing photonic reservoir computing
###### Abstract
Photonic reservoir computing (PRC) is a special hardware recurrent neural network, which is featured with fast training speed and low training cost. This work shows a wavelength-multiplexing PRC architecture, taking advantage of the numerous longitudinal modes in a Fabry-Perot semiconductor laser. These modes construct connected physical neurons in parallel, while an optical feedback loop provides interactive virtual neurons in series. We experimentally demonstrate a four-channel wavelength-multiplexing PRC, which runs four times faster than the single-channel case. It is proved that the multiplexing PRC exhibits superior performance on the task of signal equalization in an optical fiber communication link. Particularly, this scheme is highly scalable owing to the rich mode resources in Fabry-Perot lasers.
## I Introduction
The rapid development of artificial intelligence requires an enormous amount of computational power, which is very challenging for traditional computers based on the von Neumann architecture. Photonic computing is a promising approach to significantly raise the computational power, owing to the fast speed, low latency, and high energy efficiency of light.[2, 3] Photonic reservoir computing (PRC) is a special recurrent neural network, where the neurons are connected with multiple feedback loops.[4, 5] In contrast to common recurrent neural networks, weights in the input layer and in the hidden reservoir layers of PRCs are fixed, while only weights in the readout layer require training. Therefore, the training speed of PRCs is fast and the training cost is low. One implementation approach of PRCs is connecting the physical neurons with optical waveguides on a single chip.[6] The operation speed of this kind PRC is fast, and the clock rate reaches more than 10 GHz.
However, the integration of nonlinear neurons is technically challenging[7] and the scale is limited due to the transmission loss of light in optical waveguides.[8] Another approach is employing a time-delay loop together with one physical neuron to produce a large number of virtual neurons.[9] The time-delay PRC architecture is usually implemented by using a semiconductor laser with an optical feedback loop,[10] or by using an optical modulator with an optoelectronic feedback loop.[11, 12] This time-delay scheme significantly eases the requirement of massive hardware. Nevertheless, the clock rate of the system is inversely proportional to the number of virtual neurons. Consequently, the clock rate of time-delay PRCs is usually limited to tens of MHz.[13, 14] In addition to the above two approaches, there are various other implementation schemes,[15] such as using coupled VCSEL arrays[16] or using a spatial light modulator.[17]
In contrast to electronics, photonics have multiple multiplexing dimensions, including wavelength, polarization, space, orbital angular momentum, etc.[18] In the framework of PRCs, Vatin et al. demonstrated a polarization-multiplexing PRC based on the dual-polarization dynamics of a VCSEL, which could process two tasks in parallel.[19] Sunada and Uchida presented a space-multiplexing PRC based on the complex speckle field in a multimode waveguide.[20] Butschek et al. reported a frequency-multiplexing PRC, where 25 comb lines produced by the phase modulation were used as neurons.[21] In addition, Nguimdo et al. numerically proposed a parallel PRC based on the two directional modes in a ring laser.[22] However, the wavelength division multiplexing (WDM) is the most attractive dimension thanks to the broad optical bandwidth of optoelectronic devices. Indeed, the ITU standard dense WDM grid with 50 GHz spacing includes as many as 80 channels. The WDM dimension has been employed in various photonic computing networks, such as the spiking neural network,[23] the convolutional neural network,[24] and the multilayer perceptron.[25] Surprisingly, the deployment of WDM in PRCs
has been only discussed in simulations [26, 27, 28, 29], to the best of our knowledge.
This work experimentally presents a wavelength-multiplexing PRC, by using the multiple longitudinal modes in a Fabry-Perot (FP) semiconductor laser. All the modes act as physical neurons, which are connected in parallel through the common gain medium. On the other hand, an optical delay loop is used to produce virtual neurons, which are connected in series. We demonstrate that the four-channel PRC runs four times faster than the single-channel case. In addition, the parallel PRC exhibits better performance on the task of signal equalization in an optical fiber communication link.
## II Wavelength-multiplexing scheme and experimental setup
Figure 1 shows the experimental setup for the wavelength-multiplexing PRC architecture. A FP laser with tens of longitudinal modes is used as a slave laser. The modes interact with each other through the common gain medium. All the laser modes are subjected to an optical feedback loop consisting of an optical circulator and two couplers (with ratios of 80:20 and 90:10, respectively). The time-delay loop provides a large number of virtual neurons and constructs the hidden reservoir layer of the PRC [9, 10]. The optical feedback strength of the feedback loop is tuned by an optical attenuator. Four single-mode external-cavity lasers are used as the master lasers, and the wavelength (\(\lambda_{1-4}\)) of each laser is finely tuned to align with one longitudinal mode of the slave laser, respectively. All the master lasers are unidirectionally injected into the slave laser through the optical circulator. The optical injection is operated in the stable locking regime, which is bounded by the Hopf bifurcation and the saddle-node bifurcation [30, 31]. After the power amplification with an Erbium-dope fiber amplifier, the polarization of each master laser is aligned with that of the Mach-Zehnder intensity modulator (EOSPACE, 40 GHz bandwidth) by using a polarization controller, respectively. Then, the polarization of the modulated light is re-adjusted to align with that of the slave laser. Every symbol of the input signal under test is firstly multiplied with a mask, which consists of a random binary sequence of {0, 1}.[32] The mask plays a crucial role in maintaining the transient state of the nonlinear laser system, which is the fundamental requirement of time-delay PRCs. In addition, the duration between each bit of the mask determines the temporal interval of virtual neurons. The preprocessed signal is produced from the arbitrary waveform generator (AWG, Keysight, 25 GHz bandwidth). This radio-frequency signal is amplified before driving the intensity modulator. In this way, the signal at the input layer of the PRC is injected into the slave laser at the hidden reservoir layer for nonlinear processing. The optical spectrum of the output signal is measured by an optical spectrum analyzer (OSA, Yokogawa, 0.02 nm resolution bandwidth). At the output layer, the light is split into two branches by using a 50:50 splitter. Each branch analyzes one longitudinal mode through using a bandpass filter (0.95 nm bandwidth), respectively. The two optical signals are detected in parallel by high-speed photodiodes (PD, 25 GHz bandwidth and 50 GHz bandwidth). After power amplification, the temporal waveforms of both channels are recorded on the digital oscilloscope (OSC, Keysight, 59 GHz bandwidth), simultaneously. The two modes with wavelengths of \(\lambda_{1,2}\) are recorded firstly, while another two modes with wavelengths of \(\lambda_{3,4}\) are tracked in the second-round measurement. It is worthwhile to point out that the four modes can be tracked simultaneously in case a proper wavelength demultiplexer is employed.
In the experiment, the delay time of the optical feedback loop is measured to be about r=65.3 ns. The time interval of the virtual neurons in the reservoir is \(\theta\)=0.05 ns, which is governed by the modulation rate of the modulator at 20 Gbps. The number of virtual neurons is set at \(N\)=80 throughout the experiment. The weights of the output layer in the PRC are
Figure 1: Experimental setup for wavelength-multiplexing PRC. AWG: arbitrary waveform generator; OSA: optical spectrum analyzer; OSC: oscilloscope; PD: photodiode.
trained with the algorithm of ridge regression.[32] The sampling rate of the AWG is set at 60 GSa/s and the rate of the oscilloscope is at 80 GSa/s. For the single channel PRC with only one master laser, the clock cycle of the system is \(T_{c}\)=4.0 ns, which is determined by the formula \(T_{c}\)=\(\Omega\)\(\times\)\(N\). When the WDM scheme with multiple master lasers is employed, the neuron number of each channel is inversely proportional to the channel number \(m\) as \(N\)/\(m\). Consequently, the clock cycle of the system scales down with the channel number as \(T_{c}\)=\(\Omega\)\(\times\)\(N\)/\(m\). For the four-channel PRC in Fig. 1, the clock cycle reduces down to \(T_{c}\)=1.0 ns, which is four times faster than the one-channel case. It is stressed that the clock cycle \(T_{c}\) of the PRC in Fig. 1 is significantly shorter than the delay time \(r\), which is different to the common synchronous time-delay PRCs. Our recent work has proved that this asynchronous architecture is beneficial to improve the performance of PRCs,[33] owing to the off-resonance effect.[34]
## III Experimental results
In the experiment, the slave FP laser exhibits a lasing threshold of \(I_{n}\)=8.0 mA at the operation temperature of 20 \({}^{\circ}\)C. The laser is biased at 3.5\(\times\)\(I_{n}\) with an output power of 3.2 mW, unless stated otherwise. The peak of the optical spectrum in Fig. 2(a) locates around 1545 nm, with a free spectral range of about 1.23 nm. When the slave laser is subject to the optical injection from one master laser in Fig. 2(b), only the injected mode keeps lasing, while other longitudinal modes are suppressed due to the gain reduction of the laser medium.[31] When two master lasers respectively lock two modes in Fig. 2(c), only the two injected modes remain lasing. In the same way, Fig. 2(d) shows that four modes keep lasing when four master lasers inject into the slave laser, simultaneously. It is noted that all the modes in the FP laser interact with each other due to the cross-gain coupling effect, rather than emit independently.[29] Therefore, the four modes in Fig. 2(d) act as connected physical neurons in parallel. In addition, every mode subject to the feedback loop in Fig. 1 produces a reservoir of virtual neurons, respectively. As a result, the four modes in the same gain medium generate four connected reservoirs together with a large number of virtual neurons.
The performance of the wavelength-multiplexing PRC is tested on the task of signal equalization in an optical fiber communication link.[35] The optical signal at the receiver side is distorted due to the effect of chromatic dispersion and the effect of Kerr nonlinearity in optical fibers. The aim of the task is to recover the original signal at the transmitter based on the distorted one at the receiver. It has been widely shown that various digital artificial neural networks could well compensate the linear dispersion and the nonlinear impairment in both intensity modulation and direct detection (IMDD) communication links and coherent communication links.[36] This work takes into account an IMDD link described by the nonlinear Schrodinger equation,[37]
\[\frac{\partial E}{\partial z}+\frac{\alpha}{2}E+j\frac{\beta_{2}}{2}\frac{ \partial^{2}E}{\partial t^{2}}=j\gamma\left|E\right|^{2}E, \tag{1}\]
where \(E\)(\(x\),\(t\)) is the slowly-varying envelop of the electric field in the fiber. The attenuation coefficient of the fiber is \(\alpha\)=0.2 dB/km, the chromatic dispersion coefficient is \(\beta_{2}\)=-21.4 ps\({}^{2}\)/km, and the Kerr nonlinearity coefficient is \(\nu\)=1.2 /(W-km).[38] The signal is modulated by the non-return-to-zero for-mat with random bits of {0, 1}. The modulation rate of the, signal is 25 Gbps, the transmission length is 50 km, and the launch power is 4 mW, unless stated otherwise. 30000 symbols are used to train the PRC, and 15000 symbols are used to test the performance. The performance is quantified by the bit error rate (BER). The measurement is repeated four rounds, and the mean BER and the standard deviation of uncertainty are collected.
We first investigate the performance of the single-channel PRC. Figure 3(a) shows that the BER declines nonlinearly with the injection ratio, owing to the increased signal-to-noise ratio of the PRC system. In addition, increasing
Figure 3: Performance of the single-channel PRC. Effects of (a) the injection ratio \(R_{inj}\) (b) the detuning frequency \(M_{inj}\) (c) the feedback ratio \(R_{inj}\), and (d) the normalized pump current /\(I_{inj}\). The default operation conditions are \(R_{inj}\)=4.0; \(\Delta\)fnj: near Hopf bifurcation; \(R_{inj}\)=30.3 dB; /\(I_{n}\)=3.5.
Figure 2: Optical spectra of the slave FP laser under the operation of (a) free running, (b) one-channel injection, (c) two-channel injection, and (d) four-channel injection. The label of the channels is marked in (d).
the detuning frequency in Fig. 3(b) from the side of the saddle-node bifurcation to the side of Hopf bifurcation reduces the BER. That is, the optimal PRC performance is achieved in the vicinity of the Hopf bifurcation. This is because the positive frequency detuning of optical injection reduces the damping factor of the slave laser, leading to richer dynamics of virtual neurons.[39, 40] Figure 3(c) shows that the BER of the signal is insensitive to the optical feedback ratio. It is remarked that the PRC is always operated in the stable regime of optical feedback. The upper limit of the stable regime is bounded by the critical feedback level, beyond which the slave laser becomes unstable.[30, 41] The critical feedback level of the slave laser without optical injection is measured to be about -19.3 dB. However, our recent work found that the optical injection significantly raised the critical feedback level of the slave laser.[33] Figure 3(d) shows that the BER firstly decreases with increasing pump current, and then saturates around 0.026 when the pump current is larger than 2.0\(\times\)\(l_{m}\). Interestingly, the impacts of the above four operation parameters on the signal equalization task are similar to those on the prediction task of Santa Fe chaos in.[33]
Figure 4 compares the performances between PRCs with different number of channels. It is shown that all the three PRCs are insensitive to the launch power of the transmitted signal as long as the power is less than 5.0 mW. This is because the small launch power does not stimulate strong Kerr nonlinearity, and hence the chromatic dispersion dominates the signal distortion. The average BER of the one-channel PRC (squares) is 0.028. In comparison, the average BER of both the two-channel PRC (triangles) and the four-channel PRC (dots) is 0.024, which is 14% smaller than the one-channel case. That is, the WDM scheme improves the PRC performance on the signal equalization task. This is because the laser mode interaction provides parallel connections of the virtual neurons, in addition to the series connections arising from the optical feedback loop. Besides, we remind that the clock rate (1.0 GHz) of the four-channel PRC is four times faster than that of the one-channel case (0.25 GHz), taking advantage of the WDM architecture. According to Fig. 3(a), the performance of the four-channel PRC can be further improved by raising the injection ratio to 4.0 (like one- and two-channel cases) instead of only 1.0.
In order to investigate the ability of the PRC for the compensation of fiber nonlinearity, we artificially raise the launch power of the transmitted signal up to 50 mW in Fig. 5. It is shown that the BER of the four-channel PRC increases nonlinearly with the launch power from 0.024 at 1.0 mW to 0.064 at 50 mW. In addition, the BER of every channel (open symbols) with a neuron number of 20 rises nonlinearity as well. Obviously, the four-channel PRC with a total neuron number of 80 performs better than each channel, because more neuron dynamics are involved. The performance of the PRC is compared with the feedforward equalizer, which is a transversal filter that linearly combines the received symbol and its neighbors.[35] That is, the feedforward equalizer only compensates the chromatic dispersion effect. Figure 5 shows that the BER of the feedforward equalizer (tap number is 5) increases almost linearly with the launch power from 0.042 at 1.0 mW to 0.085 at 50 mW. In comparison, the PRC exhibits better performance at both low and high launch powers. On one hand, this is because the PRC has fading memory effect
Figure 4: Performances of the wavelength-multiplexing PRCs versus the launch power of the transmitted signal. The feedback ratio is \(R_{ui}\)=30.3 dB and the pump current is \(\langle\)\(l_{m}\)=3.5. The optical injection conditions are: \(R_{ui}\)=4.0, \(\partial\)\(M_{ui}\)= -20.1 GHz for the one-channel case (squares); \(R_{ui}^{+2}\)=4.0, \(\Delta_{ui}^{I}\)= -34.8 GHz, \(\Delta_{ui}^{I}\)= -32.3 GHz for the two-channel case (triangles); \(R_{ui}^{+4}\)=1.0, \(\Delta_{ui}^{I}\)= -31.1 GHz, \(\Delta_{ui}^{I}\)= -23.6 GHz, \(\Delta_{ui}^{J}\)= -36.0 GHz, \(\Delta_{ui}^{J}\)= -77.0 GHz for the four-channel case (dots).
Figure 5: Performance comparison between the four-channel PRC (dots) and the feedforward equalizer (squares) for a broad range of launch power. The open symbols represent the BERs of each channel, respectively.
owing to the nature of recurrent neural networks. Therefore, the PRC can better compensate the distortion of chromatic dispersion. On the other hand, the PRC is a typical nonlinear system and thereby can compensate the distortion of Kerr nonlinearity as well. This comparison result is in agreement with those observed in literatures. [42, 43, 44]
## IV Discussion
In the above experiment, the delay time of the feedback loop is fixed at r=65.3 ns without any optimization. Although optimization of the PRC performance is beyond the scope of this work, this section discusses the effect of delay time in simulation so as to provide some insight on future experiment design. We simulate the PRC using the model described in. [29] The slave laser is described by the rate equation approach, which takes into account the dynamics of the carriers, the photon, and the phase of the electric field. The optical feedback effect and the optical injection effect are described by the classical Lang-Kobayashi model. [45, 46] In the simulation, the neuron number of the single channel PRC is set at 80. The neuron interval is set at 0.01 ns, and hence the clock cycle is r=0.8 ns. The simulated BER in Fig. 6 firstly goes down with the normalized delay time starting from \(\sqrt{t}\)/\(c\)=0.1. The optimal PRC performance is achieved within the normalized time range of 0.5 to 2.0, and the best BER is around 0.010. In comparison, the optimal time range for the prediction task of Santa Fe chaos is from 2.0 to 4.0. [32] Interestingly, the BER jumps up to 0.011 at \(\sqrt{t}\)/\(c\)=1.0, where the delay time is synchronous with the clock cycle. The performance degradation is attributed to the detrimental resonance effect. [33, 34] For \(\sqrt{t}\)/\(c\)>2.0, the BER almost rises linearly with increasing delay time, and its value reaches 0.016 at \(\sqrt{t}\)/\(c\)=9.6. In the experimental setup of Fig. 1, nevertheless, the feedback delay time is more than an order of magnitude than the clock cycle, which is far away from the optimal value. Consequently, future experiment requires an optimization of the delay time to achieve the best signal equalization performance.
## V Conclusion
In summary, we have experimentally demonstrated a wavelength-multiplexing PRC based on the numerous longitudinal modes in a FP laser. The modes play the role of connected physical neurons in parallel. Meanwhile, an optical feedback loop produces virtual neurons, which are connected in series through the time-multiplexing effect. It is shown that the four-channel PRC runs four times faster than the single-channel case, and the clock rate reaches up to 1.0 GHz. It is found that the four-channel PRC exhibits superior performance on the signal equalization task, owing to the interaction of neurons both in parallel and in series. The proposed WDM scheme is highly scalable owing to the rich mode resources in FP lasers. Future work will scale up the number of WDM channels and further raise the clock rate of the PRC.
###### Acknowledgements.
This work was funded by the Shanghai Natural Science Foundation (No. 20ZR1436500).
## Author declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author contributions
R.Q.L. and Y.W.S. contributed equally to this work.
### Rui-Qian Li
Data curation (equal); Investigation (equal); Validation (equal); Visualization (equal); Writing-original draft (equal). **Yi-Wei Shen**: Data curation (equal); Investigation (equal); Validation (equal); Visualization (equal); Writing-original draft (equal). **Bao-De Lin**: Data curation (equal); Investigation (equal); Validation (equal); Visualization (equal); Writing-original draft (equal). **Jingyi Yu**: Methodology (equal); Project administration (equal); Supervision (equal); Writing - review & editing (equal). **Xuming He**: Methodology (equal); Project administration (equal); Supervision (equal); Writing - review & editing (equal). **Cheng Wang**: Methodology (lead); Project administration (lead); Supervision (equal); Funding acquisition (lead); Writing - review & editing (lead).
## Data availability
The data that support the findings of this study are openly available in
[https://zenodo.org/record/7961785#ZGx](https://zenodo.org/record/7961785#ZGx) FXZByHu
Figure 6: Simulated effect of the normalized delay time \(\sqrt{t}\)/\(c\) on the PRC performance. |
2303.16266 | On-line reinforcement learning for optimization of real-life energy
trading strategy | An increasing share of energy is produced from renewable sources by many
small producers. The efficiency of those sources is volatile and, to some
extent, random, exacerbating the problem of energy market balancing. In many
countries, this balancing is done on the day-ahead (DA) energy markets. This
paper considers automated trading on the DA energy market by a medium-sized
prosumer. We model this activity as a Markov Decision Process and formalize a
framework in which an applicable in real-life strategy can be optimized with
off-line data. We design a trading strategy that is fed with the available
environmental information that can impact future prices, including weather
forecasts. We use state-of-the-art reinforcement learning (RL) algorithms to
optimize this strategy. For comparison, we also synthesize simple parametric
trading strategies and optimize them with an evolutionary algorithm. Results
show that our RL-based strategy generates the highest market profits. | Łukasz Lepak, Paweł Wawrzyński | 2023-03-28T19:27:02Z | http://arxiv.org/abs/2303.16266v3 | # Reinforcement learning for optimization of energy trading strategy
###### Abstract
An increasing part of energy is produced from renewable sources by a large number of small producers. The efficiency of these sources is volatile and, to some extent, random, exacerbating the energy market balance problem. In many countries, that balancing is performed on day-ahead (DA) energy markets. In this paper, we consider automated trading on a DA energy market by a medium size prosumer. We model this activity as a Markov Decision Process and formalize a framework in which a ready-to-use strategy can be optimized with real-life data. We synthesize parametric trading strategies and optimize them with an evolutionary algorithm. We also use state-of-the-art reinforcement learning algorithms to optimize a black-box trading strategy fed with available information from the environment that can impact future prices.
Reinforcement learning, Automated trading, Energy market
## 1 Introduction
In the year 2021, 6.54% and 3.63% of global electricity was produced by wind turbines and solar panels, respectively, after these ratios doubled in 5 preceding years [14]. The power of wind and sunlight reaching the Earth's surface is, to some extent, random. Therefore, while the rise of renewable energy sources presents the prospect of cheap and clean energy, it also exacerbates the problem of balancing power supply and demand.
In many countries, the main institution that balances volatile electricity supply and demand is a day-ahead energy market. Every day, agents participating in this market place their buy and sell bids separately for every hour between 0 am and 11 pm the next day. Market clearing prices are then designated for each of these hours, and the bids are consequently executed or not, depending on the proposed prices.
In this paper, we consider an energy prosumer, who is an agent that (i) consumes electricity, (ii) produces electricity, and (iii) has electricity storage. What is of our interest here is a strategy for automated trading on a day-ahead energy market on behalf of this agent.
In most studies, decision-making in power systems is based solely on the state of this system. We argue that (i) a useful strategy for operation in the power system needs to be fed with data on the environment and (ii) it needs to be optimized with real-life data. Firstly, reasonable temporal energy allocation needs to be based on the information that makes it possible to anticipate future prices (even if they are not directly predicted). Therefore, the strategy needs to be based on such information. Secondly, the environment that impacts the energy prices (e.g., weather conditions) has its own temporal dynamics that are hardly possible to model but can be replayed from real-life data, which is enough for strategy optimization.
Based on the above line of thought, this paper contributes as follows:
* We formalize a framework in which bidding on a day-ahead energy market is a Markov Decision Process, on which behavior can be optimized with real-life data rather than in model-based simulations or with real-life trial-and-error.
* We design a parametric strategy of automated bidding, which is fed with available information that makes it possible to anticipate future prices.
* We apply reinforcement learning to optimize the above strategy.
The rest of the paper is organized as follows: Section 2 defines the problem at hand. Section 3 reviews related work. Section 4 contains the main contribution of this paper: the model of the market, the strategies of automated trading, and the methods of their optimization. Section 5 presents the results of simulation experiments. The last section concludes the paper.
## 2 Problem definition
### Day-ahead energy market
Details of the day-ahead (DA) energy market are here taken from the Polish market of this kind. When created in 2000, this market was modeled on existing day-ahead energy markets in Western Europe. It is, therefore, typical.
Every day between 8 am and 10.30 am, an agent participating in the market places a set of bids defined by: (i) [buy or sell] indicator, (ii) price for 1 MWh [PLN], (iii) volume
[number of MWh, at least 0.1 MWh], and (iv) an hour of realization [one of 24 between 0 am and 11 pm the next day]. The bids are independent. Based on the bids placed by all agents, the clearing market price for each hour is designated. A buy bid is accepted when its price is not below the market price for its hour. A sell bid is accepted when its price is not above the market price for its hour. At each hour of the next day, the agents that realize their sell bids inject the declared volume of electricity into the system and get the market price for it. The agents that realize their buy bids withdraw the declared volume of electricity from the system and pay the market price for it.
In order to participate in the day-ahead energy market in Poland, every agent has to apply to become a member of this market and pay 2 000 PLN initial fee. Then, market members must pay 2 000 PLN a year to maintain their member status. Each member may choose one of two options of membership. Option 1: A participant pays a yearly participation fee equal to 50 000 PLN and 0.08 PLN for each MWh traded, making it well-suited for agents with high turnover. Option 2: A participant pays a yearly participation fee equal to 1 000 PLN and 0.45 PLN for each MWh traded, which may be better for agents making small or occasional bids.
### Prosumer
The agent considered here (i) consumes electricity at random but with a given statistical profile, (ii) produces electricity with means of limited random efficiency, such as a solar panel or wind turbines, (iii) has energy storage with limited capacity and efficiency (it outputs less energy than it inputs). We also assume that the prosumer is large enough to be able to participate in a DA energy market and not large enough for its bids to change the market prices.
At every hour, the agent may consume, produce, buy and sell some energy. The residual energy is deposited into or taken from the energy storage. If some fraction of the residuum still remains because of the storage being full or empty, this portion is given to or taken from the market operator, and the agent is charged the corresponding penalty fee.
An example of a prosumer considered here is a group (or an aggregator) of households. It cannot be a single household, though, as the minimum volume of electricity tradeable on the market is 0.1 MWh, which is too much for a typical single household to consume or produce.
The objective of the prosumer is to maximize its profit (or minimize its costs) by issuing optimal bids on a DA market. Essentially, the agent should buy the energy when its market price is relatively low, keep it in storage, and/or sell it when the market price is relatively high. The agent should also avoid paying penalty fees, thus avoiding having the storage charged or discharged entirely. Note that the problem does not quintessentially change when the prosumer does not produce nor consume electricity because then it becomes a temporal arbitrator and its profit still non-trivially depends on the strategy of issuing the buy/sell bids. However, if the prosumer does not have the storage, then events at different times are independent of each other, and the objective degenerates to just predicting the prosumer's own production and consumption.
## 3 Related Work
Automated trading on the electricity market.Bidding on a one-day ahead (DA) energy market was presented as an optimization problem by Lamont and Rajan [1997]. Wen and David [2001] introduced a catalog of parametric bidding strategies and optimized their parameters with a genetic algorithm. Attaviriyanupap _et al._[2005] proposed other strategies and applied evolutionary programming to optimize them. Bakirtzis _et al._[2007] analyze selling on a DA energy market from a producer point of view and optimize his bidding strategy with mixed integer linear programming. Bidding strategies applicable by power producers at a DA energy market were reviewed by Kwon and Frances [2012]. Liu _et al._[2015] analyze a microgrid that produces, stores, consumes energy and buys/sells it on a DA market. The authors use hybrid stochastic/robust optimization and predictions of prices and wind to optimize the bidding strategy. Rahimiyan and Baringo [2015] analyze a microgrid as above, but the strategy they develop also covers bidding on a real-time (RT) energy market. Iria _et al._[2017] apply stochastic optimization to designate a strategy of bidding at a DA energy market by an aggregator of prosumers. It is assumed there that the prosumers do not have any batteries but have access to a real-time (RT) energy market. Iria and Soares [2019] further extend the above work with a clustering of the aggregated prosumers. More general issues related to energy markets, microgrids, and bidding strategies are analyzed in [Prabavathi and Gnanadass, 2015; Zhang _et al._, 2016].
Automated trading on an energy market is a complex activity that can be modeled as a parametric transformation of available information into action. The parameters of this transformation can be determined with usual optimization techniques such as evolutionary algorithms. However, as the more complex behavior is expected and the more complex transformation is required, the less effective these techniques become. An approach specialized for the optimization of complex behavior is reinforcement learning.
Reinforcement learning and the electricity market.With the advent of electricity prosumers, energy micro-grids, and flexible price-driven energy consumption, there is an increasing need for automated decision-making and control in various activities undertaken by the energy market participants. Strategies for these agents can be optimized with reinforcement learning (RL). Various applications of RL in power systems are reviewed in [Jogunola _et al._, 2020; Yeng _et al._, 2020; Perera and Kamalaruban, 2021]. Nanduri and Das [2007] analyze bidding on a DA energy market as a zero-sum stochastic game played by energy producers willing to exercise their market power and keep their generators productive. RL is used there to optimize their bidding strategy. Vandael _et al._[2015] analyze bidding on a DA energy market from the point of view of a flexible buyer (who charges a fleet of electric vehicles). His strategy is optimized with RL. A number of papers is devoted to peer-to-peer trading with electricity on a local, event-driven energy market, with RL applied to optimize the behavior of such peers [Chen and Su, 2018, 2018, 2021, 2021]. Wang and Zhang [2018] use RL to develop a strategy of temporal ar
bitrage for an agent that operates on a real-time energy market with an energy storage. Lu _et al._[2019] use RL and neural price predictions to optimize the scheduling of home appliances of private users. The authors assume that the electricity prices are changing and are known one hour ahead. Bose _et al._[2021] analyze a similar setting in which the users also trade energy with each other. Qiu _et al._[2021] optimize the user strategies in this setting with multi-agent RL. Dong _et al._[2021] use RL to optimize a strategy of bidding on a DA energy market by a battery energy storage system (BESS). The authors address the dynamics of that process only to a limited extent. Firstly, the criterion of policy optimization is on-day-ahead profit instead of a long-term profit. Secondly, no information on the environment that could impact future prices is considered, e.g., weather conditions.
Dong _et al._[2021] considers simultaneous trading on a DA and hour-ahead energy markets by an energy storage operator as a Markov Decision Process. In this MDP, consecutive days are separate episodes, so between-days dynamics of the market are not accounted for. Discrete actions define the parameters of the bids. They are not based on external observations such as weather forecasts. In the current paper, we take into account the between-days dynamics, continuous parameters of the bids, and weather forecasts. These all lead to significantly better performance of our proposed strategy.
## 4 Model
### Markov Decision Process
In this section, we model the automated trading on a day-ahead energy market as a Markov Decision Process (MDP) [14]. This MDP includes the following components:
* Time, \(t=1,2,\ldots\). Here time instants denote days.
* Actions, \(a_{t}\in\mathbb{A}\). An action is a set of bids in the form \[\langle volume,price,type,hour\rangle,\] (1) where \(type\in\{\text{Sell},\text{Buy}\}\), \(hour\in\{0\text{ am},1\text{ am},\ldots,11\text{ pm}\}\).
* Reward, \(r_{t}\in\mathbb{R}\) is equal to the profit collected during the day.
* States of the environment, \(s_{t}\in\mathbb{S}\). A state here is a vector that encompasses all the information about the surrounding world that may influence the market prices of electricity and the volume of its production and consumption by the prosumer. Here we divide the coordinates of the state into _uncontrollable_, \(s_{t}^{u}\), and _controllable_, \(s_{t}^{c}\), \(s_{t}=\langle s_{t}^{u},s_{t}^{c}\rangle\). The agent does not influence the uncontrollable state coordinates; they may include an indicator of the day within the week, an indicator of the month within the year, and weather forecasts. These state coordinates evolve according to a stationary conditional probability: \[s_{t+1}^{u}\sim P(\cdot|s_{t}^{u}).\] (2) The controllable state coordinates are directly determined by the actions taken, and the uncontrollable state coordinates, that is \[s_{t+1}^{c}=f(s_{t}^{c},a_{t},s_{t}^{u},s_{t+1}^{u}),\] (3) where \(f\) is known. Here there is only one controllable state coordinate: the storage level. The \(f\) function is known because the storage level trivially results from consuming, producing, buying, and selling energy.
The assumptions that (i) the prosumer is small enough not to impact the market prices, (ii) the uncontrollable states changes according to the stationary rule (2), (iii) the controllable state evolves according to a known transition function (3) have the following implication: Based on a recorder trajectory of uncontrollable states, \((s_{t}^{u}:t=1,\ldots,T)\), we can designate a strategy of selecting actions \(a_{t}\) based on states \(s_{t}\) and evaluate this strategy in a simulation based on \((s_{t}^{u}:t=1,\ldots,T)\). This valuation will be an unbiased estimate of the performance of this strategy deployed in reality.
Note that the above-defined division of state variables into controllable and uncontrollable is unusual. In a typical MDP, we assume that the state changes according to
\[s_{t+1}\sim P_{s}(\cdot|s_{t},a_{t}), \tag{4}\]
where the conditional probability \(P_{s}\) is quite difficult to analyze and estimate. Therefore, a strategy of choosing actions cannot be evaluated with no bias within a simulation based on a model of \(P_{s}\).
### Designing a strategy
In general, by a _strategy_, \(\pi\), we understand a probability distribution of actions, \(a_{t}\), conditioned on states, \(s_{t}\):
\[a_{t}\sim\pi(\cdot|s_{t}). \tag{5}\]
In some cases, \(\pi\) will be a single point distribution, thereby being a function.
Let us denote by \(l_{t}\in(0,1)\) the storage level at midnight when the bids defined in action \(a_{t}\) start to be realized. The action \(a_{t}\) is selected at 10.30 am on a preceding day. At this moment, \(l_{t}\) is unknown. However, it is known which of the bids placed with \(a_{t-1}\) have been and will be realized. Therefore, \(l_{t}\) can be estimated with a reasonable accuracy. We will denote this estimate by \(\widehat{l}_{t}\).
Timing-based strategy (Timing).A simple strategy may be based on an observation that the market prices are generally low between 0 am - 3 am and high between 5 pm - 8 pm. That leads to actions comprising the following eight bids:
\[\begin{split}&\langle(\alpha_{1}-\alpha_{2}\widehat{l}_{t})/4,+ \infty,\text{Buy},0\text{ am}\rangle\\ &\langle(\alpha_{1}-\alpha_{2}\widehat{l}_{t})/4,+\infty,\text{ Buy},1\text{ am}\rangle\\ &\langle(\alpha_{1}-\alpha_{2}\widehat{l}_{t})/4,+\infty,\text{ Buy},2\text{ am}\rangle\\ &\langle(\alpha_{1}-\alpha_{2}\widehat{l}_{t})/4,+\infty,\text{ Buy},3\text{ am}\rangle\\ &\langle(\alpha_{1}+\alpha_{2}\widehat{l}_{t})/4,0,\text{Sell},5 \text{ pm}\rangle\\ &\langle(\alpha_{1}+\alpha_{2}\widehat{l}_{t})/4,0,\text{Sell},6 \text{ pm}\rangle\\ &\langle(\alpha_{1}+\alpha_{2}\widehat{l}_{t})/4,0,\text{Sell},7 \text{ pm}\rangle\\ &\langle(\alpha_{1}+\alpha_{2}\widehat{l}_{t})/4,0,\text{Sell},8 \text{ pm}\rangle\end{split} \tag{6}\]
where \(\alpha_{1},\alpha_{2}\) are positive coefficients. The term \(\pm\alpha_{2}\widehat{l}_{t}\) results from the fact that the more we have in the storage, the less we want to buy, and the more we want to sell. The prices (\(+\infty\) and \(0\)) are defined to ensure that the bids will be accepted.
Opportunistic strategy (Opportunistic).Another strategy is based on the observation that the prices generally vary, and the best thing to do is to buy when the price is relatively low and sell when it is relatively high while considering the battery level and production capabilities. That leads to the strategy in which, for each hour \(h\), there is a pair of bids:
\[\langle\bar{v}\exp(\alpha_{4h+5}+\alpha_{1}\widehat{l}_{t}),\bar{p} \exp(\alpha_{4h+7}+\alpha_{3}\widehat{l}_{t}),\text{Buy},h\rangle \tag{7}\] \[\langle\bar{v}\exp(\alpha_{4h+6}+\alpha_{2}\widehat{l}_{t}),\bar {p}\exp(\alpha_{4h+8}+\alpha_{4}\widehat{l}_{t}),\text{Sell},h\rangle,\]
where \(h=0,1,\ldots,23\), \(\bar{p}\) is the market energy price averaged over every hour and the preceding 30 days, \(\bar{v}\) is the maximum possible energy generation volume defined by solar and wind installations, and \(\alpha_{i}\) are coefficients. Here we try to sell/buy varied volumes of energy based on our production capabilities and battery level at different prices. These prices and volumes are to be optimized with respect to the profit this strategy yields.
### Optimization of strategy
Given real data (\(s_{t}^{u}:t=1,\ldots,T\)), we optimize a parametric strategy such as (6) or (7) using a gradient-free optimization method. In this approach, we need to be able to evaluate the strategy for any given vector of parameters. Here, an evaluation is a simulation of events over time \(t=1,\ldots,T\) with the real data, given strategy in use, and calculating the resulting profit.
### Black-box strategy and its optimization with reinforcement learning
We design a black-box strategy as a set of 24 pairs of bids
\[\langle\bar{v}\exp(v_{h}^{B}),\bar{p}\exp(y_{h}^{B}),\text{Buy},h\rangle \tag{8}\] \[\langle\bar{v}\exp(v_{h}^{S}),\bar{p}\exp(y_{h}^{S}),\text{Sell},h\rangle,\]
for \(h=0\) am, \(1\) am, \(\ldots,11\) pm. The numbers \(v_{h}^{B},y_{h}^{B},v_{h}^{S},y_{h}^{S}\) are produced as a sum of the output of a zero-mean normal noise, \(\xi_{t}\), and a neural network, \(g\), output:
\[\begin{bmatrix}v_{0am}^{B}&\ldots&v_{11pm}^{B}\\ y_{0am}^{B}&\ldots&y_{11pm}^{B}\\ v_{0am}^{S}&\ldots&v_{11pm}^{S}\\ y_{0am}^{S}&\ldots&y_{11pm}^{S}\end{bmatrix}=\xi_{t}+g(s_{t};\theta),\ \xi_{t}\sim\mathcal{N}(0,\Sigma). \tag{9}\]
The network \(g\) is fed with the state \(s_{t}\) and parameterized by the vector \(\theta\) of trained weights.
The reason to introduce the noise \(\xi_{t}\) into the bids is exploration: By taking different actions under similar circumstances, the trading agent is able to learn to tell good actions from the inferior ones in the current state.
In order to optimize the strategy (9), we may use any algorithm of on-line reinforcement learning [21] e.g., A2C [17], PPO [11], TD3 [12] or SAC [1]. A training consists of a sequence of simulated trials in which the trajectory of uncontrollable states (\(s_{t}^{u}:t=1,\ldots,T\)) is just replayed from the data, and the corresponding trajectory of controllable states (\(s_{t}^{c}:t=1,\ldots,T\)) is designated based on the uncontrollable states, the actions selected and the function \(f\) (3).
## 5 Experimental study
In this section, we demonstrate the effectiveness of our proposed black-box strategy optimized with reinforcement learning. We compare it to the parametric strategies optimized with the gradient-free CMA-ES algorithm [16] and the strategy optimized with the FARL algorithm [14].
### Testing environment
Our experiments are conducted using a custom environment simulating day-ahead energy market operations (EngMar in short) based on real-life data from the Polish market. This environment allows for customization of various market settings, such as a bid creation time, a scale of the bidding prosumer (defined by the number of households), or its solar and wind energy generation capabilities. The environment is based on the OpenAI Gym environment interface, making it compatible with many reinforcement learning libraries, including Stable-Baselines3 [14], which we use as our source of RL algorithms.
In our experiments, we use real historical data from the following sources:1
Footnote 1: We will make both our framework and the data available upon acceptance of the paper.
* Polish day-ahead energy market (Fixing I).
* Polish Meteorology and Water Management Institute.
* Polish Central Statistical Office.
As there are no publicly available historical weather forecast datasets for Poland, we generate one by noising actual weather data. For each day in the actual data, we start at 10 am of the previous day. For every hour from 11 am of the previous day to 11 pm of the currently forecasted day, we generate the forecasts as follows:
\[\epsilon_{t}\sim\mathcal{N}(0,\frac{\sigma^{2}}{24}) \tag{10}\] \[d_{t}=\sum_{i=1}^{t}\epsilon_{t}\] \[x_{t}^{forecast}=x_{t}^{actual}+d_{t}\]
where \(\sigma\) is an accuracy of a 24-hour forecast, \(d_{t}\) is a deviation for index \(t\) and \(x_{t}^{actual},x_{t}^{forecast}\) are actual and forecasted weather for index \(t\), respectively. For cloudiness, we assume \(\sigma=2\) Oktas, and for wind speed, we assume \(\sigma=1\) m/s. Here, \(t=0\) denotes 10 am of the previous day, and we are interested in \(t\in[14,37]\) - next-day forecasts. Cloudiness forecasts are clipped to be integer values at least zero, while wind speed forecasts are clipped to be at least zero.
We also test the RL agent without the weather forecast data included in the observations. We do this to check if the weather forecasts allow the agent to define better bids, as this information impact future energy production, consumption, and prices.
Common environment settings used in our experiments are depicted in Table 1. We set the action scheduling time to match the Polish day-ahead energy market. Battery and solar panel efficiencies reflect the efficiencies of real-life batteries and solar panels. Wind energy and solar energy limits are tuned so that daily energy production in the environment averages around 1 MWh. The number of households is set to 100 to scale the simulation to represent a medium-sized prosumer, an aggregator, or a small energy generation facility.
Energy consumption for the given hour (\(E_{c}^{h}\)) is calculated as follows:
\[E_{c}^{h}=n\cdot E_{c\_avg}^{h}\cdot|1+\rho| \tag{11}\]
where \(E_{c\_avg}^{h}\) is the average energy consumption per one household for the given hour, \(n\) is the number of households, and \(\rho\sim\mathcal{N}(0,0.03)\) allows the resulting energy consumption to differ each day while maintaining the average value. Equation (11) is prepared so that it scales well with the changing number of households.
Solar energy production for the given hour (\(E_{s}^{h}\)) is based on cloudiness value from the actual weather data and is calculated as follows:
\[E_{s}^{h}=s_{max}\cdot(1-\frac{c}{8})\cdot\eta \tag{12}\]
where \(s_{max}\) is the maximum solar energy generation, \(c\in\{0,1,...,7,8\}\) is the cloudiness value in Oktas (0 - clear sky, 8 - heavy overcast) taken from the weather data and \(\eta\) is the solar panel efficiency.
Wind energy production for the given hour (\(E_{w}^{h}\)) is based on the actual wind speed value from the weather data and is calculated as follows:
\[E_{w}^{h}=w_{max}\cdot\frac{ws}{ws_{max}}\cdot(ws\leq ws_{max}) \tag{13}\]
where \(w_{max}\) is the maximum wind energy generation, \(ws\) is the wind speed, \(ws_{max}\) is the maximum wind speed for which the wind turbines are still operational, and \((ws\leq ws_{max})=1\) when \(ws\leq ws_{max}\), else \(0\).
During the simulation, it may turn out that the agent has to buy missing energy or sell excess energy immediately. In that case, he is being penalized for such events. Immediate buying is realized for double the current market price, and immediate selling is realized for half the current market price so that the agent has the incentive to better plan his bids instead of relying on instant buys or sells. Also, we do not include market entry and transaction fees, as they are fixed costs independent of the bidding strategy.
### Experiments
Evolutionary algorithmThe evolutionary algorithm CMA-ES is used to optimize strategies defined by Equations (6) and (7). It utilizes data from 2016 to 2018 as the training set and data from 2019 as the testing set. After training, the resulting parameters (mean values) are evaluated on a single testing interval 365 days long. Table 2 presents the parameters used for the CMA-ES algorithm. Customized initialization for parameters \(\alpha_{4h+5},\alpha_{4h+6}\) of Opportunistic strategy as defined in Equation (7) prevents the initial samples from creating bids with too high volumes, which leads the strategy to the inefficient solution of creating no bids at all.
Reinforcement learningReinforcement learning is used to optimize the strategy defined in (8). It utilizes data from 2016 to the third quarter of 2018 as the training set, data from the fourth quarter of 2018 as the validation set, and data from 2019 as the testing set. The training is done on random intervals from the training set 90 days long, generated randomly. Periodically, evaluation is done on a single validation interval 90 days long. After the training timesteps budget is depleted, the model for which the highest reward on validation interval was achieved is evaluated on the single testing interval 365 days long. We use the A2C algorithm [10] to optimize the strategy of the RL agent. Parameters used for the A2C algorithm are presented in the supplementary material.
The action space is limited to range the \([-3,3]\), which allows the agent to define prices and volumes up to \(e^{3}\approx 20\) times smaller/larger than the 30-day average price and maximum possible energy generation volume, respectively.
The observation of the environment's state (117 values) is passed to the agent at bid placing time and contains the following information:
* these prices result from the bids created the day before; the agent does not know energy prices for the bid currently submitted.
* this is statistical data about consumption, the actual consumption data is designated according to (11).
* current relative battery charge (1 value).
* estimated relative battery charge at midnight (1 value).
* one-hot encoded information about the current month (12 values).
* one-hot encoded information about the current day of the week (7 values).
\begin{table}
\begin{tabular}{l|c} Action scheduling time & 10.30 am \\ \hline Battery capacity & 2 MWh \\ Battery efficiency & 85\% \\ \hline Maximum solar energy generation & 0.4 MWh \\ Solar panel efficiency & 20\% \\ Maximum wind energy generation & 0.05 MWh \\ Maximum wind speed for which & \\ wind turbines are still operational & 11 m/s \\ Number of households & 100 \\ \end{tabular}
\end{table}
Table 1: Parameters of the EngMar environment used for experiments.
\begin{table}
\begin{tabular}{l|c} Initial mean value (\(\mu\)) & default: \(\mathcal{N}(0,1)\) \\ Initial sigma (\(\sigma\)) & (7), \(\alpha_{4h+5},\alpha_{4h+6}\): \(\mathcal{N}(-2,1)\) \\ \hline Population size & 1 \\ Generations & 100 \\ \end{tabular}
\end{table}
Table 2: Parameters of the CMA-ES algorithm used for the experiments. \(n\) is the number of parameters in the strategy.
* cloudiness and wind speed forecasts for each hour of the next day (48 values).
For comparison, we also applied the FARL algorithm [14], which is a conceptually different approach to optimize a black-box bidding strategy. We fed FARL with the same training, evaluation, and test data discussed above. Parameters and details of FARL are presented in the supplementary material.
### Results
Results of experiments are shown in Figures 1, 2, and in the supplementary material. We define the reference balance of the given day as the difference between the energy produced and consumed multiplied by this day's average energy price. We then calculate the sum of these reference balances during the whole simulation. Note that it is moderately difficult to achieve the reference balance. The agent mostly consumes the energy in the evenings, when it is expensive, and produces it when its price is average. Therefore, the agent needs to ensure it buys little energy when it needs it to reach the reference balance.
Figure 1 presents balance changes during test simulations, averaged over 5 test runs on different random seeds. Streaks around graphs represent the range of balances achieved by the given strategy. We can see that the black-box strategies optimized with the A2C algorithm achieve the best return on the test simulation, beating our reference balance and the strategies optimized with the CMA-ES and FARL algorithms. Also, the A2C-trained strategy utilizing weather forecasts as part of its observations is able to achieve higher returns than the A2C-trained strategy without those observations.
In Figures 2, we look into five days from the middle of the testing simulations. For battery levels plots, we show the average relative battery charge with streaks around graphs indicating the range of levels from all testing simulations, while the other plots were taken from a chosen testing simulation. There is an unscaled market price graph on the bid volumes plot, which allows for easy identification of whether the successful bid was realized when the price was high or low.
It is seen in Figure 2 that the strategy trained with RL behaves reasonably: It charges the battery at about 0 am when the prices are low and discharges at about 8 am when they are high. Unscheduled purchases/sells, which are costly, are rare.
The simple Timing and Opportunistic strategies behave reasonably (we discuss them in more detail in the supplementary material). However, due to their simplicity, they are unable to represent sufficiently complex behavior to respond efficiently to diverse circumstances.
The bidding strategy developed by the FARL algorithm barely exhibited reasonable behavior. This algorithm is based on _Q-Learning_ with function approximation applied in a rather non-standard way: It learns to make a sequence of \(24\) bids, each time only having access to the previous bidding and several other variables. Reasonable bidding based on that information was not possible.
### Battery capacity optimization
The battery is often the largest part of the prosumer installation cost. Our proposed approach can be readily used to
Figure 1: Wallet balances achieved on testing data by different strategies. A2Cw denotes the results of the black-box strategy optimized with the A2C algorithm with weather forecasts included in observations, A2Cnw - without weather forecasts. Numbers in the FARL algorithm denote the number of possible discrete actions.
choose the battery from the possible options. It is enough to perform the strategy optimization for each option and compare the incomes with the battery costs.
Table 3 presents the income gained with the strategy optimized with the A2C algorithm with weather forecasts as input, depending on the battery capacity. A figure with wallet balance changes in testing simulations and additional comments are available in the supplementary material. It is seen that the larger the battery capacity, the larger the income.
### Discussion
The optimal bidding strategy among those analyzed here is based on neural networks trained with reinforcement learning and fed with weather forecasts. Weather impacts the production of energy (e.g., by wind turbines), its consumption (e.g., by air conditioners), and thus its prices. Consequently, optimal bids need to be based on these forecasts. We have tried several RL algorithms. A2C yielded the best performance. The algorithm that was especially disappointing was SAC [1]. This algorithm is based on the action-value function with the action (bid parameters in this case) having 96 dimensions. Under these circumstances, the action-value function was impossible to approximate with sufficient accuracy, hence poor performance.
Parametric strategies with parameters optimized with the CMA-ES evolutionary algorithm behaved worse than those based on neural networks. Even though they were not very complex, their globally optimal parameters were difficult to find for CMA-ES. One can come up with even more elaborate strategies than (7) (and in fact, we have), but then this strategy will have even more parameters, and their optimal values will be even more difficult to find for any gradient-free optimization algorithm.
The bidding strategy learned with the FARL algorithm delivered disappointing results, even worse than those achieved with optimized parametric strategies. Its management of available information proved insufficient to effectively map available observations into actions.
## 6 Conclusions
In this paper, we proposed a framework for optimization of bidding strategy on a day-ahead energy market based on simulations and real-life data. We have optimized two parametric strategies with the state-of-the-art evolutionary algorithm CMA-ES. We have also used reinforcement learning to optimize two strategies that produced bids for this market. One of them was fed weather forecasts, and the other was not. The strategy fed with weather forecasts produced the highest financial return.
\begin{table}
\begin{tabular}{l|c} \hline Maximum battery capacity (MWh) & Achieved income \\ \hline
1.0 & 39937.62 \(\pm\) 1723.72 \\
1.5 & 48245.24 \(\pm\) 1182.34 \\
2.0 & 53107.61 \(\pm\) 560.78 \\
3.0 & 54770.32 \(\pm\) 4760.21 \\ \end{tabular}
\end{table}
Table 3: Incomes achieved for different maximum battery capacities with A2C-optimized strategy with weather forecasts. Incomes are averaged over 5 test runs with the same random seeds and are provided with their respective standard deviations.
Figure 2: Plots for the black-box strategy trained with the A2C algorithm. _Left-top:_ Battery level. _Right-top:_ Unscheduled energy buying/selling. _Left-bottom:_ Bid volumes. _Right-bottom:_ Bid prices. |
2303.09346 | Tactile-Driven Gentle Grasping for Human-Robot Collaborative Tasks | This paper presents a control scheme for force sensitive, gentle grasping
with a Pisa/IIT anthropomorphic SoftHand equipped with a miniaturised version
of the TacTip optical tactile sensor on all five fingertips. The tactile
sensors provide high-resolution information about a grasp and how the fingers
interact with held objects. We first describe a series of hardware developments
for performing asynchronous sensor data acquisition and processing, resulting
in a fast control loop sufficient for real-time grasp control. We then develop
a novel grasp controller that uses tactile feedback from all five fingertip
sensors simultaneously to gently and stably grasp 43 objects of varying
geometry and stiffness, which is then applied to a human-to-robot handover
task. These developments open the door to more advanced manipulation with
underactuated hands via fast reflexive control using high-resolution tactile
sensing. | Christopher J. Ford, Haoran Li, John Lloyd, Manuel G. Catalano, Matteo Bianchi, Efi Psomopoulou, Nathan F. Lepora | 2023-03-16T14:26:48Z | http://arxiv.org/abs/2303.09346v1 | # Tactile-Driven Gentle Grasping for Human-Robot Collaborative Tasks
###### Abstract
This paper presents a control scheme for force sensitive, gentle grasping with a Pisa/IIT anthropomorphic SoftHand equipped with a miniaturised version of the TacTip optical tactile sensor on all five fingertips. The tactile sensors provide high-resolution information about a grasp and how the fingers interact with held objects. We first describe a series of hardware developments for performing asynchronous sensor data acquisition and processing, resulting in a fast control loop sufficient for real-time grasp control. We then develop a novel grasp controller that uses tactile feedback from all five fingertip sensors simultaneously to gently and stably grasp 43 objects of varying geometry and stiffness, which is then applied to a human-to-robot handover task. These developments open the door to more advanced manipulation with underactuated hands via fast reflexive control using high-resolution tactile sensing.
## I Introduction
In modern robotics, robust and reliable grasping and manipulation remain unsolved research problems. A central problem is the ability to apply force-sensitive, gentle grasping to handle fragile objects and apply such grasps to objects of various shapes, sizes and stiffnesses [1][2]. Force sensitive grasping is also key in tasks which require humans to work directly with robots. One promising approach to meet these grasping requirements is to use an adaptive soft robotic hand, such as the Pisa/IIT SoftHand [3] combined with tactile sensors to provide state information from the contact interface [4][5]. The soft properties of the SoftHand make it ideal for gentle grasping as opposed to other more highly actuated hands with rigid links.
In this paper, we propose that the more limited dexterity of an underactuated hand is compensated by the use of soft synergies that interact with basic control of the hand using fingertip tactile sensing, allowing reliable force-sensitive grasping on a variety of objects. Moreover, human fingertips contain 1000s of tactile mechanoreceptors per square centimetre [6], so likewise artificial fingertip tactile sensors carrying similar information should also have a high spatial resolution, as offered by optical tactile sensors such as the TacTip-based design proposed here [7]. Even though there are examples of simple robotic grippers using high-resolution touch for control [8][9], we know of no examples of anthropomorphic hands with multiple high-resolution tactile sensors applied to force-sensitive grasping or the application to a human-robot collaborative tasks.
This study aims to impart an anthropomorphic robotic hand with reflexive, force-sensitive control to apply a gentle grasp using tactile data from high-resolution sensors. The Pisa/IIT SoftHand used here is under-actuated, yet has soft adaptive synergies in its mechanical structure [3] that simplifies the controller implementation whilst retaining a degree of dexterity through its ability to conform to grasped objects. The tactile sensors add an extra capability to this platform by providing information on the nature of the grasp whilst being small and low-cost compared to other works investigating tactile SoftHands [10][11]. Here we consider how this can allow for gentle, force-sensitive grasp control on various a priori-unknown objects of differing geometry and stiffness.
Our main contributions are:
**1) Tactile feedback for force-sensitive grasping:** This work uses tactile feedback from all fingertips to grasp objects without applying extraneous force. This improves upon traditional force-sensitive control methods such as current control by also accounting for the grasp pose, which is important in an underactuated hand.
**2) Five high-resolution tactile sensors:** Equipping the hand with high-resolution tactile sensors on all fingertips gives detailed feedback on how all digits interact with an object, giving a more-sensitive and capable controller able to understand the nature of the grasp and react accordingly.
**3) Fast vision-based control loop:** By implementing meth
Fig. 1: _Tactile-driven handover_ - A Pisa/IIT SoftHand with five tactile fingertips performs a handover task with a human operator: (1) safely moving to the operator, (2) accepting, (3) grasping, then (4) manipulating a delicate object without applying extraneous force.
ods to capture and process tactile data asynchronously whilst also decoupling image capture from the main controller, we propose a system which maximises control loop frequency, resulting in the ability to reliably adjust the grasp quickly and smoothly in response to dynamic changes.
## II Background and Related Work
There are many examples of anthropomorphic robotic hands with multi-fingered tactile sensing that use tactile data to perform grasp and subsequent object recognition/classification tasks [12][13][14]. However, there are few studies that consider tactile feedback for time-dependent, force sensitive grasp control. Those that do use tactile sensing for grasp control either use sensors providing low-resolution tactile data and/or use a limited number of sensors to mitigate the difficulty of achieving a sufficiently fast control loop for responsive control [15][16][17][11].
Human reaction times in response to tactile stimulation are in the range of 150ms-400ms [18], which sets a benchmark for the performance of robot hands. In this context, Santos and Alvares (2019)[15] present an anthropomorphic hand with tactile sensing on all five digits that exhibits some manipulation tasks using tactile feedback with a stated response time of 300 ms (3.3 Hz). This is within the range of human reaction time described previously and can be considered the present benchmark for tactile reaction time in an anthropomorphic hand with five tactile fingertips. Consequently, the performance of the controller described in this paper will be measured against those criteria.
The hand used in this paper is the IIT/Pisa SoftHand: a tendon-driven, underactuated, anthropomorphic soft robotic hand. Designed using principles of postural synergies of human grasping, the SoftHand achieves human-like grasps using one motor that actuates a network of tendons routed through the hand, arranged so that the closure of the hand adheres to a given postural synergy [3]. The joints between the phalanges of the fingers are dislocatable, which allows the digits of the hand to conform to grasped objects, creating a soft grasping interface with an "adaptive" synergy [19].
The sensors used in this paper are a miniaturised version of the BRL TacTip optical tactile sensor, which achieves a high spatial resolution by measuring changes in tactile images over a high-definition pixel array [7, 20]. The TacTip is a low-cost, soft, biomimetic tactile sensor that mimics the process of human tactile perception (a camera tracks the movement of an array of pins beneath an artificial skin as an analog to the movement of dermal papillae being relayed as tactile information to the brain via the shallow tactile mechanoreceptors [20]). Previous work from BRL equipped the SoftHand with a single tactile fingertip, feedback from which was used to control grasping through a combined light-touch and pose estimation controller [21] which was successful in proving that tactile feedback from this sensor was capable of accurate grasp control in the context of a generic negative feedback controller. However, there were several limitations, such as a long latency in the control loop, the use of complex deep learning techniques when simpler methods might suffice, and the restriction to just one tactile fingertip that does not permit whole-hand grasp control from contacts on all five digits.
Here the proposed gentle grasp controller is contextualised and tested in a simplified human-robot handover task. This is a typical human-robot collaborative task and is well suited to a gentle grasp, as a manipulator with a non-force sensitive grasp would be unsafe to use in such close proximity to a human operator. The context of such a task could be in a semi-automated warehouse packing workflow, which may carry additional demands for gentle grasping if the objects being handled are fragile, such as fruits and vegetables. In such environments, humans and robots are often separated for safety reasons, which can be inefficient [22].
## III Methodology
### _Tactile Sensors_
The sensors used in this study are an updated iteration of the miniaturised TacTip sensors by Lepora et al. (2021)[21]. Whilst these sensors were successful in proving the capabilities of fingertip tactile sensing on the SoftHand, they were not optimised in mechanical design, such as in the profile of the tactile skin (which was prone to tears due to shearing) and the construction of the sensor as a whole. Consequently, weak points of the design were addressed to give an updated design (shown in Fig. 2). The main design changes improved the overall robustness and mechanical stability of the sensor, introducing a new skin material and profile, springs to hold the camera in place and a secure tip-mounting interface. The cameras used in this sensor are Misumi micro-endoscope cameras, which transmit data over USB 3.0 at a native resolution of 1080p.
Fig. 2: _3D model of sensor assembly_ - The skin and tip are printed as a single part and filled with an optically-clear gel, through which the camera tracks changes in markers situated on the pin array.
### _Data Capture, Transfer and Processing_
Processing multiple optical tactile sensors simultaneously is difficult due to bandwidth restrictions and large processing demands, particularly when using high-resolution tactile images. Consequently, a central problem in this study was to find a way to efficiently capture and process image data from five cameras while using the collective output as feedback into the main control loop, all within a response time that should be as fast as possible.
Preliminary tests showed we could extract and preprocess frames from a single sensor at 30 fps using our current software; however, if data from each sensor is read sequentially, this process would take over 150 ms. Whilst this is below the response time used in other studies (see Background), it leaves little time for further processing and would limit the on-line potential of the system. Additionally, by the time the data from the fifth sensor is read, the data taken from the first sensor will be out of sync. Consequently, for best performance, the image data should be extracted from all five cameras asynchronously.
To achieve this, each sensor is connected to its own dedicated Raspberry Pi 4 Model B. The array of Raspberry Pis capture and process image data from all sensors asynchronously, which are then sent to a main control PC via a gigabit Ethernet connection. Using the Raspberry Pi array removed the requirement for the control PC to have a large amount of CPU resource available, as the image capture and processing can be distributed across the processors of the Raspberry Pis, allowing the raw tactile data sampled to be as high quality as possible. By using a distributed computing method (Pyro4) a virtual Python object representing a sensor is registered from each Pi to the network. Then, each sensor object captures and processes tactile images in a background thread and writes the result to a class variable which can be read by the control PC via the network. Whilst the tactile SoftHand's reflex time cannot be faster than a single camera's frame rate (30 Hz = 33.3 ms), this method effectively decouples the image capture process from the lower level controller, which prevents the actual control loop time (i.e. the frequency at which the controller output is updated) being limited to the frame rate, resulting in an actual control frequency of 286 Hz. This is important as it provides scope for expanding the controller with additional operations in the future whilst retaining a real-time response. The architecture of this system is shown in Fig. 3.
### _Controller_
The controller developed in this study uses tactile feedback from all five fingertip sensors simultaneously to apply a stable, gentle grasp on stimuli of various geometries and stiffness. It can also maintain a gentle grasp in response to external disturbances, adapting and adjusting the grasp pose as necessary.
In previous work with the TacTip on the SoftHand, the main limitation of the controller presented in [21] was a large latency in the system, resulting in a lack of responsiveness. In that previous work, the controller was written in Python, sending and receiving data from the SoftHand via a MEX-compiled Simulink interface. This approach was taken due to the ease of implementation as well as similar applications with the SoftHand [10]; however, using MATLAB as an intermediary between the main control program and the SoftHand's firmware proved inefficient for reflexive behaviour in this context, resulting in a performance bottleneck.
Hence, the improved controller presented in this work aims for the following capabilities:
**Fast response time:** To address the latency issues seen in previous work, a more direct, low-level programming approach was taken by creating a Python wrapper for a C++ program containing functions from the qbRobotics qbAPI, which communicates with the SoftHand directly. This allows the qbAPI functions to be called directly from the main Python controller, majorly reducing latency.
**Stable response:** By using a state-dependent switching controller, the SoftHand is able to close onto and achieve a gentle grasp on an object quickly and with minimal response overshoot or oscillation.
**Multi-sensor feedback:** By receiving and processing sensor data asynchronously via the methods discussed inSec. IIIB, a control feedback signal is formed from data from all five sensors, enabling the control response to be informed by signals from all five digits.
The controller functions by measuring changes in tactile images using a Structural Similarity Index Measure (SSIM), an established metric for contact detection in optical tactile sensors [15][23]. Prior to grasping, one frame from each sensor \(n\) is imaged in its undeformed non-contact state \(\mathbf{Img_{0,n}}\) and stored for later comparison. As the controller runs, the state of each sensor is imaged \(\mathbf{Img_{n}}\) on each time step and compared against its undeformed state, with the
Fig. 4: _Controller diagram._ The controller output changes based on a state variable defining if any fingertips are in contact with an object.
Fig. 3: _Asynchronous image capture and processing architecture._ Tactile image data is captured and processed by a dedicated Raspberry Pi per sensor, then sent to the control PC over the network.
SSIM of sensor \(n\) against its undeformed state given by:
\[S_{n}\left(\mathbf{Img_{n}},\mathbf{Img_{0,n}}\right)=\frac{\left(2\mu_{x}\mu_{y }+c_{1}\right)\left(2\sigma_{xy}+c_{2}\right)}{\left(\mu_{x}^{2}+\mu_{y}^{2}+c _{1}\right)\left(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2}\right)} \tag{1}\]
where \(x\) and \(y\) represent a square kernel of pixels (here of size 7\(\times\)7) that is applied across both images as a sliding window. The parameters \(\sigma\) and \(\mu\) represent the mean and covariance of each kernel calculation, with \(c_{1}\) and \(c_{2}\) acting as regularising constants to stabilise the division [23]. The SSIM is given as an averaged final value, \(S_{n}\in[0,1]\), where \(S_{n}=1\) indicates the two images are identical and \(S_{n}=0\) indicates they have zero similarity [24].
Here the SSIM-based metric used to control the hand is derived from a modified version of Equation (1) to represent the degree of deformation, denoted as \(\Delta_{n}\) and given by:
\[\Delta_{n}\left(\mathbf{Img_{n}},\mathbf{Img_{0,n}}\right)=1-S_{n}\left( \mathbf{Img_{n}},\mathbf{Img_{0,n}}\right). \tag{2}\]
The output is inverted from the SSIM so that a value closer to 1 indicates a greater amount of sensor deformation and thus a larger contact force. At each time step, a \(5\times 1\) vector containing \(\Delta_{n}\) for all five sensors is calculated asynchronously to minimise the control loop time (Fig. 4, bottom right).
In a grasping task, each finger can be in one of two states: in contact or not in contact with an object. When the fingers are not in contact with an object (state 0), the system should behave so as to move the fingers into a contacted state (state 1) as quickly as possible. Once state 1 has been achieved, fast yet fine motor movements should be made to achieve and maintain a stable grasp. Since these two states have differing dynamic requirements, we use a state-dependent switching controller (see Fig. 4) to streamline the process and achieve the desired system behaviour.
This controller design is computationally similar to a finite state machine: switching between two linear controllers according to state variable \(\varepsilon\) that signifies whether any fingertip contacts an object. When \(\varepsilon=0\), _i.e._ no contact is detected, the hand is driven using a proportional controller using the motor encoder position, \(u\), as feedback with a defined setpoint of maximum closure. When \(\varepsilon=1\), _i.e._ one or more sensors are in contact with an object, the controller switches to a PI controller that uses the average degree of deformation, \(\mu=\sum_{n}\Delta_{n}/n\) as feedback. The potential behavioural issues that are sometimes associated with this type of controller are addressed by constantly keeping track of feedback variables for both states regardless of the occupied control mode, eliminating discontinuities in the control signal [25]. Based on previous work [23], the threshold condition for a sensor \(n\) being in contact with an object was defined as \(\Delta_{n}>0.05\). Additionally, a value \(\mu=0.5\) was chosen as a setpoint for the entire controller, as preliminary tests suggested this value would be applicable to applying a gentle grasp across a wide range of objects.
### _Handover task_
The handover task used to test the validated gentle grasp controller consists of the SoftHand mounted to a UR5 industrial robot arm (ideally suited to this purpose as it is a 'cobot', meaning it has safety features which allow it to interface directly with human operators [26]) which will move to the hand of a human operator, accept an object from them, then carry and deposit the object in a bin. The task itself was kept simple, as the focus was intended to test the capability of the grasp controller, with the handover acting as a contextual scenario.
To track the position of the SoftHand relative to the operator, two ArUco markers were used: one offset yet coplanar to the robot's wrist frame and another attached to the back of a glove worn on the hand of the operator that holds the object, with both tracked by an Intel RealSense RGBD camera (3D geometric diagram of the setup shown in Fig. 5).
In terms of the robot's movement, only translational motion was considered when moving to the operator due to inconsistencies in detecting relative rotational pose using just ArUco marker data, amendments to which lie outside the scope of this work. In order to move to the goal position, the vector \(\overrightarrow{BD}\) must be solved. \(\overrightarrow{BD}\) is calculated by extracting the pose of the markers \(A\) and \(C\) relative to the camera frame \(O\) and calculating the corresponding homogeneous transformation matrices, \(T_{OA}\) and \(T_{\overrightarrow{OC}}\), the translational components of which correspond to \(\overrightarrow{OA}\) and \(\overrightarrow{OC}\) respectively. Due to the planar constraints between \(\overrightarrow{B}\) and \(\overrightarrow{D}\) relative to \(A\) and \(C\) respectively, the vectors \(\overrightarrow{AB}\) and \(\overrightarrow{CD}\) can be found from the homogenous transformation matrices which describe the linear transformation between the points \((A,B)\) and \((C,D)\), calculated by:
\[T_{AB}=T_{OA}\cdot\begin{bmatrix}\mathrm{I_{3}}&\begin{bmatrix}0&\Delta y_{AB }&\Delta z_{AB}\end{bmatrix}^{T}\\ 0&0&0&1\end{bmatrix}, \tag{3}\]
\[T_{CD}=T_{OC}\cdot\begin{bmatrix}\mathrm{I_{3}}&\begin{bmatrix}0&0&\Delta z_{ CD}\end{bmatrix}^{T}\\ 0&0&0&1\end{bmatrix}. \tag{4}\]
Then \(\overrightarrow{AB}\) and \(\overrightarrow{CD}\) are extracted as the translational components of \(T_{AB}\) and \(T_{CD}\) respectively, with \(\overrightarrow{BD}\) found by
\[\overrightarrow{BD}=\overrightarrow{CD}-\overrightarrow{AB}. \tag{5}\]
Since the position of \(B\) relative to the base frame of the robot is known through the robot's kinematics, the robot can be programmed to execute a Cartesian movement to the goal position by giving it the co-ordinates of \(\overrightarrow{BD}\).
The robot arrives at the goal position in a palm-up orientation, at which point the operator can place the object
Fig. 5: _Experiment 2 setup._ In order to move the SoftHand to the operator, the 3D transformation between the two must be solved.
into the hand, which then grasps the object using the gentle grasp controller established in section III-C. Once the grasp is stable, the arm executes a Cartesian movement to the bin where it releases the grasp to deposit the object. Since the SoftHand is palm-up when accepting the object, this movement involves rotating the hand 180 degrees to be palm-down when depositing. During this motion, the grasp controller continues to run and adjust the grasp as necessary.
## IV Results
### _Experiment 1: Stable, gentle grasping via tactile control_
The first experiment sought to assess the validity of the controller; namely, its ability to apply a stable, gentle grasp on a variety of stimuli of differing geometry and stiffness. The 43 objects used in this task are listed in Fig. 7, where each are associated with one of five categories and denominated with a numerical ID. These objects were selected to assess controller performance on different sizes, topologies and stiffnesses. The wide range of objects was important to test, as a linear controller output in the SoftHand does not correlate to a linear grasp-pose profile. In each trial, the object was placed in the most natural position within the grasping envelope in approximately the same orientation for each trial, with five trials performed for each object.
Fig. 6 shows typical examples of the control responses for objects from each category. In each example, the controller reaches stability from first contact within 1-3 s (defined as the response remaining between a threshold of \(\pm 5\%\) of the setpoint after initial contact).
Fig. 6 also shows the average Peak Motor Currents (PMC) for each object. Previous work with the SoftHand has focused on force-sensitive grasp control using the current draw of the motor to estimate grasping force [27][28]. Whilst motor current does give a representation of grasp force (with PMC correlating to peak force), our tests show that the underactuated nature of the hand causes this measurement to not be invariant of the contact distribution of the fingers. This results in an aliasing effect, whereby a given motor current can be associated with a variety of grasp poses and contact distributions dependent of the orientation of the object with respect to the hand. Using tactile feedback avoids this issue, as it allows a gentle grasp to be applied whilst also providing information on how the individual fingers are interacting with the object, giving a more reliable picture of grasp pose without the need for additional sensors.
Considering the argument for a gentle grasp controller driven by current feedback, the results of this experiment show that the PMC required for a gentle grasp can vary by up to 200 mA depending on the size, shape and stiffness of the object. The box and whisker plot in Fig. 6 further emphasises this, showing that PMC is lower for soft objects (aligning with observations in [27]) and that the spread of PMC is greater for object sets with a more diverse range of sizes and stiffnesses. Additionally, the tactile feedback controller consistently results in a PMC below the 350 mA 'gentle grasp' threshold for the SoftHand defined by [27] despite motor current being absent from the feedback loop.
Fig. 6: _Experiment 1 Results._ The top row of plots show typical control responses on an object from each category. The bar chart shows the average PMC of each object used in this experiment (which represents peak grasping force) with an associated box-and-whisker plot to illustrate the spread of values for each category.
The controller sometimes exhibited higher PMC (for small objects) and oscillations while stabilising, such as with Items 13 and 34. The greater closure with smaller objects required higher peak motor currents, which we attribute to the higher torque demands from the greater extension of the elastic links when the hand is near closed. Overall, this shows that the SSIM deformation vector \(\mu\) is valid as a feedback signal for a tactile-driven gentle grasp controller.
### _Experiment 2: Tactile-driven handover task_
The handover task for Experiment 2 was performed with each object used in Experiment 1 with five trials per object. For each trial, the success of the system was scored by awarding 1 point if the object was successfully deposited in the bin, 0 points for a failure (_i.e._ if the object slips out of the grasp and does not end up in the bin or if the controller fails to stabilise) and 0.5 points for a partial success (exhibited by the object slipping from the grasp but still ending up in the bin). This gives a total score out of 5, which describes how well the system performed in handling the object. The results for Experiment 2 are shown in Fig. 7.
The results show good performance across most of the tested objects, with many being successful on all trials. Overall, the system was successful in the handover task on 80% of trials when counting absolute successes only and 87% when also counting partial successes. The failures seemed to occur when handling objects that were large in comparison to the SoftHand's grasp envelope (Item 11) or objects of higher mass (Item 9), as the force exerted by the object on the fingertips was interpreted as overloading and caused the grasp to release early. This is inherent to the controller design, yet the effect is reduced by using the average SSIM of all sensors as feedback, meaning failures only occur on objects of significant mass. The partial successes seemed to occur on objects that were large but also on longer objects which tended to slip out of the grasp. However, as shown in the results the controller performed well overall and these edge cases could be addressed with further work.
## V Discussion
In this paper, we presented a novel method for achieving stable, force-sensitive grasping using tactile feedback for an underactuated, anthropomorphic soft robotic hand. The controller achieves and maintains a stable, gentle grasp on objects, adjusting grasp pose as necessary to adapt to dynamic changes without applying extraneous forces, achieved without the complex tuning and methods associated with more traditional current-control approaches. This was implemented with an effective control loop time of 286 Hz, which is far below the minimum control loop of 3.3 Hz seen in other studies [15], which helps achieve the dynamic, reflexive behaviour in response to changing grasp conditions.
A key observation of this study was how using tactile feedback significantly decreased the complexity of the controller application for gentle grasping. In all experiments, the motor current rarely exceeded the gentle grasp threshold of 350 mA defined in [27] despite motor current being absent from the control loop. The controller was capable of consistently applying a gentle grasp on 43 distinctly different objects (and even on a cluster of 3 small objects, in Item 43). The interaction between the measure of deformation of the tactile sensors and the semi-coupled nature of the SoftHand's digits through an adaptive synergy combined with the controller is capable of applying force-sensitive grasps to a range of objects without the need for monitoring current.
There were some cases where the control methods could be improved with further work, namely the high PMCs seen on small objects and the failed handover results. These could be remedied by accounting for degree of hand closure and implementing a slip detection driven controller for dynamic movements, as described by James et. al (2018) [29][30].
Another direction for extending this work would be a more sophisticated handover task. This could include the robot physically taking the object from the operator or considering SoftHand orientation relative to the operator in the approach, as seen in [31] and [32]. Overall, the methods presented in this paper open the door to more advanced manipulation with underactuated anthropomorphic hands through feedback from high-resolution tactile sensors on each fingertip, taking another step towards human-like dexterous manipulation with robots.
Fig. 7: _Experiment 2 Results._ Each item from the object set was tested five times in the handover task, with performance measured by cumulative pass/fail criteria. |
2310.11431 | Identifying Interpretable Visual Features in Artificial and Biological
Neural Systems | Single neurons in neural networks are often interpretable in that they
represent individual, intuitively meaningful features. However, many neurons
exhibit $\textit{mixed selectivity}$, i.e., they represent multiple unrelated
features. A recent hypothesis proposes that features in deep networks may be
represented in $\textit{superposition}$, i.e., on non-orthogonal axes by
multiple neurons, since the number of possible interpretable features in
natural data is generally larger than the number of neurons in a given network.
Accordingly, we should be able to find meaningful directions in activation
space that are not aligned with individual neurons. Here, we propose (1) an
automated method for quantifying visual interpretability that is validated
against a large database of human psychophysics judgments of neuron
interpretability, and (2) an approach for finding meaningful directions in
network activation space. We leverage these methods to discover directions in
convolutional neural networks that are more intuitively meaningful than
individual neurons, as we confirm and investigate in a series of analyses.
Moreover, we apply the same method to three recent datasets of visual neural
responses in the brain and find that our conclusions largely transfer to real
neural data, suggesting that superposition might be deployed by the brain. This
also provides a link with disentanglement and raises fundamental questions
about robust, efficient and factorized representations in both artificial and
biological neural systems. | David Klindt, Sophia Sanborn, Francisco Acosta, Frédéric Poitevin, Nina Miolane | 2023-10-17T17:41:28Z | http://arxiv.org/abs/2310.11431v2 | # Identifying Interpretable Visual Features in Artificial and Biological Neural Systems
###### Abstract
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features. However, many neurons exhibit _mixed selectivity_, i.e., they represent multiple unrelated features. A recent hypothesis proposes that features in deep networks may be represented in _superposition_, i.e., on non-orthogonal axes by multiple neurons, since the number of possible interpretable features in natural data is generally larger than the number of neurons in a given network. Accordingly, we should be able to find meaningful directions in activation space that are not aligned with individual neurons. Here, we propose (1) an automated method for quantifying visual interpretability that is validated against a large database of human psychophysics judgments of neuron interpretability, and (2) an approach for finding meaningful directions in network activation space. We leverage these methods to discover directions in convolutional neural networks that are more intuitively meaningful than individual neurons, as we confirm and investigate in a series of analyses. Moreover, we apply the same method to three recent datasets of visual neural responses in the brain and find that our conclusions largely transfer to real neural data, suggesting that superposition might be deployed by the brain. This also provides a link with disentanglement and raises fundamental questions about robust, efficient and factorized representations in both artificial and biological neural systems.
## 1 Introduction
One of the oldest ideas in neuroscience is Cajal's _single neuron doctrine_(Finger, 2001) and its application to perception (Barlow, 1972), i.e., the hypothesis that individual sensory neurons encode individually meaningful _features_.1 The idea dates back to the early 1950s, when researchers began to find evidence of neurons that reliably and selectively fire in response to particular stimuli, such as dots on a contrasting background (Barlow, 1953) and lines of particular orientation and width (Hubel & Wiesel, 1959). These findings gave rise to the _standard model_ of the ventral visual stream as a process of hierarchical feature extraction and pooling (Hubel & Wiesel, 1968; Gross et al., 1972;
Figure 1: **Conceptual Overview.****A)** A representation of two neurons’ activations for different images. The highlights indicate maximally exciting images (MEIs) for each neuron. **B)** There exist directions in feature space that are more interpretable.
Riesenhuber & Poggio, 1999; Quiroga et al., 2005). Neurons in the early stages extract simple features, such as oriented lines, while neurons at later stages combine simple features to construct more complex composite features. In the highest stages, complex features are combined to yield representations of entire objects encoded by single neurons--the shape of a hand, or the face of a friend. Notwithstanding a shift in focus towards population codes (Averbeck et al., 2006; Stanley, 2013; Hebb, 2005; Gao & Ganguli, 2015; Jacobs et al., 2009; Ebitz & Hayden, 2021), this model has remained a dominant paradigm in sensory neuroscience for the last seven decades and ultimately inspired (Hassabis et al., 2017; Zador et al., 2023) the development of convolutional neural networks (CNNs) (Fukushima, 1980; LeCun et al., 1989) (but see Gershman, 2023; Poggio et al., 2017).
Mechanistic interpretability research aims to uncover the coding properties of neurons within artificial neural networks. _Feature visualization_(Nguyen et al., 2019)--i.e. the single-unit electro-physiology of artificial neural networks--has revealed remarkable consistencies between neurons in image models and neurons in the visual cortex: neurons with center-surround receptive fields, color-contrast detectors, and oriented edge detectors that combine to form curve detectors in higher layers, for example (Olah et al., 2020; Willeke et al., 2023). However, the study of individual neurons, both _in vitro_ and _in silico_, faces two major problems. First is the inherent subjectivity of "interpretability", which generally necessitates the hand-inspection of neuron response properties. Second is the ubiquitous existence of hard-to-interpret units with mixed selectivity (Fusi et al., 2016; Olah et al., 2020).2 We address both problems in this work by (1) defining a quantitative, automated measure of interpretability for vision models that does not rely on human inspection, and (2) demonstrating a simple approach for finding meaningful directions in activity space.
Footnote 2: One might wonder why evolution or gradient descent would be so kind as to make any neurons interpretable. Anecdotally, researchers have explained this as a result of the use of pointwise nonlinearities in deep networks. We provide a more formal argument for this explanation in App. E.
A recent paper by Zimmermann et al. (2023) has taken a similar approach, using human perceptual judgments in large-scale psychophysics experiments to quantify the interpretability of neurons within deep image models (Zimmermann et al., 2021; Borowski et al., 2020). We automate this pipeline by replacing human judgments of perceptual similarity with a similarity metric grounded in deep image model representations (Zhang et al., 2018), and validate the approach against the large scale human data from Zimmermann et al. (2023). Thus, in line with recent work that uses _language models to interpret language models_(Bills et al., 2023), we use _image models to interpret image models_. We then leverage this automated index of interpretability to test whether non-axis aligned directions in the neural activation space of CNNs trained on real data may be more interpretable than the individual units--a test of the recently stated _superposition hypothesis_(Elhage et al., 2022).
Through a suite of experiments and analyses, we find evidence consistent with the hypothesis that neurons in both deep image models and the visual cortex encode features in superposition. That is, we find non-axis aligned directions in the neural state space that are more interpretable than individual neurons. In addition, across both biological and artificial systems, we uncover the intriguing phenomenon of what we call _feature synergy_--sparse combinations in activation space that yield more interpretable features than the constituent parts. Our work pushes in the direction of automated interpretability research for CNNs, in line with recent efforts for language models (Bills et al., 2023; Cunningham et al., 2023; Gurnee et al., 2023; Bricken et al., 2023). Simultaneously, it provides a new framework for analyzing neural coding properties in biological systems. Our results on neuroscience data add nuance to the concepts of _disentanglement_, _mixed selectivity_, _representational drift_ and _representational universality_ in the brain and suggest that insights gleaned from studying the coding properties of artificial neural networks may transfer to biological systems. These findings highlight potential synergy between mechanistic interpretability research and computational neuroscience, which may together reveal universal coding principles of neural representations.
## 2 Methods
We propose an approach for quantifying the interpretability of neural network activations that is grounded in human judgement, yet is fully automated and scalable. In general, individual neurons -- i.e., \(N\) directions corresponding to basis vectors of an activation space \(\mathbb{R}^{N}\) -- might not be interpretable. Yet, other directions in \(\mathbb{R}^{N}\) might be: we refer to them as _features_. For example, in Fig. 1 B) the human observer can define three directions that are interpretable and correspond
approximately to horse-, car- and cat-like images. The superposition hypothesis stipulates that the activation space \(\mathbb{R}^{N}\) of a neural network possesses several interpretable directions that are non-orthogonal (Ehlage et al., 2022). Given a CNN, we aim to identify such directions and quantify their interpretability through the following three steps:
1. **Collect neural network activations for a given dataset**. Images are passed through the network up to the layer under analysis, for convolutional layers, we average activations across space (Zimmermann et al., 2023) to generate a dataset in activation space \(\mathbb{R}^{N}\).
2. **Identify directions in activation space.** Directions may be provided by the neurons themselves (basis vectors) or by an algorithm (e.g., PCA, sparse coding, K-Means).
3. **Quantifying the interpretability of each direction**. We compute an interpretability index (II) as the average pairwise similarity of the top \(M\) Maximally Exciting Images (MEIs, defined in the next subsection) for each direction. Through a suite of experiments, we argue that the II is a meaningful measure of interpretability.
### Quantifying Interpretability in Neural networks
A neural network layer defines an activation space \(f:\mathcal{X}\rightarrow\mathbb{R}^{N}\) with \(N\) the number of neurons of that layer. We consider directions in this space, for example, individual neurons are represented as directions: the basis vectors of \(\mathbb{R}^{N}\), i.e., for neuron \(i\), \(f_{i}(x)=f(x)e_{i}\) with \(e_{i}\) the \(i^{th}\) canonical basis vector and similarly \(f_{u}(x)=f(x)u\) for any direction \(u\in\mathbb{R}^{N}\). In activation space, some directions may be _interpretable_, in the sense that they detect a single feature or concept within the image data. For example, an interpretable direction may detect features such as edges, corners, textures in early layers, or more abstract patterns in later layers such as dogs, cats, trucks. By contrast, other directions respond to several unrelated features or concepts. For instance, Fig. 1 (A) shows the first neuron firing in response to unrelated car- or cat-like images.
Maximally Exciting Images (MEIs) are defined as synthetic images that maximally activate a given direction in activation space (Erhan et al., 2009). Given a direction \(u\), we propose an **Interpretability Index (II)** computed as the average pairwise similarity of its top \(M=5\) MEIs from a dataset of \(D\) images, i.e., \(f_{u}(x_{1})\geqslant...\geqslant f_{u}(x_{M})\geqslant...\geqslant f_{u}(x_{D})\):
\[\Pi(u)=\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{M}\mathrm{sim}\Big{(}x_{j},x_{k} \Big{)}. \tag{1}\]
In this work, we consider and compare several similarity metrics \(\mathrm{sim}\), detailed below.
### Image Similarity Metrics
We consider image similarity metrics that capture notions of similarity at different levels of abstraction: **1) Low-Level: Color** The color similarity between two images is defined as the difference between the average color (averaged across space, independently, for each color channel) in each image; **2) Mid-Level: LPIPS** Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) is a perceptual metric used for assessing the perceptual differences between images. It relies on a CNN such as VGG or AlexNet that has been pre-trained on an image classification task. Given two images, LPIPS extracts their respective feature maps from multiple layers of the CNN. LPIPS then computes the distance between the corresponding feature maps. The distances are scaled by learned weights and then aggregated to yield a single scalar value representing the perceptual similarity between the two images; **3) High-Level: Labels** The label similarity between two images is a value equal to 0 if the two images have been assigned different labels during a reference classification task, and equal to 1 if the two images have been assigned the same label. In our experiments, we use the CIFAR-10 dataset and associated classification task.
### From Human Psychophysics to In-Silico Psychophysics
How can we validate whether the proposed interpretability index from Eq. 1 is indeed a sensible measure of interpretability? The concept of _interpretability_ is intimately tied to human judgment. A long history of theoretical inquiry has demonstrated the impossibility of identifying necessary
and sufficient conditions for many natural semantic categories (Stekeler-Weithofer, 2012). Due to this difficulty, we adopt a pragmatic view, converting the question of whether a representation is interpretable into an empirical measure of the human interpretability judgment (Wittgenstein, 1953).
### Human Psychophysics
Psychophysics is an experimental paradigm for quantifying the relationship between stimuli (e.g. images) and the perceptions they produce for human observers. Borowski et al. (2020) and Zimmermann et al. (2023) have demonstrated that large-scale psychophysics experiments can be leveraged for conducting quantitative interpretability research. In these works, researchers used the judgments of human participants to quantify the interpretability of neurons in trained artificial neural networks. In Zimmermann et al. (2023), participants are shown 9 minimally and 9 MEIs for a given neuron. The participant is then asked to select one of two query images \(x_{1},x_{2}\) that they believe also strongly activates that neuron (see App. A for an illustration). The _(human) psychophysics accuracy_ obtained for that neuron is defined as the percentage of participants that are able to select the correct image.
### In-Silico Psychophysics
Psychophysics experiments provide a way of crowd-sourcing and quantifying human intuition of interpretability at scale. However, such experiments are time consuming, noisy and costly ($12,000 for (Zimmermann et al., 2023)). Here, we propose a method for automating psychophysics experiments, with a model that faithfully approximates human judgments while requiring no human input. We replicate, _in-silico_, the experiments of Zimmermann et al. (2023), comparing different similarity metrics as proxies for human judgments. In our experiments, the model computes the maximum similarity, according to the image similarity metric, between each of the query images \(x_{1},x_{2}\) and the set of MEIs. The model then chooses as its response the image that is the closest to that set
\[\text{sim}(x,\text{MEI}(u))=\max_{k=1,...,9}\text{sim}(x,x_{k}), \tag{2}\]
where \(x_{1},...,x_{9}\) are the top \(9\) MEIs for a neuron or direction \(u\), and sim the image similarity metric. The _psychophysics accuracy_ for a given neuron or direction \(u\) is defined as the percentage at which the model selects the correct query image for that neuron, i.e.:
\[\text{Acc}(u)=\frac{\text{\# of correct selections for direction }u}{\text{\# of queries with direction }u}. \tag{3}\]
We check in practice that directions \(u\) with high interpretability index \(\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \
results of this binary classification task are shown in Figure 2 for the LPIPS image similarity metric (Zhang et al., 2018), on five of its AlexNet layers.
The predictions of the LPIPS model match human judgments well for earlier layers of the ResNet50--_layers_ 1 and 23--with an AUC up to 0.71 (Figure 2, left two panels). While there is certainly room for improvement, we conclude that this metric, based on LPIPS-based pairwise image comparison, serves as a good first proxy of human perception of interpretability. Crucially, our metric has the added benefit of not having to recruit a cohort of human participants. Thus, we will use this metric to evaluate the interpretability of features across neural network layers in the next subsections. Since the interpretability metric is more accurate for early layers, we focus the remainder of our analyses on layer 1 of the same ResNet50 architecture trained on CIFAR-10.
Footnote 3: This refers to the PyTorch module names, corresponding to layers 10 and 23 in the network.
### Identifying Interpretable Directions in Feature Space
We next apply the II to analyze the interpretability of features in a ResNet50 pre-trained on the CIFAR-10 image dataset (Krizhevsky et al., 2009)4. We evaluate several methods for identifying interpretable directions in activation space: PCA, ICA, NMF, \(K\)-Means with cosine similarity, and the shallow sparse autoencoder used in Sharkey et al. (2022) (see Appendix B for sparse AE analyses). We evaluate several similarity metrics and compute the II for each, comparing the interpretability of individual neurons in a layer with the interpretability of identified directions from that layer.
Footnote 4: Hosted at [https://github.com/huyvnphan/PyTorch_CIFAR10](https://github.com/huyvnphan/PyTorch_CIFAR10)
For a quantitative comparison, we present comparative box plots for the distributions of II indices for neurons and directions in Figure 3 while we vary: the LPIPS layer defin
Figure 3: **Quantification of Interpretability. Left: II score [a.u.] distribution for neurons (\(N=256\)), PCA, ICA and NMF baselines, and K-Means as a function of \(K\in\{128,256,512\}\). Middle: II score distribution for neurons (\(N=256\)) and K-Means (\(K=256\)) as a function of LPIPS layer. Right: II score distribution for uninterpretable neurons (\(N=128\)), i.e. those with II score below the median, and interpretable neurons (\(N=128\)), i.e., those with II score above the median; II score distribution for K-Means (\(K=128\)) computed on each of these subsets separately.**
Figure 2: **Interpretability Metric vs. Human Behaviour. Data from Zimmermann et al. (2023). Left to Right: Agreement between human and in-silico psychophysics on the predictability of the outputs of four ‘layers’ (see text) within a ResNet50. Human and model agree on feature predictability for the ResNet50’s early layers. For these layers, the proposed interpretability metric is a valid representation of the human’s perception of interpretability. AUC: Area Under the Curve.**
used in the II (left), the number of \(K\) directions in activation space (middle), and distributions after splitting neurons into uninterpretable and interpretable groups (right). We observe that the \(K\)-means approach detects directions that are indeed more interpretable (higher II) than the individual neurons of the activation space -- independently of the LPIPS layer considered for the II (Figure 3 left). Our method achieves higher II values (mean \(=-0.0159\)) than all baselines and the sparse autoencoder (best mean \(=-0.0188\)) (detailed comparison in App. B) [note that the II has arbitrary units]. Interestingly, the number of directions \(K\) does not impact their IIs in the regime tested (Figure 3 left). Thus, we focus our analyses on the \(K=N=256\) setting for a fair comparison.
The \(K\)-means approach can detect directions within subsets of uninterpretable neurons as well as within interpretable neurons, as we do not observe II differences in Figure 3 (right). Further, transforming interpretable neurons and uninterpretable neurons into their direction increases the II of both. We see a trend where the II increases with respect to the LPIPS layer used, which is a similar pattern as we saw in Figure 2.
For a qualitative comparison, Figure 4 shows the Maximally Exciting Images (MEIs) for 5 neurons (left) and 5 directions extracted from \(K\)-Means (right) selected in 5 different quantiles of II values (to avoid cherry picking in this qualitative comparison). The distributions of II indices is shifted towards the higher values for the directions detected by \(K\)-means, as shown by the II values associated with each quantile. This is confirmed by the visualization of MEIs, which appear more visually coherent to the human observer for the directions (right) compared to the neurons (left).
### Comparing Similarity Metrics for In-Silico Psychophysics
We now compare the interpretability of the directions measured using the three image similarity metrics described in Section 2. Each metric defines similarity at a different level of abstraction, from low-level to high-level: same _color_ (Figure 5 left), same _perceptual structure_ as defined by LPIPS (Figure 5 middle) or same _category_ (Figure 5 right). For each metric, we perform the _in-silico_ psychophysics task from Section 3.1, varying the difficulty of the psychophysics experiment. The difficulty of a task is controlled by choosing query images that cause less extreme activations--i.e. are farther away from the set of MEIs (Borowski et al., 2020). This allows us to probe a more general understanding of the interpretability of a neuron or direction instead of limiting our analyses to the most preferred stimuli (Vinken et al., 2023).
As expected, we see in Figure 5 that both the neurons and the directions have a decreased psychophysics accuracy as the task becomes more difficult. The directions detected by our approach are more predictable than the individual neurons across low, mid and high-level semantics and across task difficulties. The largest improvement over individual neurons is observed for the low-level semantics using colors, and the improvement decreases as we move towards higher level semantics. Additionally, as observed in Figure 3, the number of clusters \(K\) does not impact the accuracy.
Lastly, in recently published work, Bricken et al. (2023) test whether observed interpretability in activation space is a function of the model or the data--that is, they test whether untrained models possess non-axis aligned directions that are more interpretable than individual neurons and find
Figure 4: **MEIs of Neurons and Interpretable Directions.** These are the Maximally Exciting Images (MEIs) for neurons (left) and directions (right) as retrieved with \(K\)-Means. To represent the interpretability index (II) distribution, we show neurons and directions at different II quantiles.
evidence that they do. We perform the same experiment, running our analysis on untrained versions of the models we analyze here, and find that there is indeed--even before training--a gap in interpretability between neuron axes and activation clusters (see Appendix G). This aligns with prior work on the expressive power of untrained CNNs (Frankle et al., 2020) and suggests paths for further investigation.
### Pairwise Synergies Between Neurons
Efficient coding principles such as minimal wiring length (Laughlin and Sejnowski, 2003), as well as the circuit analysis approach of mechanistic interpretability (Conmy et al., 2023; Nanda et al., 2023) inspire us to look for minimal subcircuits that increase interpretability. Specifically, we investigate the synergies between pairs of neurons. For all pairs of neurons \(a,b\) in the same ResNet50 layer, we compute the II score for their added (z-scored) activity. The _synergy_ is the difference between this II score and the maximum of their individual II scores to account for pairings with highly interpretable neurons:
\[\text{Synergy}(a,b)=\Pi(a+b)-\text{max}\left[\Pi(a),\Pi(b)\right]. \tag{4}\]
The synergy measures whether adding these neurons produces a direction in activation space that is more interpretable than taking each neuron individually. This is visualized in Figure 6 A) and B) which show two pairs of neurons \(a,b\) with the highest synergy: the MEIs resulting from their addition are more interpretable that their individual MEIs.
Figure 5: _In Silico Psychophysics Performance._ Accuracy across neurons and interpretable directions revealed by \(K\)-Means clusters (\(K\in\{128,256,512\}\)) for _in silico_ psychophysics task for different levels of difficult, i.e., limiting query and reference image selection to the central range of activations (e.g., from the \(0.45^{th}\) until the \(0.55^{th}\) quantile, see (Zimmermann et al., 2023)). Predictions are made based on different metrics from low level semantics (colour match, left), over mid level semantics (LPIPS average over layers, center), to high level semantics (label match, right).
Figure 6: **Synergies.****A)**, **B)** Example pairs (two highest synergies) of neurons and the result when adding them (all visualized by their 4 MEIs). **C)** Histogram of synergies for every pair of neurons. **D)** A slight positive relationship between the correlation and synergy over all pairs of neurons (i.e., more correlated neuron pairs have higher synergies). **E)** A strong negative relationship between the II of a neuron and the maximum synergy it can achieve (i.e., pairings dilute interpretable neurons).
The histogram of Figure 6 C) shows a large fraction of negative values of the synergy, i.e., most pairings are, as expected, detrimental for interpretability. However, a good fraction of the added neurons \(a+b\) become more interpretable. Figure 6 D) shows that correlated neurons tend to have higher synergy but correlation alone does not explain everything: two neurons can be uncorrelated, yet their addition can produce a very interpretable feature. This shows that our notion of interpretability is distinct from the familiar notion of decorrelation. Lastly, we find that more interpretable neurons (higher II) show lower maximal synergy (Figure 6 E)). This suggest that their representation is already interpretable and that any pairing would only dilute it.
### Application to Biological Neural Data
Findings of _mixed selectivity_, i.e., hard to interpret neurons that code for multiple unrelated features have been reported before in neuroscience (Yoshida & Mori, 2007; Rigotti et al., 2013; Fusi et al., 2016). This suggests that the cortex may also encode meaningful features in superposition. Below, we perform the same analysis as above, but for cortical recordings from inferior temporal (IT) visual cortex in macaque monkeys--a cortical area involved in high level visual object recognition (Hung et al., 2005) with a specific preference for faces (Tsao et al., 2006).
#### 3.5.1 Face Cell Responses
Figure 7: **Face Cell Responses in IT Cortex. 1a row) Maximally Exciting Images (MEIs) for neurons (left) and directions (right) as retrieved with \(K\)-Means (see Fig. 4). 2d row) Accuracy across neurons and interpretable directions revealed by \(K\)-Means clusters for the _in silico_ psychophysics task for different levels of difficulty (see Fig. 5). 3d row) (left to right, see Fig. 6) Histogram of synergies for each pair of neurons; Relationship between the correlation and synergy over all pairs of neurons; Relationship between the II of a neuron and the maximum synergy it can achieve. 4th row) Example pairs (two highest synergies) of neurons and the result when adding them (see Fig. 6).**
We first examine a dataset from Vinken et al. (2023), consisting of the responses of \(449\) neurons (sites) to 1379 images (447 faces 932 non-face objects). We perform the same analysis pipeline as for the CNN above, i.e., we clustered the activations using K-Means and studied the learned _features_ by their MEIs and our _in silico_ psychophysics. The results are shown in Fig. 7. The top row is the same as Fig. 4, showing quantiles of II sorted neurons/features. This is a rather striking demonstration of the previous claim that IT cortex represents a 'domain-general object space' (Vinken et al., 2023), i.e., we find highly interpretable activation clusters that code, e.g., for faces, keyboard keys, round objects or characters. In the center row of Fig. 7, we perform the same _in silico_ psychophysics experiment as above (labels are now provided by \(3\) distinct image conditions, see (Van der Maaten and Hinton, 2008)). Intriguingly, we find the same if not a larger effect of increased interpretability when moving from individual neurons to the features we find with K-Means. Lastly, in the bottom row of Fig. 7, we perform the same synergy experiment as in Fig. 6 and also find the same qualitative pattern, including a skewed distribution over synergies, a positive link with pairwise neural correlations, and a negative link with the II score.
These results are an interesting extension of the conclusions from the original Vinken et al. (2023) paper, in which the authors concluded that MEIs give an incomplete picture and that face cells should rather be understood as representing a domain-general object space. We fully agree with the former conclusion--when limited to individual neurons. The additional insight that we obtain here is that the object space, represented by multiple IT neurons, is spanned by groups of features (in _superposition_) whose MEIs are meaningful in the sense that they correspond to interpretable coding directions (Fig. 7 (1\({}^{\text{st}}\) row)) and whose activations are interpretable across a wide range of quantiles (Fig. 7 (2\({}^{\text{nd}}\) row)).
#### 3.5.2 Disentangling Interpretable Features in IT Cortex
Next, we apply this analysis to the dataset from Higgins et al. (2021), which consists of 159 neurons in anterior middle (AM) macaque face area that were presented with 2100 human and monkey face images. We apply the same analysis pipeline as before, i.e., we cluster the activations using K-Means and study the learned _features_ through their MEIs and _in silico_ psychophysics. The results are shown in Fig. 8.Note that the images used in the experiment were greyscaled. Thus, we are limited to considering only brightness rather than color for the low-level metric. For the mid-level LPIPS metric, we feed the greyscale value into all three colour channels. For this dataset, it is not possible to consider the label-based method, as there is no category information provided for the images in this dataset.
Again, the MEIs Fig. 8 (top) and _in silico_ psychophysics Fig. 8 (center) tell a consistent story. That is, we find a significant increase in interpretability (psychophysics performance across all levels of difficulty) when moving from individual neurons to the K-Means features. Lastly, the synergy experiment Fig. 8 (bottom) also shows the same result pattern with a skewed synergy distribution, a positive link between pairwise neural correlations and synergies, and a negative link between interpretability and maximal synergy.
In the original paper by Higgins et al. (2021), the key insight was that _individual_ neurons encode disentangled, interpretable features of the data. By contrast, we find that directions in activation space that mix multiple neurons are more interpretable than individual units. To gain more insight into this discrepancy, we perform a similar analysis of disentanglement as the original paper. In the original paper, they trained \(400\)\(\beta\)-variational autoencoder (\(\beta\)-VAE) models (Higgins et al., 2016) with different seeds and hyper-parameters that, empirically, find interpretable factorizations of the data (although see Hyvarinen and Pajunen, 1999; Locatello et al., 2019). They used an unsupervised metric of _disentanglement_(Duan et al., 2019) to check if more disentangled models have a better one-to-one correspondence with IT neurons, and found a positive relationship.5 We find the same positive relationship in Fig. 9 (top) for both neurons (left) and K-Means features (right).
Footnote 5: Note that the same desideratum of having a sparse readout that links model features with neural responses leads to the identification of functional cell types in neural system identification (Klindt et al., 2017; Ustyuzhaninov et al., 2019).
In the middle of Fig. 9, we report the distribution over different disentanglement metrics for neurons and features. Surprisingly, we find that the _features_ tend to achieve higher scores (across the \(400\) model instances), suggesting that they are more disentangled than individual neurons. This
finding is particularly interesting because _disentanglement_ and _interpretability_ are logically separable concepts--the former can be mathematically formalized as _source recovery_ in (non-)linear ICA (Hyvarinen & Morioka, 2016, 2017; Klindt et al., 2020; Hyvarinen et al., 2023), while the latter is a complex function of human semantics. However, based on these results, we hypothesize that our measures of interpretablity are in fact related to classical notions of disentanglement or _source recovery_. Further supporting this idea, in the bottom of Fig. 9, we find a strong relationship between a supervised measure of disentanglement (i.e., the maximal absolute correlation between a neuron/feature and the model units, for the model with the highest UDR score) and our interpretability score (in terms of psychophysics logits).6
Footnote 6: We could not use the same disentanglement metrics above since those are across models, while here, we report scores across neurons/features.
Figure 8: **Interpretability in IT Cortex. 1****4 row** MEIs for neurons (left) and directions (right) as retrieved with \(K\)-Means (see Fig. 4). 2****4 row**) Accuracy across neurons and interpretable directions revealed by \(K\)-Means clusters for _in silico_ psychophysics task for different levels of difficult (see Fig. 5). 3****4 row**) (left to right, see Fig. 6) Histogram of synergies for every pair of neurons; Relationship between the correlation and synergy over all pairs of neurons; Relationship between the II of a neuron and its maximal synergy. 4****4 row**) Example pairs (highest synergies) of neurons and the result when adding them (see Fig. 6).
The results of these analyses are interesting for two reasons. First, they provide an alternative interpretation of the original data: Neurons may sometimes align with disentangled factors of the data, however, activity clusters that involve multiple neurons tend to be even better aligned with disentangled models. Second, we find that our measures of interpretability (here _in silico_ psychophysics accuracy) is strongly related to the more mathematically-grounded concept of disentanglement (Hyvarinen et al., 2023).
#### 3.5.3 Universality and Representational Drift
Shifting the focus away from individual neurons and, instead, considering meaningful clusters of activity in state space the computational primitives of neural network function provides a fresh view on _universality_(Olah et al., 2020), i.e., the similarity of representations of different neural systems, and _representational drift_, i.e., the changes of representations within the same neural system over time (Driscoll et al., 2022). To test this hypothesis, in this section we examine a neuroscience dataset from Allen et al. (2022). This dataset is interesting because it allows us to study representations across brain areas, across time and across human subjects. Specifically, this dataset consists of functional magnetic resonance imaging (fMRI) recordings of \(8\) human subjects, in over \(16\) brain areas, viewing \(9,000-10,000\) natural images across \(30-40\) scan sessions (Allen et al., 2022, for more details, see). The recorded _units_ (i.e., voxels/cubes in a \(3D\) sampling grid over the human
Figure 9: **Disentanglement and Interpretability in IT Cortex. Top)** Relation between _unsupervised_ disentanglement (Duan et al., 2019) of 400 models (\(\beta\)-VAE) and _supervised_ (models vs. neurons / features) disentanglement of neurons and features (Higgins et al., 2021). **Center)** Different measures of disentanglement (_DCI_(Eastwood and Williams, 2018) and _MCC_(Hyvarinen and Morioka, 2016; 2017)) for neurons and features. _DCI Disentanglement_ corresponds to the metric named _alignment_ in the original paper (Eastwood and Williams, 2018; Higgins et al., 2021). **Bottom)** Relationship between interpretability (psychophysics logits for LPIPS metric and full quantile range, i.e., \(x=1.0\) in Fig. 8, \(2^{\text{nd}}\) row, left) and disentanglement (see text).
brain) combine the activity of many neurons across time which presents an interesting extension to the hypothesis of superposition, since these voxels can already be considered linear combinations of individual neurons with, potentially, interesting filtering properties (Kriegeskorte et al., 2010).
As in the previous experiments, we perform the same analysis pipeline of identifying clusters in neural activity space and calculating the psychophysics accuracy for clusters and units. We find, again, that for all brain areas (Subject 1) the clusters achieve a higher psychophysics accuracy, suggesting that they are more interpretable than the individual units Fig. 10 (top). Moreover, we see a tendency for lower areas (e.g., V1v, (see Allen et al., 2022)) to have a larger gain for the low level color metric, whereas higher areas (e.g., FBA-2, FFA-2) have a larger gain for the high level label metric. This provides further support for the existence of non-axis aligned interpretable features across visual cortical areas.
A key finding in interpretability research is the phenomenon of _convergent learning_(Li et al., 2015)--the observation that diverse neural network architectures trained on similar tasks converge to similar internal representations. This naturally leads to the question of _universality_(Olah et al., 2020): Is there a set of canonical features for a data domain that consistently emerge across neural systems? Here, we ask whether the interpretable directions identified with our method exhibit greater universality than neurons alone. If this hypothesis is true, we would expect to see greater similarity between _directions_ in neural activity space across subjects than between _neurons_. We next test the "universality" of these features, by examining how well the discovered directions transfer across subjects and recording sessions.7
Footnote 7: Note that clusters can be easily transferred across recordings or subjects by computing the centroids in the target space based on the clustering labels in the source space.
Figure 10: **Interpretable Features Transfer Across Subjects and Recording Sessions.** Using the dataset from Allen et al. (2022), we examine: **Top)** Differences in psychophysics accuracy between clusters and units (fMRI voxels) for different difficulties (horizontal axis), metrics (columns) and brain areas (colours). A value above zero indicates that the clusters achieve a higher psychophysics accuracy, i.e., are more interpretable. **Middle, left)** Best matching units (maximal correlation between activations, for \(N=1000\) images) across all brain areas of distinct subjects—testing representational universality across brains (Vinken et al., 2023). **Middle, right)** Same as middle left but across scan sessions—testing representational drift across time (for \(N=88\) images that were presented in both scan sessions \(13\) and \(14\)). **Bottom)** Exemplary low (left, V1v) and high (right, FFA-1) level representation drift, i.e. correlation of units and clusters across sessions.
In Fig. 10 (middle), we find that the identified directions transfer across subjects (middle, left) better than units alone. In addition, we find that these directions transfer better across recording sessions (middle, right) than individual units. In Fig. 10 (bottom), we show exemplary low (left) and high (right) level brain areas where the activity across sessions is much more correlated for the clusters than for the units. This is in line with prior results, where Roth & Merriam (2023) observed that the representational dissimilarity (i.e., the relative position of activity patterns for different stimuli (Kriegeskorte et al., 2008) remained stable across scan sessions. This suite of findings provides support to the idea that more interpretable features transfer more easily across different representations, in line with the findings of Vinken et al. (2023) about the universality of interpretable features.
## 4 Discussion
In this work, we have proposed a quantitative metric of _interpretability_ and a method for finding interpretable features in activation space. We hope that further research will find better metrics and better feature identification methods. Nevertheless, we believe that our initial combination of metric and feature recovery method used here demonstrates the viability of our framework for automating interpretability research for vision models and visual cortex. In particular, we emphasize the value of validating quantitative metrics of interpretability against large-scale human psychophysics experiments of interpretability (Zimmermann et al., 2023). This allows us to scale human intuition to large-scale, complex neural network models--thus automating what is ordinarily done in mechanistic interpretability research by hand (Leavitt & Morcos, 2020). We hope that this approach will ultimately lead to a better understanding of neural coding principles and cast light into the black box of deep network representations.
Shifting focus from individual neurons to populations has been an important development in neuroscience (Avebeck et al., 2006; Stanley, 2013; Hebb, 2005; Gao & Ganguli, 2015; Jacobs et al., 2009; Ebitz & Hayden, 2021). In fact, _mixed selectivity_ is widely observed in neuroscience, (Yoshida & Mori, 2007; Rigotti et al., 2013) and there are coding advantages believed to be conferred by such a representation (Fusi et al., 2016; Driscoll et al., 2022). We also tested a recent neural coding hypothesis that combines sparse coding with disentanglement in the framework of the _sparse manifold transform_(Chen et al., 2018). In App. C we find support for the notion that interpretable features are more sparsely localized on the data manifold. Moreover, this theoretical framework could help further elucidate the link between interpretability (of discrete clusters) and disentanglement that we found in neural data (Higgins et al., 2021). Lastly, such a code may be more robust to input perturbations (Morcos et al., 2018), as suggested by our sensitivity analysis (App. D) (but see Barak et al., 2013; Johnston et al., 2020; Fusi et al., 2016). In App. F, we show that network activations follow the same spectral power law as cortical representations (Stringer et al., 2019). That is, they are low-dimensional enough to maintain differentiability (i.e. they are robust to input perturbations), while being high-dimensional enough to capture the data structure. This suggests a _universal_ coding strategy employed by biological and artifical neural networks alike. We believe that future analyses grounded in a quantified metric of interpretability may illuminate the computational function of these convergent neural coding strategies.
## Acknowledgements
We would like to thank Roland Zimmermann and Wieland Brendel for discussions, experiments with metrics and for sharing their psychophysics data. Moreover, thanks to Vinken et al. (2023) and Higgins et al. (2021) for publicly sharing their data and to Le Chang for helping with the extraction. We would also like to thank Katrin Franke and Andreas Tolias for discussions and feedback on the manuscript. This work was supported by the U.S. Department of Energy, under DOE Contract No. DE-AC02-76SF00515, the SLAC National Accelerator Laboratory LDRD program, and the National Science Foundation under Grant 2313150. Finally thanks to the complete Geometric Intelligence Lab at UCSB for providing feedback and support for this work. |
2303.11509 | Light quark and antiquark constraints from new electroweak data | We present a new parton distribution function analysis which includes new
data for W boson production in proton-proton collisions and lepton pair
production in proton-proton and proton-deuteron collisions. The new data
provide strong constraints on the light antiquark parton distribution functions
in the proton. We identify an interesting correlation between the $d/u$ ratio
and the $\bar{d}/\bar{u}$ ratio which leads to a modification of our previous
results for the $d/u$ ratio as the parton momentum fraction $x \rightarrow 1.$ | Alberto Accardi, Xiaoxian Jing, Joseph Francis Owens, Sanghwa Park | 2023-03-21T00:08:36Z | http://arxiv.org/abs/2303.11509v1 | # Light quark and antiquark constraints from new electroweak data
###### Abstract
We present a new parton distribution function analysis which includes new data for W boson production in proton-proton collisions and lepton pair production in proton-proton and proton-deuteron collisions. The new data provide strong constraints on the light antiquark parton distribution functions in the proton. We identify an interesting correlation between the \(d/u\) ratio and the \(\bar{d}/\bar{u}\) ratio which leads to a modification of our previous results for the \(d/u\) ratio as the parton momentum fraction \(x\to 1\).
+
Footnote †: preprint: JLAB-THY-23-3782
## I Introduction
Predictions for high energy lepton-hadron and hadron-hadron hard collisions rely on perturbative QCD-based calculations for the parton-parton scattering cross sections. These are then convoluted with the appropriate parton distribution functions (PDFs) to obtain predictions for experimentally measured observables. A recent evaluation of PDF determinations can be found in Ref. [1]. Global fits for PDFs by the CJ Collaboration have focused on simultaneously extending the reach in \(x\) towards \(x\approx 1\) and reducing the minimum value of the squared four-momentum transfer, \(Q^{2}\), included in the fitting process [2; 3]. The focus of this analysis is, instead, on the flavor dependence of the light-quark sea, specifically the behavior in \(x\) of the ratio \(\bar{d}/\bar{u}\). One important source of information on the behavior of this ratio is lepton pair production using a proton beam on both proton and deuteron targets. The most precise such data available previously came from the E866 experiment [4]. These data suggested that \(\bar{d}/\bar{u}\) initially rose from a value of unity to a maximum near \(x\approx 0.15\) followed by a fall-off to a value below unity by \(x\approx 0.30\). However, by this region in \(x\) the data were statistically limited. The CJ12 PDFs [2] were parametrized in such a way that the ratio could follow the data to values below one. This led to a rapid fall-off of the \(\bar{d}\) PDF below 0 as \(x\) increased beyond about 0.3. The CJ15 PDFs [3] employed an alternative parametrization which constrained \(\bar{d}/\bar{u}\) to approach one from above at large values of \(x\).
New data from the SeaQuest experiment [5], the successor to the E866 experiment, have a greater reach in \(x\) as well as increased statistics. Additionally, new data [6] from the STAR Collaboration on \(W\) boson production in proton-proton collisions have become available. These data also offer additional constraints on \(\bar{d}/\bar{u}\). It is the purpose of this analysis to assess the effects of these new data sets on the behavior of \(\bar{d}/\bar{u}\) over the \(x\) range out to \(x\approx 0.4\). At the same time, we expose a little noticed correlation between the light-antiquark and light-quark ratios inherent in the available lepton pair production data and in the mid-rapidity weak boson production data that affects the extrapolation of the \(d/u\) ratio to values of \(x\) approaching 1. In particular, we examine different parametrizations to see their effects on the extracted ratios. Preliminary results have been presented at DIS 2021 [7], and analyses of the new data have also been performed by the CT [8] and JAM [9] collaborations.
The plan of the paper is as follows. In Section II the framework for the global fits is described, including the parametrizations used for the various PDFs and the higher-twist and nucleon off-shell corrections. Section III contains a discussion of the data sets used with special attention paid to the new data, while Section IV presents the results of this analysis. The conclusions are summarized in Section V.
## II Light quarks and antiquarks in the Cj global analysis
The new CJ22 global fit we report in this paper combines elements of the CJ15 [3] and CJ15-a [10] analyses in order to provide sufficient flexibility in the determination of the mid-\(x\)\(\bar{d}/\bar{u}\) ratio after the inclusion of the STAR and SeaQuest data. The new fit will also allow us to properly analyze the correlation of the mid-\(x\)\(\bar{d}/\bar{u}\) ratio and the large-\(x\)\(d/u\) ratio induced by weak boson production data and how these impact the extrapolation of \(d/u\) to \(x\to 1\). In this section we focus on the methodological and numerical aspects of our fits, and in the next one we will discuss the global data set we utilize.
### PDF parametrization and theoretical setup
The latest CTEQ-JLab global fit (CJ15) was performed using the world deep-inelastic scattering (DIS) data set,
as well as a variety of jet and electroweak boson production measurements [3]. Among these, lepton pair production measurements by the E866 experiment at Fermilab provided the strongest constraints on the light antiquark sea, covering the \(0.015\lesssim x\lesssim 0.3\) parton momentum fraction region, with additional sensitivity provided by fixed target DIS data, in particular from the NMC experiment.
At the input scale of \(Q_{0}^{2}=1.69\) GeV\({}^{2}\), a standard five-parameter functional form was used for most of parton species, including the \(\bar{u}+\bar{d}\) combination:
\[xf(x,Q_{0}^{2})=a_{0}x^{a_{1}}(1-x)^{a_{2}}(1+a_{3}\sqrt{x}+a_{4}x) \tag{1}\]
The valence \(d\) quark was however allowed to mix with the valence \(u\) quark parametrization at large \(x\), as to allow a finite limit for the \(d/u\) ratio:
\[d_{v}(x,Q_{0}^{2})\to a_{0}^{d_{v}}\Big{(}\frac{d_{v}(x,Q_{0}^{2})}{a_{0}^{d_{ v}}}+b\,x^{c}\,u_{v}(x,Q_{0}^{2})\Big{)}\, \tag{2}\]
with \(b\) and \(c\) as two additional parameters. As a result, the ratio \(d_{v}/u_{v}\) could tend to a non-zero value as \(x\to 1\), provided that \(a_{2}^{d_{v}}>a_{2}^{u_{v}}\), which is usually the case. As in the CJ15 and CJ15-a fits, only the \(b\) parameter was left free and \(c=2\) kept fixed, since the new data do not provide additional constraints on the valence quark ratio at large \(x\).
Turning to the light antiquarks, the \(\bar{d}/\bar{u}\) ratio in the original CJ15 fit was parametrized as
\[\bar{d}/\bar{u}=a_{0}\,x^{a_{1}}(1-x)^{a_{2}}+1+a_{3}x(1-x)^{a_{4}} \tag{3}\]
due to the limited \(x\) coverage of the E866 data and the sharp downturn these required of the \(\bar{d}/\bar{u}\) ratio. With this parametrization we also enforced the theoretical expectation from most modeling efforts that \(\bar{d}/\bar{u}\) remains greater than or equal to one all the way up to \(x\to 1\), and tends to 1 in that limit. This assumption was however revisited in Ref. [10], where the \(\bar{d}-\bar{u}\) difference was considered instead of \(\bar{d}/\bar{u}\) and parametrized as in Eq. (1),
\[x\big{(}\bar{d}-\bar{u}\big{)}=\bar{a}_{0}\,x^{\bar{a}_{1}}(1-x)^{\bar{a}_{2} }(1+\bar{a}_{4}x)\, \tag{4}\]
with the resulting fit called CJ15-a. Even if the new parametrization allowed for it, no strong indication of a sign change in the \(\bar{d}-\bar{u}\) asymmetry in the \(x\lesssim 0.3\) region measured by E866 was found.
With the new data from STAR sensitive to the smaller-\(x\) rise of the \(\bar{d}-\bar{u}\) asymmetry, and the new SeaQuest data constraining the \(\bar{d}/\bar{u}\) ratio at \(0.15\lesssim x\lesssim 0.4\), well across the region where E866 indicated this would drop below 1, we can now revisit this whole issue. To allow for sufficient versatility in the description of the light quark sea, in this paper we will utilize the more flexible parametrization (4). Furthermore, we will leave the \(\bar{a}_{2}\) parameter free instead of fixing it to 2.5 units larger than the corresponding parameter for the \(\bar{u}+\bar{d}\) combination as done in the CJ15-a analysis, thus providing additional freedom to the \(\bar{d}/\bar{u}\) ratio in the limiting \(x\to 1\) region.
Apart from this change in parametrization, we will adopt the same theoretical setup as in the CJ15 fits, as described in Ref. [3]. In particular: we perform fits at next-to-leading order accuracy in the ACOT-\(\chi\) heavy quark scheme; include target mass corrections for DIS data according to the OPE prescription by Georgi and Politzer [11; 12]; and adopt the "weak-binding approximation" to correct for nucleon binding and Fermi motion in DIS and DY cross sections on deuteron targets, with the AV18 deuteron wave function describing the nucleon dynamics inside the target. Higher-twist corrections for DIS structure functions and off-shell nucleon corrections in deuteron targets will be discussed in more detail in the next subsection.
From a numerical point of view, and at variance with the CJ15 family of analyses, NLO QCD corrections to the calculation of \(W\) and \(Z\) production cross sections were implemented by means of the APPLgrid [13] fast NLO interface. The necessary coefficient grids were calculated by means of the MCFM 6.8 event generator [14; 15], and tested against the Tevatron weak boson production data already included in the CJ15 analysis. Details about the grids for weak boson production at the Relativistic Heavy Ion Collider (RHIC) will be discussed in Section III.
### Higher-twist and off-shell corrections
In DIS at low \(Q^{2}\) values, power suppressed corrections exist beyond target mass corrections, for example, genuine multiparton correlations; missing higher order perturbative corrections can also resemble power corrections at small scale values. Regardless of their origin, we account for these residual power suppressed contributions by using a phenomenological multiplicative factor to modify the proton and nucleon structure functions as in all earlier CJ fits,
\[F_{2}(x,Q^{2})=F_{2}^{\rm LT}(x,Q^{2})\left(1+\frac{C(x)}{Q^{2}}\right), \tag{5}\]
where \(F_{2}^{\rm LT}\) denotes the leading twist structure function including target mass corrections, and \(C\) is assumed to be isospin independent due to the relatively weak constraining power of the adopted data sets [16; 17; 18]. Following common usage, we generically refer to the fitted \(1/Q^{2}\) term as a "higher-twist" (HT) correction, and parametrize the coefficient function \(C\) by
\[C(x)=a_{\rm HT}\,x^{b_{\rm HT}}(1+c_{\rm HT}x)\,. \tag{6}\]
For ease of notation, we collect the higher-twist parameters in the vector
\[\mathbf{a}_{\rm HT}=(a_{\rm HT},b_{\rm HT},c_{\rm HT})\,. \tag{7}\]
In deuteron targets, the nucleons are off their \(m_{N}^{2}\) mass shell with a four-momentum squared \(p_{N}^{2}\neq m_{N}^{2}\). While the off-shell nucleon PDF, \(\widetilde{f}\), is not an observable per
se, its dependence on the nucleon virtuality \(p_{N}^{2}\) can be studied within a given theoretical framework. For weakly bound nucleons such as in the deuteron, for example, one may expand \(\widetilde{f}\) to lowest order about its mass shell [19; 20],
\[\widetilde{f}(x,p_{N}^{2},Q^{2})=f(x,Q^{2})\left(1+\frac{p_{N}^{2}-M ^{2}}{M^{2}}\delta f(x,Q^{2})\right), \tag{8}\]
and the off-shell correction function \(\delta f\) can be parametrized and fitted to data. In this work, we adopt the flavor-independent CJ15 parametrization
\[\delta f(x)=\mathcal{N}(x-x_{0})(x-x_{1})(1+x_{0}-x) \tag{9}\]
inspired by earlier work by Kulagin and Petti on off-shell PDF deformations in heavier nuclei [21]. The \(x_{0}\) crossing and \(\mathcal{N}\) normalization parameters are simultaneously fitted with the PDF and HT parameters, and \(x_{1}\) is determined by requiring that the off-shell correction does not modify the number of valence quarks in the nucleon,
\[\int_{0}^{1}dx\,\delta f(x)\,\left[q(x)-\bar{q}(x)\right]=0 \tag{10}\]
with \(q=u,d\), see [3] for details. More flexible parametrizations have been studied [22] and will be reported elsewhere. Finally, for ease of discussion, we collect the off-shell parameters into the vector
\[\mathbf{a}_{\rm off}=(\mathcal{N},x_{0},x_{1}). \tag{11}\]
### Treatment of uncertainties
The full set of fit parameters, including the PDF parameters discussed in Section II.1, the higher-twist parameters and the off-shell parameters reads
\[\mathbf{a}=(\mathbf{a}_{\rm pdf},\mathbf{a}_{\rm HT},\mathbf{a}_{\rm off}) \tag{12}\]
for a total number \(n_{\rm par}\) of parameters. The observables \(\sigma\) we are interested in (for example the PDFs themselves, or the DIS structure functions, or the lepton pair production cross section) depend on the fitting parameters via the PDFs \(f\), the HT function \(C\), and the off-shell function \(\delta f\). Schematically,
\[\sigma[\mathbf{a}]=\sigma\big{(}f[\mathbf{a}_{\rm PDF}],C[\mathbf{a}_{\rm HT} ],\delta f[\mathbf{a}_{\rm off}]\big{)}\,. \tag{13}\]
The uncertainty on these observables can be estimated in the Hessian formalism [23; 24]. With a sufficiently precise data set \(\mathbf{m}=\{m_{1},\ldots,m_{n_{\rm def}}\}\) and a suitably defined \(\chi^{2}=\chi^{2}(\mathbf{a},\mathbf{m})\) chi-squared function, this method can approximate the parameter likelihood \(\mathcal{L}(\mathbf{a}|\mathbf{m})=\exp\big{(}-\frac{1}{2}\chi^{2}(\mathbf{a},\mathbf{m}) \big{)}\) as a multi-variate Gaussian distribution in parameter space centered around the best-fit value, \(\mathbf{a_{0}}\), of the parameters [25]. Namely,
\[\mathcal{L}(\mathbf{a}|\mathbf{m})\propto\exp\Big{(}-\frac{1}{2}\Delta \mathbf{a}^{T}\,H\,\Delta\mathbf{a}\Big{)}, \tag{14}\]
where \(m\) represent the data set being fitted, \(\Delta\mathbf{a}=\mathbf{a}-\mathbf{a_{0}}\), and the Hessian matrix elements are given by
\[H_{ij}=\frac{1}{2}\left.\frac{\partial^{2}\chi^{2}(\mathbf{a})}{ \partial a^{i}\partial a^{j}}\right|_{\mathbf{a}=\mathbf{a_{0}}}\,,\quad i,j=1,\ldots n _{\rm par}\,. \tag{15}\]
The Hessian matrix can then be diagonalized, and reparametrized in terms of the eigendirections of the Hessian matrix via
\[\mathbf{a}(\mathbf{t})=\mathbf{a_{0}}+\sum_{k=1}^{n_{\rm par}}t_{k}\frac{ \mathbf{e}_{\mathbf{k}}}{\sqrt{w_{k}}}\,, \tag{16}\]
where \(\mathbf{e}_{\mathbf{k}}\) and \(w_{k}\) are the orthonormal eigenvectors and eigenvalues of the Hessian matrix, respectively, and \(\mathbf{t}=\{t_{1},\ldots,t_{n_{\rm par}}\}\) is a vector of scaling factors. In terms of these variables, the approximated likelihood (14) is a symmetric Gaussian with \(\mathbf{t}^{2}=1\) identifying the 68% confidence level on the fitted parameters.
The standard CJ PDF error sets are then obtained by uniformly scaling each eigenvector by a "tolerance factor" \(t_{k}=T\) to nominally produce an increase of \(T^{2}\) above the minimum in the \(\chi^{2}\) function. In both the CJ15 and the CJ15-a analyses \(T=1.645\) was chosen, corresponding to a 90% Gaussian confidence level. In other analyses different choices are made to also account for tensions between the chosen data sets: for example, \(T=10\) in the CT10 global fit [26].
However, in global QCD fits including CJ15, a few Hessian eigenvectors are typically not constrained enough by the available data and the likelihood can deviate from a Gaussian shape even within \(t_{k}\) variations of order \(O(1)\). This can happen, in particular: when data are scarce for a particular flavor combination, such as for \(\bar{d}/\bar{u}\) at \(x\gtrsim 0.3\); or closer to a kinematic threshold, such as for the \(d/u\) ratio as \(x\to 1\), where one expects the ratio to decrease towards 0, but not necessarily reaching that value [27].
A better approximation to the likelihood function [25; 28] can be obtained by scanning the \(\chi^{2}\) function along each eigenvector starting from the best-fit parameters \(\mathbf{a_{0}}\), until parameters \(\mathbf{a_{i}}\) are found in the plus- and minus-directions such that the \(\chi^{2}\) function increases above its best-fit value by an amount \(T^{2}\):
\[\Delta\chi^{2}(\mathbf{a_{2i+1}})=\Delta\chi^{2}(\mathbf{a_{2i}}) =T^{2}\] \[\forall\ i=1,\ldots,n_{par}\, \tag{17}\]
where \(\Delta\chi^{2}(\mathbf{a})=\chi^{2}(\mathbf{a})-\chi^{2}(\mathbf{a_{0}})\). These parameter vectors correspond to a set of \(t_{k}^{\pm}\) values that are close to \(T\) wherever the Gaussian approximation holds, but can substantially deviate from this value along a few eigendirections. In other words, we adopt a local and asymmetric tolerance criterion instead of assuming \(t_{k}=T\) globally. In practice, this scheme deforms the Hessian approximation of the likelihood in order to account in an approximate way for departures from a purely Gaussian behavior. It is also suitable for large \(T\) values, for which the Hessian approximation cannot be _a priori_ assumed
to hold. As with other global QCD analyses using local tolerance criteria [29; 30], the price to be paid is that, while the chosen \(T\) value legitimately defines a confidence region in parameter space, this cannot be readily and unambiguously associated with a confidence level figure as one can do with the pure Hessian approximation.
In practical terms, we define parameter sets corresponding to variations along each eigendirection in the plus and minus directions, respectively, as
\[\mathbf{a_{2k}} =\mathbf{a_{0}}+t_{k}^{+}\frac{\mathbf{e_{k}}}{\sqrt{w_{k}}} \tag{18}\] \[\mathbf{a_{2k+1}} =\mathbf{a_{0}}-t_{k}^{-}\frac{\mathbf{e_{k}}}{\sqrt{w_{k}}}\, \tag{19}\]
such that \(\Delta\chi^{2}[\mathbf{a_{i}}]=T\) for all \(i=1,\ldots,2n_{par}\). Then, the upper and lower \(\delta\sigma_{+}\) and \(\delta\sigma_{-}\) uncertainties on an observable \(\sigma\) can be calculated using the expressions
\[\delta\sigma_{+}^{2} =\sum_{i=1}^{n_{\text{par}}}\Big{[}\max\Big{(}\sigma[\mathbf{a_{2i-1} }]-\sigma[\mathbf{a_{0}}],\sigma[\mathbf{a_{2i}}]-\sigma(\mathbf{a_{0}}),0\Big{)}\Big{]}^{2} \tag{20a}\] \[\delta\sigma_{-}^{2} =\sum_{i=1}^{n_{\text{par}}}\Big{[}|\max\Big{(}\sigma[\mathbf{a_{0}}] -\sigma[\mathbf{a_{2i-1}}],\sigma[\mathbf{a_{0}}]-\sigma[\mathbf{a_{2i}}],0\Big{)}\Big{]}^{2}. \tag{20b}\]
Alternatively, a symmetrized uncertainty can be obtained via
\[\delta\sigma^{2}=\frac{1}{4}\sum_{i=1}^{n_{\text{par}}}\Big{(}\sigma[\mathbf{a_{2i- 1}}]-\sigma[\mathbf{a_{2i}}]\Big{)}^{2}\,. \tag{21}\]
Note that the choice of tolerance \(T\) value (\(T=1.645\) in this paper) is already incorporated in Eqs. (20a)-(20b) and (21). The effect of alternative \(T^{\prime}\) tolerance choices can be approximately obtained by rescaling these uncertainties by a \(T^{\prime}/T\) factor as long as the two tolerance values are not too different from each other. However, care must be exercised for observables sensitive to the non-Gaussian regions of the parameter space.
## III Dataset
The CJ15 analysis included DIS data from fixed target electron-hadron scattering experiments at Jefferson Lab [31; 32], HERMES [33], SLAC [34], BCDMS [35; 36], and NMC [37; 38], and from the HERA \(ep\) collider [39]; \(W\)[40; 41; 42; 43; 44] and \(Z\)[45; 46] asymmetries as well as jet [47; 48; 49] and \(\gamma+\)jet [50] data from CDF and D0 experiments at Tevatron; lepton pair production (LPP) from the E866 experiment at Fermilab [4].
Previously, the antiquark PDFs in the mid-\(x\) region were mainly constrained by the lepton pair production data from the E866 experiment. In the CJ22 fits we have now included recent data that are sensitive to the antiquarks from the new lepton pair production measurements by the E906/SeaQuest experiment [5] and the rapidity distribution of the \(W^{+}/W^{-}\) ratio in \(pp\) collisions by the STAR experiment [6]. The data sets used in the CJ22 fit are listed in Table 1.
The SeaQuest data covers the kinematic range \(0.1<x<0.45\) extending the large \(x\) reach from the E866 experiment at different \(Q^{2}\). The cross section ratio of the lepton pair production in \(pp\) and \(pd\) interactions attracts particular interest as it can be directly related to \(\bar{d}/\bar{u}\). In the forward region, the ratio can be written as
\[\frac{\sigma_{pd}}{\sigma_{pp}}\approx\frac{4+\frac{d(x_{b})}{u(x_{b})}}{4+ \frac{d(x_{b})}{u(x_{b})}\frac{\bar{d}(x_{t})}{\bar{u}(x_{t})}}\left(1+\frac{ \bar{d}(x_{t})}{\bar{u}(x_{t})}\right). \tag{22}\]
When \(x_{b}\to 1\), the \(d/u\) ratio tends to 0 and can be neglected, so that the cross section ratio becomes sensitive only to the \(\bar{d}/\bar{u}\) ratio. However, neither for the E866 experiment (\(x_{b}=0.3-0.5\)) nor for the SeaQuest experiment (\(x_{b}=0.5-0.7\)) is this condition satisfied, and
\begin{table}
\begin{tabular}{l|l|c|c|c} \hline Obs. & Experiment & Ref. & \# Points & \(\chi^{2}\) \\ \hline DIS & JLab (p) & [31] & 136 & 161.0 \\ & JLab (d) & [31] & 136 & 119.1 \\ & JLab (n/d) & [32] & 191 & 213.2 \\ & HERMES (p) & [33] & 37 & 29.1 \\ & HERMES (d) & [33] & 37 & 29.5 \\ & SLAC (p) & [34] & 564 & 469.8 \\ & SLAC (d) & [34] & 582 & 412.1 \\ & BCDMS (p) & [35] & 351 & 472.2 \\ & BCDNS (d) & [36] & 254 & 321.8 \\ & NMC (p) & [37] & 275 & 416.5 \\ & NMC (d/p) & [38] & 189 & 199.6 \\ & HERA (NC \(e^{-}p\)) & [39] & 159 & 249.7 \\ & HERA (NC \(e^{+}p\) 1) & [39] & 402 & 598.9 \\ & HERA (NC \(e^{+}p\) 2) & [39] & 75 & 98.8 \\ & HERA (NC \(e^{+}p\) 3) & [39] & 259 & 250.0 \\ & HERA (NC \(e^{+}p\) 4) & [39] & 209 & 229.1 \\ & HERA (CC \(e^{-}p\)) & [39] & 42 & 45.6 \\ & HERA (CC \(e^{+}p\)) & [39] & 39 & 52.5 \\ & E866 (\(pp\)) & [4] & 121 & 144.1 \\ & E866 (\(pd\)) & [4] & 129 & 157.4 \\ & SeaQuest (\(d/p\)) & [5] & 6 & 7.5 \\ W & CDF (\(e\)) & [40] & 11 & 12.6 \\ & D0 (\(e\)) & [41] & 13 & 28.8 \\ & D0 (\(\mu\)) & [42] & 10 & 17.5 \\ & CDF (\(W\)) & [43] & 13 & 18.0 \\ & D0 (\(W\)) & [44] & 14 & 14.5 \\ & STAR (\(e^{+}/e^{-}\)) & [6] & 9 & 25.3 \\ & (less \(\eta_{\text{max}}\) point) & (8) & (15.4) \\ Z & CDF & [45] & 28 & 29.2 \\ & D0 & [46] & 28 & 16.1 \\ jet & CDF & [47] & 72 & 14.0 \\ & D0 & [48; 49] & 110 & 14.0 \\ \(\gamma+\)jet & D0 1 & [50] & 16 & 8.7 \\ & D0 2 & [50] & 16 & 19.3 \\ & D0 3 & [50] & 12 & 25.0 \\ & D0 4 & [50] & 12 & 12.2 \\ \hline & total & & 4557 & 4936.6 \\ & total + norm & & 4573 & 4948.6 \\ \hline \end{tabular}
\end{table}
Table 1: Data sets and corresponding number of data points and \(\chi^{2}\) values from the CJ22 analysis.
the data are sensitive both to the \(\bar{d}/\bar{u}\) quark ratio and, subdominantly, to the \(d/u\) ratio.
In \(pp\) collisions, \(W\) bosons are produced from quark-antiquark fusion and therefore provide clean access to quark and antiquark distributions inside the proton at a large momentum scale \(Q^{2}=M_{W}^{2}\). The STAR experiment at RHIC has recently reported the unpolarized \(W\) and \(Z\) boson cross sections at \(\sqrt{s}=510\) GeV via \(W^{\pm}\to e^{\pm}+\nu\) and \(Z\to e^{+}e^{-}\) decays, respectively in the pseudorapidity range \(-1.0<\eta\leq 1.5\). An observable that is particularly sensitive to the \(\bar{d}(x)/\bar{u}(x)\) ratio is the \(W^{+}/W^{-}\) ratio of \(W\) boson cross sections. At leading order this can be written as
\[\frac{\sigma_{W^{+}}}{\sigma_{W^{-}}}\ \approx\ \frac{u(x_{1})\bar{d}(x_{2})+\bar{d}(x_{1})u(x_{ 2})}{d(x_{1})\bar{u}(x_{2})+\bar{u}(x_{1})d(x_{2})}\ \underset{y_{{}_{W}}\approx 0}{\approx}\ \frac{ \bar{d}/\bar{u}}{d/u} \tag{23}\]
where \(x_{1,2}=(M_{{}_{W}}/\sqrt{s})\exp(\pm y_{{}_{W}})\) is the fractional momentum carried by the scattering partons, with \(y_{{}_{W}}\) the rapidity of the produced boson. At midrapidity, where \(x_{1}=x_{2}\approx 0.16\), the cross section ratio directly accesses both the antiquark and the quark ratios. At larger rapidity, the accessible \(x_{1,2}\) range is somewhat limited by the boson decay kinematics as well as by the statistical precision of the data, and the measured lepton asymmetry effectively probes light quarks and antiquarks with fractional momenta \(x\) in the \(0.05\lesssim x\lesssim 0.25\) range.
Of the STAR measurements, the \(W\) boson charge ratio is the most sensitive to the quark and antiquark ratios, and has been included in the CJ22 analysis. We have not included either of the charged separated \(W\) or the \(Z\) measurements because these do not provide significant additional constraints on the PDF determination, but we will discuss in the next section how well the new fit describes those data. As already mentioned, for the STAR \(W\) and \(Z\) cross section calculations we use fast NLO interpolation grids that were created using AP-PLgrid [13] interfaced with the MCFM event generator [14; 15]. The events were generated using the experimental cuts for electron transverse momentum (\(p_{e}>15\) GeV/\(c\)) and energy (\(25<E_{e}<50\) GeV). The STAR \(W^{\pm}\to e^{\pm}+\nu\) measurements also require cuts to suppress jet background. To reproduce a similar condition, we excluded events with produced jets, obtaining approximately a 20% reduction in the calculated cross section. For \(Z\to e^{+}e^{-}\) events, the generated events are collected within the experimental invariant mass range of 70 GeV \(<M_{e^{+}e^{-}}<110\) GeV for electron pairs. During the fit, the grids so obtained are then convoluted with the PDFs to calculate the needed NLO cross sections.
## IV Results
In this Section, the results of the new analysis are presented and compared with the previously published CJ15 results [3].
Figure 1 compares the lepton pair production cross section ratios from E866 and SeaQuest with our calculations before and after including SeaQuest data into the CJ22 fit. The comparison is done separately for the kinematics of each experiment, with E866 using a 800 GeV proton beam and SeaQuest a 120 GeV beam, and each experiment accessing a different range of lepton pair mass \(M\). For the SeaQuest data, we include the spectrometer acceptance matrix provided in Ref. [5], which has a relatively small effect on an already relatively flat observable.
Looking at Eq. (22), if \(d(x_{b})/u(x_{b})\ll 1\) was really negligible, as is often assumed for simplicity, one would expect that the ratio of the proton to deuteron cross section data should be approximately independent of \(M^{2}\). In
Figure 1: Comparison of the measured cross section ratio for lepton pair production in \(pd\) and \(pp\) collisions from the E866 [4; 51] and SeaQuest [5] experiments with NLO calculations. The solid blue (red) curve with \(T=1.645\) uncertainty band represents the ratio calculated at the E866 (SeaQuest) kinematics before (left) and after (right) including the new weak boson production data from SeaQuest and STAR in the fit.
stead, the ratio measured by the experiments is different, which is a direct reflection of the role of the \(d(x_{b})/u(x_{b})\) ratio in Eq. (22). Indeed, at the higher \(x_{b}\) probed by SeaQuest that ratio is smaller than at the lower \(x_{b}\) values of the corresponding E866 measurement. Therefore, one should expect the cross section ratio to be higher for SeaQuest than for E866, which is confirmed in Figure 1. Furthermore, Eq. (22) shows that an increase in the anti-quark \(\bar{d}(x_{t})/\bar{u}(x_{t})\) ratio can be compensated in the PDF fit by a decrease in the \(d(x_{b})/u(x_{b})\) quark ratio and vice versa. The resulting anticorrelation will be important to understand the behavior of the fitted PDF ratios that will be discussed later.
In the left panel of Figure 1, the calculations from the fit that only includes the E866 data show a steeper downturn in the cross section ratio than allowed by the SeaQuest data. With the new data added to the fit, however, the CJ22 PDFs bring the ratio plotted in the right panel distinctly above 1 in the large-\(x\) region where E866 has limited kinematic coverage and statistical precision, with substantially reduced PDF uncertainty. While the new cross section calculation lies higher than the last two E866 data points, only a minor increase in \(\chi^{2}\)/datum from 1.63 to 1.93 is observed for the E866 data because of the relatively large uncertainty of the last few data points. Conversely the \(\chi^{2}\)/datum value sharply reduces from 3.19 to 1.25 for the SeaQuest data after including the new data, reflecting both the enhanced kinematic
Figure 2: The measured (a) \(\sigma_{W^{+}}/\sigma_{W^{-}}\), (b) \(d\sigma_{Z}/dy_{Z}\), (c) \(d\sigma_{W^{+}}/d\eta_{e}^{+}\) and (d) \(d\sigma_{W^{-}}/d\eta_{e}^{-}\) are compared with the CJ22 calculations. The statistical and the total systematic uncertainties are added in quadrature and shown as the solid error bars for the data points. The solid red lines show the central values from our fit. The red bands correspond to the \(T=1.645\) PDF uncertainty. The differences with respect to CJ15 calculations are minor and the corresponding curves are omitted for visual clarity.
range and the precision of the new data.
The CJ22 fit also included the \(W^{+}\to e^{+}\!/\!W^{-}\to e^{-}\) cross section ratio measured by the STAR collaboration, which, as discussed in the previous Section, provide complementary information on the \(\bar{d}/\bar{u}\) ratio around a smaller \(x\approx 0.16\) value, overlapping with the E866 data but at a higher scale. The quality of the fit to the charge ratio data is shown in the upper left panel of Fig. 2. The remaining panels show a comparison of NLO calculations using the new CJ22 PDFs to the unfitted STAR data on the \(Z\), \(W^{+}\to e^{+}\) and \(W^{-}\to e^{-}\) rapidity distributions. Differences with CJ15 calculations are minor, and the corresponding curves not shown in the plots.
Overall, CJ22 describes reasonably well the \(W\) and \(Z\) measurements. However, there is a suggestion of more structure in the \(W^{-}\to e^{-}\) channel than shown by the theory. One can also note that the highest rapidity point of the \(W^{+}\) cross section is much lower than the corresponding theoretical calculation, which has PDFs strongly constrained by the rest of the world data set and cannot accommodate such a small measurement. Similar features were also observed in other PDF analyses of the STAR data [6; 8; 9]. In fact, reducing the calculated \(W^{+}\) cross section in this region would require substantially increasing the \(d(x_{1})/u(x_{1})\) ratio at large values of \(x_{1}\) and/or decreasing the value of the \(\bar{d}(x_{2})/\bar{u}(x_{2})\) ratio at small values of \(x_{2}\). Both possibilities would cause the fits to the lepton pair production data to be worse. Hence, it has not proven possible to get a good description of this one data point.
Figure 3 shows the impact of the new data on \(d/u\) and \(\bar{d}/\bar{u}\) ratios at a scale of \(Q^{2}=10\) GeV\({}^{2}\). The results are compared with the CJ15 light quark and antiquark ratios. In the CJ15 analysis, the E866 data provided the strongest constraints for the light-antiquarks and the larger-\(x\) region (\(x>0.3\)) was essentially left unconstrained by data. As a result, the CJ15 analysis was performed with a more rigid parametrization of the light-antiquarks, and \(\bar{d}/\bar{u}\) ratio was forced to approach 1 as \(x\to 1\). The new data from SeaQuest adds significant constraints on the \(\bar{d}/\bar{u}\) with a larger reach in \(x\) and allowed us to relax the parametrization used in CJ22, which does not prescribe the large \(x\) behavior of the \(\bar{d}/\bar{u}\) ratio. The CJ22 fit obtains a \(\bar{d}/\bar{u}\) ratio that keeps increasing until \(x\approx 0.25\) in the region where E866 data would have required a sharp drop. At \(x\gtrsim 0.25\), the ratio naturally starts decreasing, but remains above 1 within uncertainties, as is also the case for the \(\bar{d}/\bar{u}\) ratio obtained by the CT [8] and JAM [9] collaborations after inclusion of the SeaQuest data in their global fit. At \(x\lesssim 0.2\) the antiquark ratio is driven by the STAR data slightly below the CJ15 result, but remains compatible with the latter. Turning to the light quark ratio displayed in the left panel of Figure 3, one can see that the CJ15 \(d/u\) ratio remains decidedly above 0 at large \(x\), with a central value extrapolated to \(x=1\) of \(0.09\pm 0.03\). On the contrary, in the CJ22 analysis, the ratio approaches 0 within uncertainties as \(x\to 1\). This is due to the anticorrelation between the \(\bar{d}/\bar{u}\) and \(d/u\) induced by the lepton pair production data, and evidenced in Eq. (22), that was discussed earlier: the increase of \(\bar{d}/\bar{u}\) in the medium \(x\) region, which is allowed by the more flexible CJ15-a and CJ22 parametrizations, and further driven by the increased kinematic reach of the SeaQuest data in the latter, has caused a decrease in \(d/u\) in the large \(x\) region. The CJ15 non-zero \(d/u\) limit was thus the result of a parametrization bias that also underestimated the nominal uncertainty band. With this bias removed, the new result is compatible with the
Figure 3: Comparison of the \(d/u\) (left) and \(\bar{d}/\bar{u}\) (right) at \(Q^{2}=10\) GeV\({}^{2}\) between CJ15 (black solid curve) and CJ22 (red dashed curve) with 90% CL uncertainty bands.
recent \(d/u\) fits performed by Alekhin, Kulagin and Petti [52; 53], that have a similar large-\(x\) theoretical setup and data coverage (except for the SeaQuest and STAR data).
## V Summary
We have presented the results of our recent CJ122 global QCD analysis of parton distributions which included new electroweak data from SeaQuest and STAR. The SeaQuest data, in particular, extends the \(x\) coverage to larger \(x<0.45\) compared to the previous measurement by E866, leading to significant constraints on the \(\bar{d}/\bar{u}\) ratio, allowing the use of the more flexible light-antiquark parametrization discussed in the text. In the CJ22 fit the \(\bar{d}/\bar{u}\) ratio remains near 1 as \(x\to 1\) without having to build that into its parametrization, as was previously done in CJ15. The data are also sensitive to the \(d/u\) light quark ratio, and the interplay between this and \(\bar{d}/\bar{u}\) leads to a \(d/u\) ratio that lies below that found in CJ15 as \(x\to 1\), and is compatible with 0 in that limit.
###### Acknowledgements.
We would like to thank J. Bane, S. Fazio, M. Posik, and A. Tadepalli for informative discussions on their experimental measurements, as well as C. Cocuzza and W. Melnitchouk for useful comments and criticism. This work was supported in part by the U.S. Department of Energy (DOE) contract DE-AC05-06OR23177, under which Jefferson Science Associates LLC manages and operates Jefferson Lab. AA also acknowledges support from DOE contract DE-SC0008791. X. Jing was partially supported by DOE Grant No. DE-SC0010129. S. Park acknowledges support from DOE contract DE-FG02-05ER41372 and the Center for Frontiers in Nuclear Science.
|
2302.11620 | Secondary Hochschild cohomology and derivations | In this paper, we introduce a generalization of derivations. Using these
so-called secondary derivations, along with an analogue of Connes' Long Exact
Sequence, we are able to provide computations in low dimension for the
secondary Hochschild and cyclic cohomologies associated to a commutative
triple. We then establish a universal property, which paves the way to relating
secondary K\"ahler differentials with the aforementioned secondary derivations. | Kylie Bennett, Elizabeth Heil, Jacob Laubacher | 2023-02-22T19:54:41Z | http://arxiv.org/abs/2302.11620v1 | # Secondary Hochschild cohomology and derivations
###### Abstract.
In this paper, we introduce a generalization of derivations. Using these so-called secondary derivations, along with an analogue of Connes' Long Exact Sequence, we are able to provide computations in low dimension for the secondary Hochschild and cyclic cohomologies associated to a commutative triple. We then establish a universal property, which paves the way to relating secondary Kahler differentials with the aforementioned secondary derivations.
Key words and phrases:Hochschild cohomology, cyclic cohomology, derivations 2020 Mathematics Subject Classification: Primary 13D03; Secondary 16E40, 13N15 _Corresponding author_. Jacob Laubacher \(\boxtimes\) [email protected] \(\boxtimes\) 920-403-2961.
Triples have now been studied quite broadly, and examples, extensions, and applications can be found in a number of places (see [1], [2], [3], [4], [5], [8], [9], [13], or [18], for example). When convenient and appropriate, we will denote a triple by \(\mathcal{T}=(A,B,\varepsilon)\), so as to make it easier with notation.
### The secondary Hochschild cohomology
Next we recall the secondary Hochschild cohomology, which was introduced by Staic in [17] in 2016. In that paper, Staic studied the secondary Hochschild cohomology of the triple \((A,B,\varepsilon)\) with coefficients in \(M\), which was used to study deformations of \(A\) that have a nontrivial \(B\)-algebra structure. A few years later in [15], the authors established the secondary Hochschild cohomology associated to the triple \((A,B,\varepsilon)\), done through simplicial structures, many details of which can be found in [12]. This construction is what we will use here.
For notation, we will follow the convention made in [15]. For a triple \((A,B,\varepsilon)\), we define \(\overline{C}^{n}(A,B,\varepsilon)=\operatorname{Hom}_{\Bbbk}(A^{\otimes n+ 1}\otimes B^{\otimes\frac{n(n+1)}{2}},\Bbbk)\). As is customary (see [6], for instance), we view the elements of \(A^{\otimes n+1}\otimes B^{\otimes\frac{n(n+1)}{2}}\) organized as the matrix
\[\otimes\begin{pmatrix}a_{0}&b_{0,1}&b_{0,2}&\cdots&b_{0,n-2}&b_{0,n-1}&b_{0,n} \\ 1&a_{1}&b_{1,2}&\cdots&b_{1,n-2}&b_{1,n-1}&b_{1,n}\\ 1&1&a_{2}&\cdots&b_{2,n-2}&b_{2,n-1}&b_{2,n}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 1&1&1&\cdots&a_{n-2}&b_{n-2,n-1}&b_{n-2,n}\\ 1&1&1&\cdots&1&a_{n-1}&b_{n-1,n}\\ 1&1&1&\cdots&1&1&a_{n}\end{pmatrix},\]
where \(a_{i}\in A\), \(b_{i,j}\in B\), and \(1\in\Bbbk\), and consequently can then determine the coboundary maps \(\partial^{n}:\overline{C}^{n}(A,B,\varepsilon)\longrightarrow\overline{C}^{n +1}(A,B,\varepsilon)\) by
\[\partial^{n}f\Big{(}\otimes\begin{pmatrix}a_{0}&b_{0,1}&\cdots&b_ {0,n}&b_{0,n+1}\\ 1&a_{1}&\cdots&b_{1,n}&b_{1,n+1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 1&1&\cdots&a_{n}&b_{n,n+1}\\ 1&1&\cdots&1&a_{n+1}\end{pmatrix}\Big{)}\] \[=\sum_{i=0}^{n}(-1)^{i}f\Big{(}\otimes\begin{pmatrix}a_{0}&b_{0,1 }&\cdots&b_{0,i}b_{0,i+1}&\cdots&b_{0,n}&b_{0,n+1}\\ 1&a_{1}&\cdots&b_{1,i}b_{1,i+1}&\cdots&b_{1,n}&b_{1,n+1}\\ \vdots&\vdots&\ddots&\vdots&\ddots&\vdots&\vdots\\ 1&1&\cdots&1&\cdots&1&a_{n}&b_{n,n+1}\\ 1&1&\cdots&1&\cdots&1&a_{n+1}\end{pmatrix}\Big{)}\] \[+(-1)^{n+1}f\Big{(}\otimes\begin{pmatrix}a_{n+1}\varepsilon(b_{0, n+1})a_{0}&b_{1,n+1}b_{0,1}&\cdots&b_{n-1,n+1}b_{0,n-1}&b_{n,n+1}b_{0,n}\\ 1&a_{1}&\cdots&b_{1,n-1}&b_{1,n}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 1&1&\cdots&a_{n-1}&b_{n-1,n}\\ 1&1&\cdots&1&a_{n}\end{pmatrix}\Big{)}\]
for all \(n\geq 0\). It was shown in [15] that \(\partial^{n+1}\circ\partial^{n}=0\), and we denote the induced complex by \(\overline{\mathbf{C}}^{\bullet}(A,B,\varepsilon)\).
**Definition 2.2**.: ([15]) The cohomology of the chain complex \(\overline{\mathbf{C}}^{\bullet}(A,B,\varepsilon)\) is called the **secondary Hochschild cohomology associated to the triple \((A,B,\varepsilon)\)**, and this is denoted by \(HH^{*}(A,B,\varepsilon)\).
_Remark 2.3_.: By taking \(B=\Bbbk\), one can easily see how the secondary Hochschild cohomology associated to the triple \((A,B,\varepsilon)\) reduces to the usual Hochschild cohomology associated to \(A\). In notation, we have that \(HH^{n}(A,\Bbbk,\varepsilon)=HH^{n}(A)\) for all \(n\geq 0\).
Most meaningfully, we will focus on the chain complex \(\overline{\mathbf{C}}^{\bullet}(A,B,\varepsilon)\) in low dimension. Specifically, we have
\[0\longrightarrow\operatorname{Hom}_{\Bbbk}(A,\Bbbk)\xrightarrow{\partial^{0 }}\operatorname{Hom}_{\Bbbk}(A^{\otimes 2}\otimes B,\Bbbk)\xrightarrow{\partial^{1}} \operatorname{Hom}_{\Bbbk}(A^{\otimes 3}\otimes B^{\otimes 3},\Bbbk) \xrightarrow{\partial^{2}}\ldots\]
such that
\[\partial^{0}f\Big{(}\otimes\begin{pmatrix}a&\alpha\\ 1&b\end{pmatrix}\Big{)}=f(a\varepsilon(\alpha)b)-f(b\varepsilon(\alpha)a)\]
and
### The secondary cyclic cohomology
Connes introduced cyclic cohomology in [7], and one can see [16] for more details. Here we recall the analogous secondary version that was introduced in [15]. We start by considering the permutation \(\lambda=(0,1,2,\ldots,n)\) and the cyclic group \(C_{n+1}=\langle\lambda\rangle\). Notice that \(C_{n+1}\) has a natural action on \(\overline{C}^{n}(A,B,\varepsilon)\) given by
\[\lambda f\Big{(}\otimes\begin{pmatrix}a_{0}&b_{0,1}&\cdots&b_{0,n-1}&b_{0,n} \\ 1&a_{1}&\cdots&b_{1,n-1}&b_{1,n}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 1&1&\cdots&a_{n-1}&b_{n-1,n}\\ 1&1&\cdots&1&a_{n}\end{pmatrix}\Big{)}=(-1)^{n}f\Big{(}\otimes\begin{pmatrix}a _{n}&b_{0,n}&\cdots&b_{n-2,n}&b_{n-1,n}\\ 1&a_{0}&\cdots&b_{0,n-2}&b_{0,n-1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 1&1&\cdots&a_{n-2}&b_{n-2,n-1}\\ 1&1&\cdots&1&a_{n-1}\end{pmatrix}\Big{)}.\]
We can then consider the new complex built by setting \(\overline{C}^{n}_{\lambda}(A,B,\varepsilon)=\operatorname{Ker}(1-\lambda)\), where we continue to employ the maps \(\partial^{n}\) from Section 2.1. We then get the following definition.
**Definition 2.4**.: ([15]) The cohomology of the chain complex \(\overline{\mathbf{C}}^{\bullet}_{\lambda}(A,B\varepsilon)\) is called the **secondary cyclic cohomology associated to the triple \((A,B,\varepsilon)\)**, and this is denoted by \(HC^{*}(A,B,\varepsilon)\).
As is predictable, we now recall Connes' long exact sequence for the secondary case.
**Theorem 2.5**.: ([15]) _Let \(\Bbbk\) be a field of characteristic zero. For a triple \((A,B,\varepsilon)\), we have the long exact sequence_
\[\ldots\xrightarrow{I^{*}}HH^{n}(A,B,\varepsilon)\xrightarrow{B^{*}}HC^{n-1}( A,B,\varepsilon)\xrightarrow{S^{*}}HC^{n+1}(A,B,\varepsilon)\xrightarrow{I^{*}}HH^{n+1}(A,B, \varepsilon)\xrightarrow{B^{*}}\ldots\]
### Secondary Kahler differentials
This subsection follows [14], where in that paper we saw how the secondary Hochschild homology associated to a commutative triple corresponded to a generalization of Kahler differentials.
**Definition 2.6**.: ([14]) For a commutative triple \(\mathcal{T}=(A,B,\varepsilon)\), denote \(\Omega^{1}_{\mathcal{T}|\Bbbk}\) to be the left \(B\otimes A\)-module of **secondary Kahler differentials** generated by the \(\Bbbk\)-linear symbols \(d(\alpha\otimes a)\) for \(\alpha\in B\) and \(a\in A\) with the module structure of \((\alpha\otimes a)\cdot d(\beta\otimes b)=a\varepsilon(\alpha)d(\beta\otimes b)\), along with the relations:
1. \(d(\lambda(\alpha\otimes a)+\mu(\beta\otimes b))=\lambda d(\alpha\otimes a)+ \mu d(\beta\otimes b)\),
2. \(d((\alpha\otimes a)(\beta\otimes b))=a\varepsilon(\alpha)d(\beta\otimes b)+ b\varepsilon(\beta)d(\alpha\otimes a)\), and
3. \(d(\alpha\otimes 1)+d(\alpha\otimes 1)=d(1\otimes\varepsilon(\alpha))\)
for all \(a,b\in A\), \(\alpha,\beta\in B\), and \(\lambda,\mu\in\Bbbk\).
One of the goals of this paper is to showcase how the secondary Kahler differentials are indeed nontrivial by way of derivations.
### The universal derivation
Finally we recall some facts related to derivations that can be found in such foundational texts like [16] or [19]. These results are classical and commonly recounted as folklore. Furthermore, the goal of Section 4 will be to get similar results as these, but for the secondary case.
**Definition 2.7**.: The derivation \(D:A\longrightarrow M\) is said to be **universal** if for any other derivation \(\delta:A\longrightarrow N\) there exists a unique \(A\)-linear map \(\varphi:M\longrightarrow N\) such that \(\delta=\varphi\circ D\). In other words, the following diagram commutes:
**Proposition 2.8**.: _For \(A\) commutative, we have that the map \(d:A\longrightarrow\Omega^{1}_{A|\Bbbk}\) given by_
\[a\longmapsto d(a)\]
_is the universal derivation._
**Proposition 2.9**.: _For \(A\) commutative, we have that_
\[\operatorname{Hom}_{A}(\Omega^{1}_{A|\Bbbk},M)\longrightarrow\operatorname{ Der}_{\Bbbk}(A,M)\]
_given by \(f\longmapsto f\circ d\) is an isomorphism._
## 3. Secondary Derivations
In the usual case, one has the classic result of \(HH^{1}(A)\cong\operatorname{Der}_{\Bbbk}(A,A^{*})\), where \(HH^{1}(A)\) denotes the Hochschild cohomology of \(A\) in dimension \(1\), and \(\operatorname{Der}_{\Bbbk}(A,A^{*})\) denotes the set of all \(\Bbbk\)-linear derivations from \(A\) to \(A^{*}\). Furthermore, one can also conclude that \(HC^{1}(A)\cong\operatorname{Der}^{1}_{\Bbbk}(A,A^{*})\), where the superscript \(1\) means we add the condition \(D(a\otimes b)=-D(b\otimes a)\) to the aforementioned set \(\operatorname{Der}_{\Bbbk}(A,A^{*})\).
The main goal of this section is to get results in the secondary case that corresponds to the above.
**Definition 3.1**.: A **secondary derivation** of the commutative triple \(\mathcal{T}=(A,B,\varepsilon)\) with values in \(M\) is a \(\Bbbk\)-linear map \(D:B\otimes A\longrightarrow M\) with the symmetric bimodule structure of \((\alpha\otimes a)\cdot D(\beta\otimes b)=a\varepsilon(\alpha)D(\beta\otimes b )=D(\beta\otimes b)a\varepsilon(\alpha)=D(\beta\otimes b)\cdot(\alpha\otimes a)\) such that
1. \(D(\lambda(\alpha\otimes a)+\mu(\beta\otimes b))=\lambda D(\alpha\otimes a)+\mu D (\beta\otimes b)\),
2. \(D((\alpha\otimes a)(\beta\otimes b))=a\varepsilon(\alpha)D(\beta\otimes b)+D (\alpha\otimes a)b\varepsilon(\beta)\), and
3. \(D(\alpha\otimes 1)+D(\alpha\otimes 1)=D(1\otimes\varepsilon(\alpha))\)
for all \(a,b\in A\), \(\alpha,\beta\in B\), and \(\lambda,\mu\in\Bbbk\). The set of all such secondary derivations is denoted by \(\operatorname{Der}_{\Bbbk}(\mathcal{T},M)\).
Notice that we have \((\alpha\otimes a)(\beta\otimes b)=\alpha\beta\otimes ab\), and therefore
\[D(\alpha\beta\otimes ab)=a\varepsilon(\alpha)D(\beta\otimes b)+D(\alpha \otimes a)b\varepsilon(\beta).\]
As consequence, it is then immediate that
\[D(1\otimes ab) =aD(1\otimes b)+D(1\otimes a)b,\] \[D(\alpha\beta\otimes 1) =\varepsilon(\alpha)D(\beta\otimes 1)+D(\alpha\otimes 1) \varepsilon(\beta),\text{and}\] \[D(\alpha\otimes a) =\varepsilon(\alpha)D(1\otimes a)+D(\alpha\otimes 1)a.\]
One could wonder if secondary derivations have a Lie algebra structure. This could be an avenue worth pursuing in future work.
_Remark 3.2_.: It is easy to see that \(D(1\otimes 1)=0\), and hence \(D(\lambda\otimes\mu)=0\) for all \(\lambda,\mu\in\Bbbk\). Furthermore, we note that the first two conditions of Definition 3.1 make \(D\) a derivation of the commutative algebra \(B\otimes A\) with a symmetric bimodule structure, while the third condition is additional.
_Remark 3.3_.: Under the identification of \(B=\Bbbk\), notice that secondary derivations become the usual derivations \(\operatorname{Der}_{\Bbbk}(A,M)\) for the commutative \(\Bbbk\)-algebra \(A\) into the \(A\)-symmetric bimodule \(M\). In particular, one can see that the final condition in Definition 3.1 becomes trivial when \(B=\Bbbk\).
**Example 3.4**.: With \(A=M=\Bbbk[x]\), \(B=\Bbbk[x^{2}]\), and \(\iota:\Bbbk[x^{2}]\longrightarrow\Bbbk[x]\) given by inclusion, we claim that for the commutative triple \(\mathcal{T}=(\Bbbk[x],\Bbbk[x^{2}],\iota)\), its corresponding set of secondary derivations \(\operatorname{Der}_{\Bbbk}(\mathcal{T},\Bbbk[x])\) is nontrivial. Note \(D\in\operatorname{Der}_{\Bbbk}(\mathcal{T},\Bbbk[x])\) when we define \(D(g\otimes 1)=g^{\prime}/2\), \(D(1\otimes h)=h^{\prime}\), and \(D(g\otimes h)=gh^{\prime}+hg^{\prime}/2\) for \(g\in\Bbbk[x^{2}]\) and \(h\in\Bbbk[x]\).
Before we turn our attention to the case when \(M=A^{*}=\operatorname{Hom}_{\Bbbk}(A,\Bbbk)\), there are a couple straightforward computations and an observation that will prove useful.
**Example 3.5**.: For any triple \((A,B,\varepsilon)\), it is easy to see that
\[HH^{0}(A,B,\varepsilon)=HH^{0}(A)=(A^{*})^{A}=\{f:A\longrightarrow\Bbbk\ |\ f(ab)=f(ba)\text{ for all }a,b\in A\}.\]
Furthermore, using Theorem 2.5, it is immediate that \(HC^{0}(A,B,\varepsilon)\cong HH^{0}(A,B,\varepsilon)\). Of particular interest is when we have a commutative triple \((A,B,\varepsilon)\), we get that the map \(\partial^{0}\equiv 0\). This implies that \(\operatorname{Im}(\partial^{0})\) is trivial.
**Theorem 3.6**.: _For a commutative triple \(\mathcal{T}=(A,B,\varepsilon)\), we have that_
\[HH^{1}(A,B,\varepsilon)\cong\operatorname{Der}_{\Bbbk}(\mathcal{T},A^{*}).\]
Proof.: As stated in Example 3.5, since \(A\) is commutative, we get that \(\operatorname{Im}(\partial^{0})\) is trivial, and so \(HH^{1}(A,B,\varepsilon)=\operatorname{Ker}(\partial^{1})\). Furthermore, \(\operatorname{Ker}(\partial^{1})\) consists of all k-linear maps from \(A^{\otimes 2}\otimes B\) to \(\Bbbk\) such that
\[f\Big{(}\otimes\begin{pmatrix}a\varepsilon(\alpha)b&\beta\gamma\\ 1&c\end{pmatrix}\Big{)}-f\Big{(}\otimes\begin{pmatrix}a&\alpha\beta\\ 1&b\varepsilon(\gamma)c\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}c \varepsilon(\beta)a&\alpha\gamma\\ 1&b\end{pmatrix}\Big{)}=0. \tag{3.1}\]
Next, we observe that under the identification \(M=A^{*}\), we have that \(\operatorname{Der}_{\Bbbk}(\mathcal{T},A^{*})\) also consists of \(\Bbbk\)-linear maps from \(A^{\otimes 2}\otimes B\) to \(\Bbbk\), but has the following two relations:
\[\begin{split} D\Big{(}\otimes\begin{pmatrix}a&\alpha\beta\\ 1&bc\end{pmatrix}\Big{)}=D\Big{(}\otimes\begin{pmatrix}ab\varepsilon(\alpha)& \beta\\ 1&c\end{pmatrix}\Big{)}+D\Big{(}\otimes\begin{pmatrix}ca\varepsilon(\beta)& \alpha\\ 1&b\end{pmatrix}\Big{)}\text{ and }\\ D\Big{(}\otimes\begin{pmatrix}a&\gamma\\ 1&1\end{pmatrix}\Big{)}+D\Big{(}\otimes\begin{pmatrix}a&\gamma\\ 1&1\end{pmatrix}\Big{)}=D\Big{(}\otimes\begin{pmatrix}a&1\\ 1&\varepsilon(\gamma)\end{pmatrix}\Big{)}.\end{split} \tag{3.2}\]
To get our desired isomorphism, it is sufficient to show (3.1) implies (3.2), as well as the converse.
Supposing (3.1), it is easy to see how (3.2) is satisfied: simply take \(\gamma=1_{B}\) to get the first equation, while taking \(b=c=1_{A}\) and \(\alpha=\beta=1_{B}\) obtains the second equation.
On the other hand, suppose (3.2). Notice that we have
\[\begin{split} f\Big{(}\otimes\begin{pmatrix}a&\alpha\beta\\ 1&bc\varepsilon(\gamma)\end{pmatrix}\Big{)}&=f\Big{(}\otimes\begin{pmatrix} a\varepsilon(\alpha\beta)&1\\ 1&bc\varepsilon(\gamma)\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}abc \varepsilon(\gamma)&\alpha\beta\\ 1&1\end{pmatrix}\Big{)}\\ &=f\Big{(}\otimes\begin{pmatrix}a\varepsilon(\alpha\beta)bc&1\\ 1&\varepsilon(\gamma)\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}a \varepsilon(\alpha\beta)b\varepsilon(\gamma)&1\\ 1&c\end{pmatrix}\Big{)}\\ &+f\Big{(}\otimes\begin{pmatrix}a\varepsilon(\alpha\beta)c\varepsilon(\gamma )&1\\ 1&b\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}abc\varepsilon(\gamma) \varepsilon(\alpha)&\beta\\ 1&1\end{pmatrix}\Big{)}\\ &+f\Big{(}\otimes\begin{pmatrix}abc\varepsilon(\gamma)\varepsilon(\beta)& \alpha\\ 1&1\end{pmatrix}\Big{)}\\ &=f\Big{(}\otimes\begin{pmatrix}a\varepsilon(\alpha\beta)bc&\gamma\\ 1&1\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}a\varepsilon(\alpha \beta)bc&\gamma\\ 1&1\end{pmatrix}\Big{)}\\ &+f\Big{(}\otimes\begin{pmatrix}a\varepsilon(\alpha\beta)b\varepsilon(\gamma)& 1\\ 1&c\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}a\varepsilon(\alpha\beta)c \varepsilon(\gamma)&1\\ 1&b\end{pmatrix}\Big{)}\\ &+f\Big{(}\otimes\begin{pmatrix}abc\varepsilon(\gamma\alpha)&\beta\\ 1&1\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}abc\varepsilon(\gamma \beta)&\alpha\\ 1&1\end{pmatrix}\Big{)}\\ &=f\Big{(}\otimes\begin{pmatrix}ab\varepsilon(\alpha)c&\beta\gamma\\ 1&1\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}ab\varepsilon(\alpha) \varepsilon(\beta\gamma)&1\\ 1&c\end{pmatrix}\Big{)}\\ &+f\Big{(}\otimes\begin{pmatrix}ac\varepsilon(\beta)b&\alpha\gamma\\ 1&1\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}ac\varepsilon(\beta) \varepsilon(\alpha\gamma)&1\\ 1&b\end{pmatrix}\Big{)}\\ &=f\Big{(}\otimes\begin{pmatrix}ab\varepsilon(\alpha)&\beta\gamma\\ 1&c\end{pmatrix}\Big{)}+f\Big{(}\otimes\begin{pmatrix}ca\varepsilon(\beta)& \alpha\gamma\\ 1&b\end{pmatrix}\Big{)},\end{split}\]
which was what we wanted. Thus, the isomorphism follows.
**Corollary 3.7**.: _For a commutative triple \(\mathcal{T}=(A,B,\varepsilon)\), we have that_
\[HC^{1}(A,B,\varepsilon)\cong\operatorname{Der}^{1}_{\Bbbk}(\mathcal{T},A^{*}),\]
_where the superscript \(1\) denotes we add the condition_
\[D\Big{(}\otimes\begin{pmatrix}a&\alpha\\ 1&b\end{pmatrix}\Big{)}=-D\Big{(}\otimes\begin{pmatrix}b&\alpha\\ 1&a\end{pmatrix}\Big{)}\]
_to \(\operatorname{Der}_{\Bbbk}(\mathcal{T},A^{*})\)._
Proof.: First we observe that \(\mathcal{T}=(A,B,\varepsilon)\) is a commutative triple, and so Proposition 3.6 will apply when we use Theorem 2.5 in low dimension. Specifically, we have that
\[0\longrightarrow HC^{1}(A,B,\varepsilon)\xrightarrow{I^{*}}\operatorname{ Der}_{\Bbbk}(\mathcal{T},A^{*})\xrightarrow{B^{*}}\dots\]
Thus, by exactness and the first isomorphism theorem, one gets
\[HC^{1}(A,B,\varepsilon)\cong\frac{HC^{1}(A,B,\varepsilon)}{\{0\}}=\frac{HC^{1} (A,B,\varepsilon)}{\operatorname{Ker}(I^{*})}\cong\operatorname{Im}(I^{*}) \subseteq\operatorname{Der}_{\Bbbk}(\mathcal{T},A^{*}).\]
Therefore, since \(I^{*}\) is induced by inclusion, we have that \(\operatorname{Im}(I^{*})\) contains the cyclic maps described in Section 2.2, which are those that satisfy
\[D\Big{(}\otimes\begin{pmatrix}a&\alpha\\ 1&b\end{pmatrix}\Big{)}=-D\Big{(}\otimes\begin{pmatrix}b&\alpha\\ 1&a\end{pmatrix}\Big{)},\]
as desired.
_Remark 3.8_.: Notice that Theorem 3.6 and Corollary 3.7 reduce to the expected classical results when we take \(B=\Bbbk\), as described at the start of Section 3.
## 4. The Universal Property
The purpose of this section is twofold: to highlight a universal property for the secondary derivations (introduced in Definition 3.1), and to show that the secondary Kahler differentials from [14] are nontrivial (see Definition 2.6). In particular, the results from this section will run parallel to what was recalled in Section 2.4.
**Definition 4.1**.: The secondary derivation \(D:B\otimes A\longrightarrow M\) is said to be **universal** if for any other secondary derivation \(\delta:B\otimes A\longrightarrow N\) there exists a unique \(A\)-linear map \(\varphi:M\longrightarrow N\) such that \(\delta=\varphi\circ D\). In other words, the following diagram commutes:
**Proposition 4.2**.: _For a commutative triple \(\mathcal{T}=(A,B,\varepsilon)\), we have that the map_
\[d:B\otimes A\longrightarrow\Omega^{1}_{\mathcal{T}|\Bbbk}\]
_given by \(\alpha\otimes a\longmapsto d(\alpha\otimes a)\) is the universal secondary derivation._
Proof.: In order to verify this map is a secondary derivation, there are three conditions to check from Definition 3.1. However, these all follow immediately from the definition of secondary Kahler differentials \(\Omega^{1}_{\mathcal{T}|\Bbbk}\) (see Definition 2.6), and because \(A\) is commutative. Thus \(d:B\otimes A\longrightarrow\Omega^{1}_{\mathcal{T}|\Bbbk}\) is a secondary derivation.
The map is also universal since for any derivation \(\delta:B\otimes A\longrightarrow N\) there is a unique map \(\varphi:\Omega^{1}_{\mathcal{T}|\Bbbk}\longrightarrow N\) determined by \(d(\alpha\otimes a)\longmapsto\delta(\alpha\otimes a)\). It is then immediate that \(\delta=\varphi\circ d\) by construction.
**Proposition 4.3**.: _For a commutative triple \(\mathcal{T}=(A,B,\varepsilon)\), we have that_
\[\operatorname{Hom}_{A}(\Omega^{1}_{\mathcal{T}|\Bbbk},M)\longrightarrow \operatorname{Der}_{\Bbbk}(\mathcal{T},M)\]
_given by \(f\longmapsto f\circ d\) is an isomorphism._
Proof.: We first note that the domain consists of \(A\)-linear morphisms, which will play a role below. Next we show that \(f\circ d\) is a secondary derivation. From Definition 3.1, there are three conditions to check; condition (i) is clear, but for (ii) we have
\[(f\circ d)((\alpha\otimes a)(\beta\otimes b)) =f(d((\alpha\otimes a)(\beta\otimes b)))\] \[=f(a\varepsilon(\alpha)d(\beta\otimes b)+b\varepsilon(\beta)d( \alpha\otimes a))\] \[=f(a\varepsilon(\alpha)d(\beta\otimes b))+f(b\varepsilon(\beta) d(\alpha\otimes a))\] \[=a\varepsilon(\alpha)f(d(\beta\otimes b))+b\varepsilon(\beta)f( d(\alpha\otimes a))\] \[=a\varepsilon(\alpha)(f\circ d)(\beta\otimes b)+b\varepsilon( \beta)(f\circ d)(\alpha\otimes a),\]
and for (iii) we have
\[(f\circ d)(\alpha\otimes 1)+(f\circ d)(\alpha\otimes 1) =f(d(\alpha\otimes 1))+f(d(\alpha\otimes 1))\] \[=f(d(\alpha\otimes 1)+d(\alpha\otimes 1))\] \[=f(d(1\otimes\varepsilon(\alpha)))\] \[=(f\circ d)(1\otimes\varepsilon(\alpha)).\]
Thus, \(f\circ d\) is a secondary derivation. The isomorphism follows from the universality of \(\Omega^{1}_{\mathcal{T}|\Bbbk}\), coming from Proposition 4.2.
_Remark 4.4_.: By taking \(B=\Bbbk\), note that these results will reduce to the usual case, as described in Section 2.4.
_Remark 4.5_.: Due to the isomorphism in Proposition 4.3 and by Example 3.4 (for instance), we conclude that the secondary Kahler differentials \(\Omega^{1}_{\mathcal{T}|\Bbbk}\) are nontrivial.
|
2303.06075 | Long-tailed Classification from a Bayesian-decision-theory Perspective | Long-tailed classification poses a challenge due to its heavy imbalance in
class probabilities and tail-sensitivity risks with asymmetric misprediction
costs. Recent attempts have used re-balancing loss and ensemble methods, but
they are largely heuristic and depend heavily on empirical results, lacking
theoretical explanation. Furthermore, existing methods overlook the decision
loss, which characterizes different costs associated with tailed classes. This
paper presents a general and principled framework from a
Bayesian-decision-theory perspective, which unifies existing techniques
including re-balancing and ensemble methods, and provides theoretical
justifications for their effectiveness. From this perspective, we derive a
novel objective based on the integrated risk and a Bayesian deep-ensemble
approach to improve the accuracy of all classes, especially the "tail".
Besides, our framework allows for task-adaptive decision loss which provides
provably optimal decisions in varying task scenarios, along with the capability
to quantify uncertainty. Finally, We conduct comprehensive experiments,
including standard classification, tail-sensitive classification with a new
False Head Rate metric, calibration, and ablation studies. Our framework
significantly improves the current SOTA even on large-scale real-world datasets
like ImageNet. | Bolian Li, Ruqi Zhang | 2023-03-10T16:53:51Z | http://arxiv.org/abs/2303.06075v2 | # Long-tailed Classification from a Bayesian-decision-theory Perspective
###### Abstract
Long-tailed classification poses a challenge due to its heavy imbalance in class probabilities and tail-sensitivity risks with asymmetric misprediction costs. Recent attempts have used re-balancing loss and ensemble methods, but they are largely heuristic and depend heavily on empirical results, lacking theoretical explanation. Furthermore, existing methods overlook the decision loss, which characterizes different costs associated with tailed classes. This paper presents a general and principled framework from a Bayesian-decision-theory perspective, which unifies existing techniques including re-balancing and ensemble methods, and provides theoretical justifications for their effectiveness. From this perspective, we derive a novel objective based on the integrated risk and a Bayesian deep-ensemble approach to improve the accuracy of all classes, especially the "tail". Besides, our framework allows for task-adaptive decision loss which provides provably optimal decisions in varying task scenarios, along with the capability to quantify uncertainty. Finally, We conduct comprehensive experiments, including standard classification, tail-sensitive classification with a new False Head Rate metric, calibration, and ablation studies. Our framework significantly improves the current SOTA even on large-scale real-world datasets like ImageNet.
Machine Learning, Bayesian-decision-theory, Bayesian-decision-theory
## 1 Introduction
Machine learning methods usually assume that training and testing data are both i.i.d. (independent and identically distributed) sampled from the same data distribution. However, this is not always true for real-world scenarios (Hand, 2006). One example is _long-tailed classification_(Reed, 2001; Lin et al., 2014; Van Horn and Perona, 2017; Krishna et al., 2017; Liu et al., 2019; Wang et al., 2020; Li et al., 2022), where the training data is biased towards a few "head" classes, while the "tailed" classes have fewer samples, resulting in a "long-tailed" distribution of class probabilities. The long-tailed problem is mainly due to the process of collecting data, which is unavoidably biased. Conventional models trained on long-tailed data often report significant performance drops compared with the results obtained on balanced training data (Wang et al., 2022). Besides, for some real-world applications, the risk of classifying tailed samples as head (which is a common type of mistakes) is obviously more severe than that of classifying head samples as tail (which is less common) (Sengupta et al., 2016; Rahman et al., 2021; Yang et al., 2022). The significant performance drop and the "tail-sensitivity risk" limit the application of ML models to long-tailed classification.
Existing works usually re-balance the loss function to promote the accuracy of tail classes, which typically re-weights the loss function by a factor \(1/f(n_{y})\), where \(f(\cdot)\) is an increasing function and \(n_{y}\) refers to the number of samples in the class \(y\)(Lin et al., 2017; Cao et al., 2019; Cui et al., 2019; Wu et al., 2020). Their re-weighting strategy compensates for the lack of training samples in tailed classes, but suffers from sub-optimal head class accuracies. Other attempts on ensemble models try to reduce the model variance to promote the head and tail accuracies at the same time (Wang et al., 2020; Li et al., 2022). Despite the effectiveness of existing works, they suffer from significant limitations: **i)** their algorithm design is largely based on empirical results without adequate theoretical explanation; **ii)** they do not consider the decision loss, which represents the application-related risks (e.g., the tail-sensitivity risk) in the real world and thus their models are not applicable to tasks with different metrics other than standard classification task with accuracy; **iii)** most methods do not quantify uncertainty in their predictions, which reduces their reliability. These limitations undermine the potential of existing works for real-world long-tailed data.
In this paper, we propose a unified framework for long-tailed classification, rooted in _Bayesian Decision Theory_(Berger, 1985; Robert et al., 2007). Our framework unifies existing methods and provides theoretical justifications for their effectiveness, including re-balancing loss and ensemble methods, which have been shown to achieve promising results. To derive our framework, we first introduce a new objective
based on the _integrated risk_ which unifies three crucial components in long-tailed problems: data distribution, decision loss, and posterior inference. To minimize this objective, we then derive a tractable lower bound based on variational EM (Lacoste-Julien et al., 2011) and approximate the posterior by a particle-based ensemble model (Liu and Wang, 2016; D'Angelo and Fortuin, 2021). Furthermore, we design two kinds of _utility functions_ for the standard and tail-sensitive classifications respectively, which enables real-world applications with tail-sensitivity risks. Finally, we conduct comprehensive experiments on three long-tailed datasets under various evaluation metrics to demonstrate the superiority of our method in general settings. We summarize our contributions as follows:
* _Long-tailed Bayesian Decision_ (LBD) is the first to formulate long-tailed classification under Bayesian Decision Theory, providing a fresh perspective and unifying three key components of long-tailed problems (data distribution, posterior inference and decision making) within a single principled framework.
* We propose a new objective based on _integrated risk_, which exploits variational approximation, utility functions and an efficient particle-based BNN model. It significantly enhances the flexibility (task-adaptive utility functions) and reliability (uncertainty quantification) of our framework.
* For real-world applications, we take the decision loss into account, extending our method to more realistic long-tailed problems where the risk of wrong predictions varies and depends on the type of classes (e.g., head or tail). We also design a new metric (False Head Rate) to evaluate this kind of risk accordingly.
* We conduct comprehensive experiments including generalization accuracy, uncertainty estimation, and a newly designed False Head Rate (FHR) to show the effectiveness of our method on various tasks. In particular, our method outstrips the SOTA in large-scale real-world datasets like ImageNet on all metrics.
## 2 Related Works
Long-tailed Classification.To overcome long-tailed class distributions, over-sampling (Han et al., 2005) uses generated data to compensate the tailed classes, under-sampling (Liu et al., 2008) splits the imbalanced dataset into multiple balanced subsets, and data augmentation (Chu et al., 2020; Kim et al., 2020; Liu et al., 2020) introduces random noise to promote model's robustness. Recent advances focus on improving training loss functions and model architectures. For example, re-weighting methods (Cao et al., 2019; Cui et al., 2019; Lin et al., 2017; Menon et al., 2020; Wu et al., 2020; Mahajan et al., 2018) adjust the loss function by class probabilities in the training data, OLTR (Liu et al., 2019) transfers the knowledge learned from head classes to the learning of tailed classes, LFME (Xiang et al., 2020) uses multiple teacher models to learn relatively balanced subsets of training data, RIDE (Wang et al., 2020) develops a multi-expert framework to promote the overall performances with ensemble model, and TLC (Li et al., 2022) exploits the evidential uncertainty to optimize the multi-expert framework. Besides, SRepr (Nam et al., 2023) explores Gaussian noise in stochastic weight averaging to obtain stochastic representation, and SADE (Zhang et al., 2022) considers the case of non-uniform testing distributions in long-tailed problems. These methods are largely designed based on empirical heuristics, and thus their performances are not explainable and guaranteed. In contrast, our method is rooted in Bayesian principle and decision theory, inheriting their theoretical guarantees and explanation.
Bayesian Decision Theory.Bayesian Decision Theory is introduced in Robert et al. (2007); Berger (1985). It provides a bridge which connects posterior inference, decision, and data distribution. Loss-calibrated EM (Lacoste-Julien et al., 2011) exploits the posterior risk (Schervish, 2012) to simultaneously consider inference and decision. Cobb et al. (2018) further extends this method using dropout-based Bayesian neural networks. Loss-EP (Morais and Pillow, 2022) applies the technique of loss calibration in expectation propagation. Post-hoc Loss-calibration (Vadera et al., 2021) develop an inference-agnostic way to learn the high-utility decisions. These methods all use the notion of utility to represent their prior knowledge about the application-related risks, and exploit the posterior risk. While they show great advantages in some applications, none of them consider the data distribution, which prevents their applications on long-tailed data. Our method overcomes this limitation by introducing the integrated risk, which unifies data distributions, inference, and decision-making.
Ensemble and Particle Optimization.Ensemble models combine several individual deep models to obtain better generalization performances (Lakshminarayanan et al., 2017; Ganaie et al., 2021), which is inspired by the observation that multiple i.i.d. initializations are less likely to generate averagely "bad" models (Dietterich, 2000). Ensemble models can also be used to approximate the posterior with the technique of _particle optimization_, which is first studied in Stein variational gradient descent (SVGD, Liu and Wang (2016)) and then explored by Liu et al. (2019); Korba et al. (2020); D'Angelo and Fortuin (2020). Liu (2017) analyzes SVGD in a gradient-flow perspective. Wang et al. (2018) performs the particle optimization directly in the function space. Chen et al. (2018); Liu et al. (2018) put the particle optimization in the 2-Wasserstein space. D'Angelo and
Fortuin (2021) implements particles by introducing a repulsive force in the gradient flow. Instead of directly modeling the gradient flow, our framework optimizes the particles through stochastic gradient descent (SGD, Bottou (1998)), with repulsive force induced by the integrated risk objective. Compared to existing particle optimization, our method is easy and cheap to implement, which is especially beneficial for large deep models.
## 3 Background
### Long-tailed Distribution
Long-tailed distributed data is a special case of _dataset shift_(Quinonero-Candela et al., 2008), in which the common assumption is violated that the training and testing data follow the same distribution (Moreno-Torres et al., 2012). For the long-tailed scenario studied in this paper, the training data \(\mathcal{D}_{train}\) is distributed in a descending manner over categories in terms of class probability:
\[p(\mathbf{x}_{1},y_{1}=k_{1})\geq p(\mathbf{x_{2}},y_{2}=k_{2}),\ \ \text{if}\ k_{1}\leq k _{2} \tag{1}\]
for all \((\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2})\in\mathcal{D}_{train}\), while the testing data \(\mathcal{D}_{test}\) is assumed to be distributed uniformly over categories:
\[p(\mathbf{x}_{1},y_{1}=k_{1})=p(\mathbf{x}_{2},y_{2}=k_{2}) \tag{2}\]
for all \((\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2})\in\mathcal{D}_{test}\). One important feature of long-tailed distribution is that both training and testing data are semantically identical1, and the only difference lies in class probabilities.
Footnote 1: In contrast to the open-set scenario (Geng et al., 2020), where additional classes may cause testing data to be semantically irrelevant to training data.
### Bayesian Decision Theory
Bayesian Decision Theory is a general statistical approach and can be applied to the task of pattern classification (Berger, 1985; Robert et al., 2007). In standard Bayesian inference, for a training dataset \(\mathcal{D}_{train}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) and a \(\theta\)-parameterized model, we have the likelihood \(\prod_{i}p(y_{i}|\mathbf{x}_{i},\mathbf{\theta})\) and prior \(p(\mathbf{\theta})\). The Bayesian approach tries to estimate the posterior \(p(\mathbf{\theta}|\mathcal{D}_{train})=p(\mathbf{\theta})\prod_{i}p(y_{i}|\mathbf{x}_{i}, \mathbf{\theta})/\prod_{i}p(y_{i}|\mathbf{x}_{i})\). For test data \(x^{*}\), we obtain the predictive distribution \(p(y^{*}|\mathbf{x}^{*},\mathcal{D}_{train})=\int_{\theta}p(y^{*}|\mathbf{x}^{*},\mathbf{ \theta})p(\mathbf{\theta}|\mathcal{D}_{train})d\mathbf{\theta}\) by averaging over all possible models weighted by their posterior probabilities. The Bayesian Decision Theory further considers the utility of making different decisions and the data distribution, which bridges the posterior inference, data distribution, and decision-making in a unified manner. For example, the _posterior risk_ is defined by the decision losses averaged over the posterior, and the _integrated risk_ further considers the data distribution. Bayesian Decision Theory has theoretical guarantees on the results and is provable to provide a desirable decision. For example, Robert et al. (2007) illustrates two kinds of optimalities, _minimaxity_ and _admissibility_, of the model that minimizes the integrated risk. Therefore, models following Bayesian Decision Theory are expected to have smaller risks than models trained in other ways. We in this paper simultaneously consider posterior, decision loss, and long-tailed data distribution in a unified Bayesian framework.
## 4 Long-tailed Bayesian Decision
For conventional frameworks in long-tailed classification, one crucial challenge is that: **inference** (how to infer model parameters), **decision** (model's actions in the presence of application-related risks), and **data distribution** (long-tailed distribution) are independent from each other in the training phase (Lacoste-Julien et al., 2011). To the best of our knowledge, none of previous methods can simultaneously consider these three aspects. In order to address this drawback, we introduce the _integrated risk_ from Bayesian Decision Theory, which is computed over the posterior \(p(\mathbf{\theta}|\mathcal{D})\) and the data distribution \(p(\mathbf{x},y)\):
\[R(d):=\mathbb{E}_{(\mathbf{x}_{i},y_{i})\sim p(\mathbf{x},y)}\mathbb{E}_{\mathbf{\theta} \sim p(\mathbf{\theta}|\mathcal{D})}l(\mathbf{\theta},d(\mathbf{x}_{i})), \tag{3}\]
where \(l(\mathbf{\theta},d(\mathbf{x}_{i}))\) is the loss of making decision \(d(\mathbf{x}_{i})\) for \(\mathbf{x}_{i}\) when the environment is \(\mathbf{\theta}\) (model's parameters). The decision estimator \(d\) that minimizes the integrated risk is proved to give the optimal decisions in terms of the decision loss (Robert et al., 2007).
In order to exploit Eq. 3 as the objective, we need to determine the posterior and the optimal decision at the same time, which is notoriously hard because they are dependent on each other. Inspired by the Expectation-Maximization (EM) algorithm (Dempster et al., 1977; Lacoste-Julien et al., 2011), which alternately conducts the integration and optimization steps, we propose a long-tailed version of variational EM algorithm to alternately update a variational distribution and a classification decision on long-tailed data. To use EM, we convert the minimization problem to a maximization problem. Specifically, we define the _decision gain_\(g(\mathbf{\theta},d(\mathbf{x}_{i}))\propto-l(\mathbf{\theta},d(\mathbf{x}_{i}))\) to be:
\[g(\mathbf{\theta},d(\mathbf{x}_{i})):=\prod_{y^{\prime}}p(y^{\prime}|\mathbf{x}_{i},\mathbf{ \theta})^{u(y^{\prime},d(\mathbf{x}_{i}))} \tag{4}\]
to represent what we gain from making decision \(d(\mathbf{x}_{i})\) given the environment \(\mathbf{\theta}\) (see Appendix A for more details). Here, \(u(y^{\prime},d)\) is a fixed utility function that gives the utility of making decision \(d\) when the true label is \(y^{\prime}\). Then our goal becomes maximizing the _integrated gain_:
\[G(d):=\mathbb{E}_{(\mathbf{x}_{i},y_{i})\sim p(\mathbf{x},y)}\mathbb{E}_{\mathbf{\theta} \sim p(\mathbf{\theta}|\mathcal{D})}g(\mathbf{\theta},d(\mathbf{x}_{i})). \tag{5}\]
We will work with this objective and discuss the details of Long-tailed Bayesian Decision in the following sections.
### Task-adaptive Utility Functions
We first discuss the design of the utility function: \(u(y,d)\), where \(y\) is the ground truth (true label) and \(d\) is the decision (predicted label). The utility function defines the gain of making different decisions and can encode our preference for specific metrics in various tasks. The utility function is a standard component in Decision Theory and its design has been comprehensively studied in the literature. For example, Chapter 2.2 of Robert et al. (2007) guarantees the existence of utility functions with rational decision-makers. Generally, the values of the utility function over all class labels are stored in a form of utility matrix \(\mathbf{U}\), where \(U_{ij}=u(y=i,d=j)\).
In a standard classification setting, the overall accuracy is the most decisive metric in evaluation. It only matters whether the decision is consistent with the ground truth (i.e., \(y=d\)). Therefore, as shown in Fig. 1(a), a simple one-hot utility can be defined by \(u(y,d)=\mathbb{1}\left\{y=d\right\}\), which is corresponding to the standard accuracy metric.
In modern applications of long-tailed classification, the semantic importance of "tailed" data often implies more penalty in the circumstance of predicting tailed samples as head (Sengupta et al., 2016; Rahman et al., 2021; Yang et al., 2022). Besides, the lack of training samples in tailed classes has been empirically proved to be the bottleneck of classification performance (Li et al., 2022). Therefore, the ratio of false head samples in evaluation would reflect the potential of a model in real-world applications 2. To this end, a tail-sensitive utility can be defined by adding an extra penalty on those false head samples, as shown in Fig. 1(b). The tail-sensitive utility encourages the model to predict any uncertain sample as tail rather than head, while not affecting the predictions of the true class when the model is confident.
Footnote 2: The quantitative form will be discussed in Section 6.3.
### Inference Step
Due to the discrepancy between training (long-tailed) and testing (uniform) data distributions, we propose to compute the integrated gain with the posterior of _testing_ data \(p(\mathbf{\theta}|\mathcal{D}_{test})\) to target at evaluation, where \(\mathcal{D}_{test}=\left\{(\mathbf{x}_{i},y_{i})\right\}_{i=1}^{N}\) with \((\mathbf{x}_{i},y_{i})\sim p_{test}(\mathbf{x},y)\). To infer the posterior \(p(\mathbf{\theta}|\mathcal{D}_{test})\), we use the variational method where a variational distribution \(q(\mathbf{\theta})\) is introduced to the lower bound of the integrated gain in Eq. 5:
\[\log G(d)=\log\mathbb{E}_{(\mathbf{x}_{i},y_{i})\sim p_{test}(\mathbf{x}, y)}\mathbb{E}_{\mathbf{\theta}\sim p(\mathbf{\theta}|\mathcal{D}_{test})}g(\mathbf{ \theta},d(\mathbf{x}_{i}))\] \[\geq\mathbb{E}_{(\mathbf{x}_{i},y_{i})\sim p_{train}(\mathbf{x},y)} \mathbb{E}_{\mathbf{\theta}\sim q(\mathbf{\theta})}\frac{p_{test}(\mathbf{x}_{i},y_{i})}{ p_{train}(\mathbf{x}_{i},y_{i})}.\] \[\quad\quad\left[\log p(y_{i}|\mathbf{x}_{i},\mathbf{\theta})+\sum_{y^{ \prime}}u(y^{\prime},d)\log p(y^{\prime}|\mathbf{x}_{i},\mathbf{\theta})\right]\] \[-KL(q(\mathbf{\theta})||p(\mathbf{\theta}))+C\] \[:=L(q,d), \tag{6}\]
where \((\mathbf{x}_{i},y_{i})\) is training data but we forcefully compute its probability on testing distribution, and \(C\) is a constant. Eq. 6 is proved in Appendix C and the main idea is to apply Jensen's inequality (Jensen, 1906). The lower bound \(L(q,d)\) is our training objective and it provides a cross-entropy-like way to update the variational distribution \(q(\mathbf{\theta})\), and most importantly, converts the data distribution from \(p_{test}(\mathbf{x},y)\) (uniform) to \(p_{train}(\mathbf{x},y)\) (long-tailed) by the technique of importance sampling (Kloek and Van Dijk, 1978) to make the computation during training possible3.
Footnote 3: Thanks to the fact that training and testing data are semantically identical.
Moreover, the variational distribution \(q(\mathbf{\theta})\) is guaranteed to be an approximation of the posterior \(p(\mathbf{\theta}|\mathcal{D}_{test})\), because Eq. 6 contains Bayesian inference on the posterior of testing data. To support this, we look into the KL divergence between \(q(\mathbf{\theta})\) and \(p(\mathbf{\theta}|\mathcal{D}_{test})\):
\[KL(q(\mathbf{\theta})||p(\mathbf{\theta}|\mathcal{D}_{test})) \tag{7}\] \[=-\mathbb{E}_{(\mathbf{x}_{i},y_{i})\sim p_{train}(\mathbf{x},y)}\mathbb{ E}_{\mathbf{\theta}\sim q(\mathbf{\theta})}\frac{p_{test}(\mathbf{x}_{i},y_{i})}{p_{ train}(\mathbf{x}_{i},y_{i})}\] \[\quad\quad\quad\cdot\log p(y_{i}|\mathbf{x}_{i},\mathbf{\theta})+KL(q(\bm {\theta})||p(\mathbf{\theta}))-C,\]
where \(C\) is a constant. Eq. 7 is proved in Appendix D. Comparing Eq. 6 and Eq. 7, it is clear that part of the objective at inference step is Bayesian inference on the posterior of testing data, with \(q(\mathbf{\theta})\) approaching \(p(\mathbf{\theta}|\mathcal{D}_{test})\).4
Footnote 4: In fact, \(q(\mathbf{\theta})\) approximates the gain-calibrated posterior: \(\tilde{p}(\mathbf{\theta}|\mathcal{D}_{test})\propto p(\mathbf{\theta}|\mathcal{D}_{ test})\prod_{y^{\prime}}p(y^{\prime}|\mathbf{x}_{i},\mathbf{\theta})^{u(y^{\prime},d(\mathbf{x}_{i}))}\).
In summary, the objective \(L(q,d)\) in Eq. 6 enables the framework to simultaneously consider inference, decision (utility), and data distribution (\(p_{test}(\mathbf{x},y)/p_{train}(\mathbf{x},y)\), which will be further discussed in Section 5.1). It provides a principled way to optimize the integrated gain in Eq. 5.
Figure 1: Two examples of utility matrices, designed for (a) standard and (b) tail-sensitive classifications respectively.
### Decision Step
To optimize \(L(q,d)\) w.r.t the decision \(d\), one way is to select the decision \(d^{\star}\) that maximizes the gain respectively for each input \(\mathbf{x}_{i}\) given the current variational distribution:
\[d^{\star}=\operatorname*{arg\,max}_{d}\mathbb{E}_{\mathbf{\theta}\sim q(\mathbf{\theta} )}\sum_{y^{\prime}}u(y^{\prime},d)\log p(y^{\prime}|\mathbf{x}_{i},\mathbf{\theta}). \tag{8}\]
Notably, for symmetric utility functions (e.g., one-hot utility), Eq. 8 can be further simplified: \(d^{\star}=\operatorname*{arg\,max}_{d}\mathbb{E}_{\mathbf{\theta}\sim q(\mathbf{\theta} )}\log p(d|\mathbf{x},\mathbf{\theta})\), which is equivalent to the maximum of the predictive distribution.
However, during training, we essentially know that the optimal decisions for training data are their true labels. Therefore, we can utilize this knowledge and simply set \(d(x_{i})=y_{i}\). We can also view this as selecting the optimal decisions under a well-estimated \(q(\mathbf{\theta})\) in Eq. 8 instead of the current distribution, since we expect \(d^{\star}\) approach the true labels as \(q(\mathbf{\theta})\) keeps updating. Then the objective can be further simplified to be:
\[\begin{split}& L(q,d)=L(q)\\ &=\mathbb{E}_{(\mathbf{x}_{i},y_{i})\sim p_{train}(\mathbf{x},y)}\mathbb{ E}_{\mathbf{\theta}\sim q(\mathbf{\theta})}\frac{p_{test}(\mathbf{x}_{i},y_{i})}{p_{ train}(\mathbf{x}_{i},y_{i})}\cdot\\ &\qquad\left[\log p(y_{i}|\mathbf{x}_{i},\mathbf{\theta})+\sum_{y^{\prime }}u(y^{\prime},y_{i})\log p(y^{\prime}|\mathbf{x}_{i},\mathbf{\theta})\right]\\ &-KL(q(\mathbf{\theta})||p(\mathbf{\theta}))+C.\end{split} \tag{9}\]
During testing, we use Eq. 8 to select the decision for testing data \(\mathbf{x}_{i}\).
## 5 On Computation of Inference Step
### Train-test Discrepancy
At the inference step, we exploit the importance sampling to convert \(p(\mathcal{D}_{test})\) to \(p(\mathcal{D}_{train})\) and obtain a discrepancy ratio \(p_{test}(\mathbf{x},y)/p_{train}(\mathbf{x},y)\). Recall that in long-tailed distribution, the training and testing data are semantically identical, and thus the model prediction must be the same for an input regardless of being in the training or testing set (i.e., \(p_{train}(y|\mathbf{x},\mathbf{\theta})=p_{test}(y|\mathbf{x},\mathbf{\theta})\)). Therefore, the discrepancy ratio can be further simplified by:
\[\frac{p_{test}(\mathbf{x},y)}{p_{train}(\mathbf{x},y)}=\frac{p_{test}(y)p_{test}(\mathbf{x} |y)}{p_{train}(y)p_{train}(\mathbf{x}|y)}=\frac{p_{test}(y)}{p_{train}(y)}, \tag{10}\]
which only depends on the class probabilities of training and testing data. Since we assume a uniform distribution for the testing set in long-tailed data, the probability \(p_{test}(\mathbf{y})\) would be a constant for all \(\mathbf{x}\), and thus the discrepancy ratio is equivalent to:
\[\frac{p_{test}(y)}{p_{train}(y)}\propto\frac{1}{p_{train}(y)}\propto\frac{1}{f (n_{y})}, \tag{11}\]
where \(f\) is an increasing function and \(n_{y}\) refers to the number of samples in the class \(y\). We introduce the notation of \(f(n_{y})\) because the class probability only depend on the number of samples in this class.
The choices of \(f\) can determine different strategies used by previous re-balancing methods in long-tailed classification. For example, \(f(n_{y})=n_{y}^{\gamma}\) is the most conventional choice with a sensitivity factor \(\gamma\) to control the importance of head classes (Huang et al., 2016; Wang et al., 2017; Pan et al., 2021); \(f(n_{y})=(1-\beta^{n_{y}})/(1-\beta)\) is the effective number which considers data overlap (Cui et al., 2019). A detailed analysis on the choice of discrepancy ratios will be conducted in Section 6.5. Notably, our framework is compatible with all previous re-balancing methods as long as they can be expressed in the form of \(1/f(n_{y})\).
### Particle-based Variational Distribution
To pursue the efficiency of model architecture, we use particle optimization (Liu and Wang, 2016; D'Angelo and Fortuin, 2021) to obtain the variational distribution: \(q(\mathbf{\theta})=\sum_{j=1}^{M}w_{j}\cdot\delta(\mathbf{\theta}-\mathbf{\theta}_{j})\), where \(\{w_{j}\}_{j=1}^{M}\) are normalized weights which hold \(\sum_{j=1}^{M}w_{j}=1\), and \(\delta(\cdot)\) is the Dirac delta function. The "particles" \(\{\mathbf{\theta}_{j}\}_{j=1}^{M}\) are implemented by ensemble model, which has been empirically explored on the long-tailed data (Wang et al., 2020; Li et al., 2022). Our formulation gives theoretical justification to ensemble approaches in long-tailed problems: Due to the scarcity of tailed data, there is not enough evidence to support a single solution, leading to many equally good solutions (which give complementary predictions) in the loss landscape. Thus, estimating the full posterior is essential to provide a comprehensive characterization of the solution space. Particle optimization reduces the cost of Bayesian inference and is more efficient than variational inference and Markov chain Monte Carlo (MCMC), especially on high-dimensional and multimodal distributions. Besides, the computational cost of our method can be further reduced by leveraging recent techniques, such as partially being Bayesian in model architectures (Kristiadi et al., 2020).
### Repulsive Regularization
In Eq. 6, the regularization term \(KL(q(\mathbf{\theta})||p(\mathbf{\theta}))\) guarantees the variational distribution to approach the posterior as training proceeds. If we assume the prior \(p(\mathbf{\theta})\) to be Gaussian, the regularization can be extended to:
\[\begin{split} KL&(q(\mathbf{\theta})||p(\mathbf{\theta}))\\ &=\lambda\int_{\Theta}||\mathbf{\theta}||^{2}\cdot q(\mathbf{\theta})d \mathbf{\theta}+\int_{\Theta}q(\mathbf{\theta})\log q(\mathbf{\theta})d\mathbf{\theta}\\ &=\lambda\cdot\frac{1}{M}\sum_{j=1}^{M}||\mathbf{\theta}_{j}||^{2}-H (\mathbf{\theta}),\end{split} \tag{12}\]
where \(\lambda\) is a constant, \(\Theta\) is the parameter space, and \(H(\mathbf{\theta})\) is the entropy of \(\mathbf{\theta}\). The \(L_{2}\)-regularization prevents the model from over-fitting and the entropy term applies a _repulsive force_ to the particles to promote their diversity, pushing the particles to the target posterior (D'Angelo and Fortuin, 2021). A simple approximation for the entropy is used in this paper:
\[H(\mathbf{\theta})\propto\frac{1}{2}\log|\hat{\Sigma}_{\mathbf{\theta}}|, \tag{13}\]
where \(\hat{\Sigma}_{\mathbf{\theta}}\) is the covariance matrix estimated by those particles. Other entropy approximations can also be used. By the technique of SWAG-diagonal covariance (Maddox et al., 2019), the covariance matrix can then be directly computed by: \(\hat{\Sigma}_{\mathbf{\theta}}=diag(\overline{\mathbf{\theta}^{2}}-\overline{\mathbf{ \theta}}^{2})\).
Overall, the regularization term is a combination of \(L_{2}\) weight decay and repulsive force, and is computed by:
\[KL(q(\mathbf{\theta})||p(\mathbf{\theta}))\propto\frac{\lambda}{M}\sum_{j=1}^{M}|| \mathbf{\theta}_{j}||^{2}-\frac{1}{2}\sum_{k}\log{(\overline{\mathbf{\theta}}^{2}- \overline{\mathbf{\theta}}^{2})_{k}}. \tag{14}\]
Our regularization is different from existing diversity regularization (Wang et al., 2020), and is more principled and naturally derived from the integrated gain.
In summary, our method, with principled design and theoretical justification, is essentially cheap and easy to implement and can be used as a drop-in replacement for existing re-balancing and ensemble methods in general long-tailed problems. We outline our algorithm in Appendix B.
## 6 Experiments
### Experimental Settings
Datasets.We use three long-tailed image datasets. CIFAR-10-LT and CIFAR-100-LT (Cui et al., 2019) are sampled from the original CIFAR dataset (Krizhevsky and Hinton, 2009). ImageNet-LT (Liu et al., 2019) is sampled from the the dataset of ILSVRC 2012 competition (Deng et al., 2009), and contains 115.8K images in 1,000 classes.
Evaluation.The evaluation protocol consists of standard classification accuracy, a newly designed experiment on the False Head Rate (FHR), and calibration with predictive uncertainty. Besides, we conduct several ablation studies to evaluate different choices of implementation and the effectiveness of components in our method. For all quantitative and visual results, we repeatedly run the experiments five times with random initialization to obtain the averaged results and standard deviations to eliminate random error.
Compared Baselines.We compare our method (LBD) with cross entropy baseline, re-balancing methods (CB Loss (Cui et al., 2019) and LDAM (Cao et al., 2019)), and ensemble methods (RIDE (Wang et al., 2020) and TLC (Li et al., 2022)). The numbers of classifiers in all ensemble models are set to be 3. We also compare the Bayesian predictive uncertainty with MCP baseline (Hendrycks and Gimpel, 2017) and evidential uncertainty (Sensoy et al., 2018). We use \(f(n_{y})=n_{y}\) unless otherwise specified. More implementation details are in Appendix. E.
### Standard Classification
Classification accuracy is the most standard benchmark for long-tailed data, where the overall accuracy and accuracies for three class regions are evaluated. Classes are equally split into three class regions (head, med and tail). For example, there are 33, 33 and 34 classes respectively in the head, med and tail regions of CIFAR-100-LT. The classification results are shown in Table 1 and Table 2. We apply the one-hot utility in our method to accord with standard accuracy metric. Our method consistently outperforms all other compared methods in terms of overall accuracy. For regional accuracies, our method achieves the best performances on all class regions in most cases. In particular, our method significantly outperforms previous methods on the crucial tailed data, while being comparable or even better on med and head classes. These results demonstrate the effectiveness of taking a Bayesian-decision-theory perspective on the long-tailed problem.
cant improvements of LBD over previous methods under all settings, especially on the relatively small CIFAR datasets, which means that the "false head risk" is more severe on smaller datasets with scarce tailed samples. This shows the importance of taking the decision loss into account and also demonstrates the flexibility of our framework which is compatible with different utilities, leading to better performance for different types of tasks. In contrast, previous methods do not consider decision loss and may result in undesirable consequences when some type of errors have high costs.
### Calibration
In our method, the predictive uncertainty can be naturally obtained by the entropy of predictive distribution (Malinin and Gales, 2018). For the compared uncertainty algorithms, MCP is a trivial baseline which obtains uncertainty scores from the maximum value of softmax distribution, which is added to the RIDE (Wang et al., 2020) backbone; evidential uncertainty is rooted in the subjective logic (Audun, 2018), and is introduced to long-tailed classification by TLC (Li et al., 2022). We evaluate the uncertainty algorithms with AUC (McClish, 1989) and ECE (Naeini et al., 2015), which are shown in Table 4. Our Bayesian predictive uncertainty outperforms the other two counterparts and has a remarkable advantage on the ECE metric, demonstrating the superiority of using principled Bayesian uncertainty quantification.
### Ablation Studies
Utility Function.The effectiveness of tail-sensitive utility is shown in Table 5, where we compare the one-hot and tail-sensitive utilities in terms of False Head Rate and classification accuracy. By applying the tail-sensitive utility, the performances on False Head Rate can be significantly improved (18.00%) with negligible drop on the classification accuracy (0.04%).
Train-test Discrepancy.We compare five different forms of discrepancy ratio in terms of classification accuracy in Table 6 and Fig. 2. We also analyze the properties of the compared discrepancy ratios. The differences of \(f(n_{y})\) show up when \(n_{y}\) is large, and it can be measured by the growth rate of weight values (i.e., \(1/f(n_{y})\)) between the first and the last class. We find that as the growth rate becomes larger, the overall accuracy will be better accordingly, which shows the severity of class imbalance.
For the classification accuracies of three class regions, Fig. 2 shows similar results on the relationship between growth rate and the tail accuracy. As the growth rate becomes larger, the tail and med ACC will both become significantly better despite the slight drop on head ACC, which is consistent with the overall improvement. Based on these results, we suggest using \(f(n_{y})=n_{y}\) in general.
\begin{table}
\begin{tabular}{c|c|c c} \hline \multicolumn{1}{c|}{Dataset} & Algorithm & AUC (\%) \(\uparrow\) & ECE (\%) \(\downarrow\) \\ \hline \hline \multirow{3}{*}{CIFAR-10-LT} & MCP & 79.98\(\pm\)0.10 & 14.33\(\pm\)0.37 \\ & evidential & 83.20\(\pm\)0.59 & 13.24\(\pm\)0.55 \\ & Bayesian & **86.83\(\pm\)0.68** & **9.84\(\pm\)0.17** \\ \hline \multirow{3}{*}{CIFAR-100-LT} & MCP & 80.48\(\pm\)0.51 & 23.75\(\pm\)0.51 \\ & evidential & 77.37\(\pm\)0.33 & 21.64\(\pm\)0.47 \\ & Bayesian & **81.24\(\pm\)0.25** & **10.35\(\pm\)0.28** \\ \hline \multirow{3}{*}{ImageNet-LT} & MCP & 84.02\(\pm\)0.24 & 18.35\(\pm\)0.12 \\ & evidential & 81.45\(\pm\)0.13 & 15.29\(\pm\)0.12 \\ \cline{1-1} & Bayesian & **84.45\(\pm\)0.09** & **8.72\(\pm\)0.13** \\ \hline \end{tabular}
\end{table}
Table 4: Quantitative results of calibration of different uncertainty algorithms. LBD outperforms previous methods remarkably on both metrics and all datasets.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \multicolumn{1}{c|}{\multirow{2}{*}{\(\mathcal{D}\)}} & Method & \multicolumn{3}{c}{ACC (\%) \(\uparrow\)} \\ & & head & MED & FAIL \\ \hline \hline \multirow{4}{*}{CIFAR-100-LT} & CE & 93.22\(\pm\)0.26 & 74.27\(\pm\)0.42 & 58.51\(\pm\)0.62 \\ & CB Loss & 91.70\(\pm\)0.57 & 75.41\(\pm\)0.76 & 68.73\(\pm\)1.52 \\ & LDAM & 90.03\(\pm\)0.47 & 75.88\(\pm\)0.81 & 77.14\(\pm\)1.61 \\ & RIDE & **91.49\(\pm\)0.40** & **79.39\(\pm\)0.61** & 79.62\(\pm\)1.56 \\ & TLC & 89.47\(\pm\)0.33 & 74.33\(\pm\)0.96 & 76.39\(\pm\)0.98 \\ & LBD & 09.49\(\pm\)0.60 & 78.89\(\pm\)0.87 & **82.33\(\pm\)1.16** \\ \hline \multirow{4}{*}{CIFAR-100-LT} & CE & 68.30\(\pm\)0.61 & 38.39\(\pm\)0.49 & 10.62\(\pm\)1.23 \\ & CB Loss & 62.53\(\pm\)0.44 & 44.36\(\pm\)0.96 & 20.50\(\pm\)0.51 \\ & LDAM & 63.58\(\pm\)0.93 & 42.90\(\pm\)1.03 & 23.50\(\pm\)1.28 \\ & RIDE & 69.11\(\pm\)0.54 & 49.70\(\pm\)0.59 & 28.78\(\pm\)1.52 \\ & LBD & **69.92\(\pm\)0.77** & **51.07\(\pm\)0.82** & **30.34\(\pm\)1.49** \\ \hline \multirow{4}{*}{CIFAR-100-LT} & CE & 53.46\(\pm\)0.36 & 45.92\(\pm\)0.19 & 44.03\(\pm\)0.24 \\ & CB Loss & 57.62\(\pm\)0.46 & 49.19\(\pm\)0.21 & 48.29\(\pm\)0.41 \\ \cline{1-1} & LDAM & 57.66\(\pm\)0.40 & 48.26\(\pm\)0.19 & 47.21\(\pm\)0.22 \\ \cline{1-1} & RIDE & 60.88\(\pm\)0.71 & 51.35\(\pm\)0.44 & 50.74\(\pm\)0.62 \\ \cline{1-1} & TLC & 61.19\(\pm\)0.53 & 52.35\(\pm\)0.31 & 51.56\(\pm\)0.35 \\ \cline{1-1} & LBD & **62.18\(\pm\)0.28** & **53.06\(\pm\)0.22** & **51.98\(\pm\)0.40** \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative results of classification accuracies on three class regions. LBD outperforms previous methods in all class regions in most cases, especially on tailed data.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \multicolumn{1}{c|}{\multirow{2}{*}{\(\mathcal{D}\)}} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{FHR (\%) \# tail ratio \(\downarrow\)} \\ & & 25\% & 50\% & 75\% & avg \\ \hline \hline \multirow{4}{*}{CIFAR-10-LT} & CE & 21.10\(\pm\)0.43 & 37.87\(\pm\)0.57 & 48.75\(\pm\)1.39 & 35.91\(\pm\)0.54 \\ & CB Loss & 14.84\(\pm\)0.93 & 27.98\(\pm\)1.44 & 33.93\(\pm\)1.60 & 25.58\(\pm\)1.27 \\ & LDAM & 10.05\(\pm\)1.01 & 19.64\(\pm\)1.66 & 21.37\(\pm\)1.20 & 17.02\(\pm\)1.56 \\ & RIDE & 8.94\(\pm\)0.40 & 17.80\(\pm\)1.39 & 19.77\(\pm\)3.20 & 15.50\(\pm\)1.68 \\ & TLC & 10.42\(\pm\)0.64 & 20.27\(\pm\)0.77 & 22.24\(\pm\)1.53 & 17.64\(\pm\)0.93 \\ & LBD & **4.99\(\pm\)0.32** & **11.16\(\pm\)0.29** & **11.12\(\pm\)0.28** & **1.25\(\pm\)0.49** \\ \hline \multirow{4}{*}{CIFAR-100-LT} & CE & 45.53\(\pm\)1.54 & 73.03\(\pm\)1.59 & 9.10\(\pm\)1.24 & 69.95\(\pm\)1.40 \\ & CB Loss & 24.88\(\pm\)0.34 & 48.41\(\pm\)1.24 & 74.38\(\pm\)1.47 & 49.22\(\pm\)0.83 \\ & LDAM & 21.22\(\pm\)0.99 & 43.04\(\pm\)1.18 & 65.62\(\pm\)1.31 & 43.29\(\pm\)1.04 \\ & RIDE & 18.83\(\pm\)0.70 & 39.50\(\pm\)1.53 & 6.62\(\pm\)1.20 & 40.11\(\pm\)1.62 \\ & TLC & 21.18\(\pm\)0.54 & 41.15\(\pm\)0.55 & 61.34\(\pm\)1.03 & 41.22\(\pm\)0.55 \\ & LBD & **15.39\
Repulsive Force.We evaluate the effectiveness of repulsive force in Table 7. The repulsive force effectively pushes the particles to the target posterior and avoids collapsing into the same solution. Therefore, with the repulsive force, better predictive distributions can be learned, and thus better predictive uncertainty can be obtained. Besides, the repulsive force can also improve the accuracy by promoting the diversity of particles.
Number of Particles.Generally, using more classifiers in ensembles will induce better performances. However, we also need to balance the performance with the computational cost. We visualize the classification accuracies under different numbers of particles in Fig. 3. The error bars are scaled to be \(2\sigma\), where \(\sigma\) is the standard deviation from repeated experiments. The accuracy curves are all logarithm-like and the accuracy improvement is hardly noticeable for more than six particles. However, the computational cost is increasing in a linear speed. Therefore, we recommend using no more than six particles in practice for a desirable performance-cost trade-off.
## 7 Conclusion and Future Directions
In this paper, we propose Long-tailed Bayesian Decision (LBD), a principled framework for solving long-tailed problems, with both theoretical explanation and strong empirical performance. Based on Bayesian Decision Theory, LBD unifies data distribution, posterior inference, and decision-making and further provides theore
\begin{table}
\begin{tabular}{c|c|c c c|c} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} Discrepancy ratio \\ \end{tabular} } & \multicolumn{4}{c|}{Weight value} & \multirow{2}{*}{ACC (\%) \(\uparrow\)} \\ & & first class & last class & growth (\%) \\ \hline linear (Wang et al., 2017) & \multirow{4}{*}{\(f(n_{y})=\frac{1-\beta^{n_{y}}}{1-\beta}\)} & 0.0020 & 0.1667 & **8250** & **50.17\(\pm\)0.25** \\ effective (Cui et al., 2019) & & 0.0023 & 0.1669 & 7297 & 49.90\(\pm\)0.36 \\ sqrt (Pan et al., 2021) & & 0.0447 & 0.4082 & 814 & 47.03\(\pm\)0.30 \\ log & & \(\log n_{y}\) & 0.1609 & 0.5581 & 247 & 45.26\(\pm\)0.51 \\ plain & & constant & 1.0000 & 1.0000 & 0 & 43.27\(\pm\)0.30 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on the choice of discrepancy ratio, compared by classification accuracies on CIFAR-100-LT.
\begin{table}
\begin{tabular}{c|c c c c|c c} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} Repulsive \\ force \\ \end{tabular} } & \multicolumn{2}{c}{AUC (\%) \(\uparrow\)} & \multicolumn{2}{c}{ECE (\%) \(\downarrow\)} & \multicolumn{2}{c}{ACC (\%) \(\uparrow\)} \\ \hline ✓ & **81.24\(\pm\)0.25** & **10.35\(\pm\)0.28** & **50.24\(\pm\)0.70** \\ \(\times\) & 75.94\(\pm\)0.56 & 13.40\(\pm\)0.80 & 50.15\(\pm\)0.41 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Ablation study on repulsive force, compared by uncertainty calibration on CIFAR-100-LT.
Figure 3: Visual results of classification with respect to the number of particles on CIFAR-100-LT.
Figure 2: Visual results of classification with respect to the choice of discrepancy ratio on CIFAR-100-LT.
existing techniques such as re-balancing and ensemble. In LBD, we introduce the integrated risk as the objective, find a tractable variational lower bound to optimize this objective, and apply particle optimization to efficiently estimate the complex posterior. For the real-world scenario with tail sensitivity risk, we design a tail-sensitive utility to pursue a better False Head Rate. In experiments, we evaluate our framework on standard classification, tail-sensitive classification, calibration, and ablation studies. Our framework outperforms the current SOTA even on large-scale real-world datasets like ImageNet.
Our method is simple to use in general long-tailed problems, providing superior accuracy on all types of classes and uncertainty estimation. We believe there is considerable space for future developments that build upon our method and we list a few below:
Long-tailed Regression.Long-tail problem also exists in regression, where the distribution of targets can be heavily imbalanced. With some adjustments on the decision gain, our framework might also be adapted to regression.
Utility Function.Beyond long-tailed classification, there are other tasks which also need specific utility functions. For example, we might have to separately deal with the relationship between categories due to their semantic connections. In this case, all of the values in the utility matrix will need re-calculating.
Dataset Shift.In more general dataset shift scenarios like out-of-distribution data, the assumption about semantically identical training and testing sets will be no longer valid. Another example is about the distribution of testing data. If it is assumed to be not uniform, the discrepancy ratio \(p_{test}(y)/p_{train}(y)\) will no longer be expressed in the form \(1/f(n_{y})\), but a more general form.
|
2307.14186 | Complexity results for the Pilot Assignment problem in Cell-Free Massive
MIMO | Wireless communication is enabling billions of people to connect to each
other and the internet, transforming every sector of the economy, and building
the foundations for powerful new technologies that hold great promise to
improve lives at an unprecedented rate and scale. The rapid increase in the
number of devices and the associated demands for higher data rates and broader
network coverage fuels the need for more robust wireless technologies. The key
technology identified to address this problem is referred to as Cell-Free
Massive MIMO (CF-mMIMO). CF-mMIMO is accompanied by many challenges, one of
which is efficiently allocating limited resources. In this paper, we focus on a
major resource allocation problem in wireless networks, namely the Pilot
Assignment problem (PA). We show that PA is strongly NP-hard and that it does
not admit a polynomial-time constant-factor approximation algorithm. Further,
we show that PA cannot be approximated in polynomial time within
$\mathcal{O}(K^2)$ (where $K$ is the number of users) when the system consists
of at least three pilots. Finally, we present an approximation lower bound of
$1.058$ (resp. $\epsilon|K|^2$, for $\epsilon >0$) in special cases where the
system consists of exactly two (resp. three) pilots. | Shruthi Prusty, Sofiat Olaosebikan | 2023-07-26T13:26:12Z | http://arxiv.org/abs/2307.14186v2 | # Complexity results for the Pilot Assignment problem in Cell-Free Massive MIMO
###### Abstract
Wireless communication is enabling billions of people to connect to each other and the internet, transforming every sector of the economy, and building the foundations for powerful new technologies that hold great promise to improve lives at an unprecedented rate and scale. The rapid increase in the number of devices and the associated demands for higher data rates and broader network coverage fuels the need for more robust wireless technologies. The key technology identified to address this problem is referred to as Cell-Free Massive MIMO (CF-mMIMO). CF-mMIMO is accompanied by many challenges, one of which is efficiently allocating limited resources. In this paper, we focus on a major resource allocation problem in wireless networks, namely the Pilot Assignment problem (PA). We show that PA is strongly NP-hard and that it does not admit a polynomial-time constant-factor approximation algorithm. Further, we show that PA cannot be approximated in polynomial time within \(\mathcal{O}(K^{2})\) (where \(K\) is the number of users) when the system consists of at least three pilots. Finally, we present an approximation lower bound of \(1.058\) (resp. \(\epsilon|K|^{2}\), for \(\epsilon>0\)) in special cases where the system consists of exactly two (resp. three) pilots.
Keywords:Pilot Assignment Cell-Free Massive MIMO NP-hard optimization problem Strong NP-hardness Approximability
## 1 Introduction
### Background: Cell-free Massive MIMO
Wireless networks is an essential technology for enabling flexible communication and connectivity between individuals (or machines) across regions. In addition, it is transforming every sector of the economy (transportation, healthcare, education, etc.), and powerful new technologies (artificial intelligence, internet of things, etc.) are being built upon it. Cellular networks is the technology that 1G to 5G relies on [16, 26]. Here, the coverage area is divided up into non-overlapping cells, and we have a single Access Point (AP) coordinating data transmissions amongst user devices within its cell.
As the number of devices that depend on wireless communication networks continues to grow, each needing a high connection rate and better coverage with minimal interference, this technology will no longer be suitable [20, 26]. Particularly, resources (e.g. channels, power, spectrum) in a wireless network are limited and must be managed efficiently. Further, the User Equipments (UEs) at the edge of the cells get poor service due to _inter-cell interference_.
For future wireless communications (e.g., 6G), the key technology that has the potential to enhance connectivity and provide better coverage for billions of users is referred to as Cell-Free Massive Multiple-Input Multiple-Output (CF-mMIMO). As the name suggests, CF-mMIMO allows a device/user (UE) to be served by multiple access points (APs) that are within its range without the notion of boundaries; in contrast to the current technology, which allows each UE to be served by only one AP within a defined boundary. The goal of this network is to reduce inter-cell interferences, improve the uniform distribution of spectral efficiency amongst users and enhance network reliability [20]. CF-mMIMO is accompanied by many challenges, one of which is how to efficiently manage limited resources (spectrum, pilot signals, energy, and power) - the so-called resource allocation problem in wireless networks [5].
### The Pilot Assignment problem
Throughout this paper, we will consider a CF-mMIMO system with \(M\) single-antenna APs and \(K\) (\(K\ll M\)) UEs where the APs are randomly distributed in a large area. Since a large number of distributed APs jointly provide uniform service to a small number of UEs without any notion of boundaries, it is often the case that _AP selection_ is done for each user, such that only a subset of the APs providing service above a certain threshold are considered for any energy or spectral efficiency calculations pertaining to that user [19, 25]. An important assumption in this paper is that AP selection is always done for any CF-mMIMO system we consider [25]. We now turn our attention towards a major problem that hinders resource allocation in wireless communication networks, and is a resource allocation problem itself: the Pilot Assignment problem (PA).
It is essential to acquire accurate _Channel State Information (CSI)_ between the UEs and the APs to reap all the benefits potentially provided by the distributed user-centric CF-mMIMO architecture [5]. Channel estimation allows APs to process data signals from the UEs. To perform channel estimation for a user, a _pilot signal_ needs to be assigned to it. In the system setting, it is assumed that APs and UEs do not have _a priori_ CSI at the beginning of a coherent interval. The CSI is estimated in what is called a _pilot training phase_, which usually happens in the uplink. Channel estimation is needed only at the beginning of a coherent interval \(\tau_{c}\). Thus, \(\tau\) pilot sequences (or signals) of length \(\tau\) each are assigned to the UEs prior to uplink data transmission for channel estimation. Each UE is assigned one pilot. The received pilot information is used for channel estimation.
The estimated channel is used to detect the received data, thereby allowing us to calculate the Spectral Efficiency (SE) of each UE [7]. However, due to the limited length of the coherence interval, the available number of orthogonal pilot sequences is normally smaller than the number of UEs (\(\tau\ll K\)) and some UEs have to reuse a given pilot. Hence, the orthogonality among the pilot sequences for all UEs is typically not achieved. The pilot reuse causes an impairment known as pilot contamination, which can degrade the system performance, by lowering the achievable uplink/downlink rates and signal-to-interference-plus-noise ratio (SINR) of the system [19, 25].
### Existing work and contributions of this paper
PA has been extensively researched from the engineering perspective, with the aim of constructing heuristic-based algorithms that return sub-optimal solutions for this distributed architecture. The most straightforward and naive pilot allocation strategy is the _random_ pilot assignment scheme, in which the available pilots are assigned randomly to each user [20]. Of course, this does not address any of the problems, including two nearby users sharing the same pilot, and thus turns out to be the worst scheme. The _greedy_ pilot assignment scheme proposed in [20] works by iteratively improving the downlink rate for the worst user. However, such a method can only improve the worst user's performance at any given point, and cannot guarantee an improvement in the whole system's performance. This means that this algorithm is not guaranteed to converge stably to a global maximum value.
The _location-based greedy_ pilot assignment scheme utilizes the location information of the users for the initial assignment of pilots in the greedy scheme, instead of random assignment. This method, however, does not prove to be very effective in practice and only promotes the throughput performances of a few users [19, 27]. The _structured_ pilot assignment scheme maximizes the minimum distance among UEs that share the same pilot using a clustering algorithm, but the implementation in a real-world cell-free massive MIMO system is hindered by the difficulty of finding the centroid APs in such practical systems [2, 19]. The _Maximal Increment_ (MI) algorithm maximizes the achievable (downlink) rate by maximizing the increment in an iterative algorithm, but it has high time complexity [24]. The _Hungarian Algorithm_ based pilot assignment scheme is an iterative procedure based on the Hungarian algorithm to enhance system throughput and fairness by avoiding pilot reuse among nearby users. However, it has been observed that it is not sufficiently accurate to measure pilot contamination solely based on the geographical distance between the users [4, 25].
Recently, two graph-based pilot assignment schemes have been used: the _graph-colouring_ based pilot assignment scheme [19] and the _Max-\(k\)-Cut_ based pilot assignment scheme in a weighted graphic framework [25]. In both cases, the CF-mMIMO architecture is modelled as a graph, with the UEs forming the vertices and the edges representing the interference between the UEs. The pilot
assignment optimization is then solved on this graph. Since both the graph-colouring and the Max-\(k\)-Cut problems fall in the class of NP-hard problems, heuristic graph algorithms have been employed to find reasonable solutions to PA, supported by experimental results obtained via simulations, where they outperform other non-graph theoretic algorithms with reasonable time complexity. Further, the Max-\(k\)-Cut scheme even outperforms the graph-colouring scheme [19, 25], making it the state of the art. This led us to believe that there must be some sort of natural relation between the pilot assignment optimization problem and classical graph-theoretic optimization problems. The existing experimental results in the literature aim to optimize a certain Quality of Service (QoS) metric such as the sum-user SE, the system throughput, and the system Energy Efficiency (EE), all of which can be expressed via the uplink/downlink achievable rates, which further depend on the Signal-to-Interference-plus-Noise Ratio (SINR) [4, 5, 18, 19, 25].
Our contribution.Despite all the research effort that has been put into solving the pilot assignment problem in cell-free massive MIMO systems, PA has received very little attention from a theoretical computer science perspective. In particular, to the best of our knowledge, there are no complexity results for this problem in the literature. Consequently, in this paper, we show that PA is _strongly_ NP-hard via a reduction from Min-\(k\)-Partition. We further show that the problem does not admit a polynomial-time constant-factor approximation algorithm. Inspired by the results in [8], we show that PA cannot be approximated in polynomial time within \(\mathcal{O}(K^{2})\) (where \(K\) is the number of users) when there are at least three pilots in the system. We also present an approximation lower bound of \(1.058\) (resp. \(\epsilon|K|^{2}\), for \(\epsilon>0\)) in special cases where the system consists of exactly two (resp. three) pilots. The implication of our result is that any positive (explicit bounds on performance ratios) or negative approximation results for Min-\(k\)-Partition can be directly translated for PA.
Organization.The remainder of this paper is organized as follows: We present preliminary definitions, and formally define the CF-mMIMO system as well as the PA optimization problem in Section 2. We present our complexity results on PA in Section 3. Finally, in Section 4, we present some concluding remarks and potential directions for future work.
## 2 Preliminaries and problem definition
### Definitions
The results in this paper use well-established terminologies from complexity and approximation theory. While we recall the most important ones for completness, we refer interested readers to [3, Chapter 1] for the formal definitions of terms such as optimization problems, NP-hardness, strong NP-hardness, approximation classes APX and \(F\)-APX, and the complete definition of AP-reducibility.
Definition 1 (Optimization problem): An **optimization problem**\(\Pi\) is a tuple \((I_{\Pi},\,\mathit{SOL}_{\Pi},\,m_{\Pi},\,\mathrm{goal}_{\Pi})\) where:
* \(I_{\Pi}\) is the set of instances of \(\Pi\),
* \(\mathit{SOL}_{\Pi}\) is a function that associates to an instance \(x\in I_{\Pi}\), the set of feasible solutions of \(x\),
* \(m_{\Pi}\) is a measure function, defined for a pair \((x,y)\) where \(x\in I_{\Pi}\) and \(y\in SOL_{\Pi}(x)\), that returns a positive rational which is the value of the feasible solution \(y\).
* \(\mathrm{goal}_{\Pi}\in\{\textsc{min, max}\}\) denotes whether \(\Pi\) is a minimization or maximization problem.
Given an input instance \(x\), we will denote by \(\mathit{SOL}_{P}^{*}(x)\) the set of optimal solutions of \(x\). The value of any optimal solution \(y^{*}(x)\) of \(x\) will be denoted by \(m_{\Pi}^{*}(x)\).
Definition 2 (The class NPO): An optimization problem \(\Pi=(I_{\Pi},\,\mathit{SOL}_{\Pi},\)\(m_{\Pi},\,\mathrm{goal}_{\Pi})\), where the tuple consists of problem instances, feasible solutions for every instance, measure function associated with every feasible solution, and goal of optimization respectively, belongs to the class **NPO** if the following holds:
1. the set of instances \(I_{\Pi}\) is recognizable in polynomial time,
2. there exists a polynomial \(q\) such that, given an instance \(x\in I_{\Pi}\),
1. \(|y|\leq q(|x|)\)\(\forall\)\(y\in SOL_{\Pi}(x)\),
2. and it is decidable in polynomial time whether \(y\in SOL_{\Pi}(x)\) for every \(y\) with \(|y|\leq q(|x|)\).
3. the measure function \(m_{\Pi}\) is computable in polynomial time.
Definition 3 (NP-hard optimization problems): An optimization problem \(\Pi\) is called **NP-hard** if, for every decision problem \(\Pi^{\prime}\in\mathrm{NP}\), \(\Pi^{\prime}\leq_{T}^{p}\Pi\), that is, \(\Pi^{\prime}\) can be solved in polynomial time by an algorithm which queries an "oracle" that, for any instance \(x\in I_{\Pi}\), returns an optimal solution \(y^{*}(x)\) of \(x\) along with its value \(m_{\Pi}^{*}(x)\).
Definition 4 (Strong NP-hardness): Let \(\Pi\) be a decision problem in NP, with an instance \(x\in I_{\Pi}\). Let \(\max(x)\) denote the value of the largest number occurring in the instance \(x\). Given a polynomial \(p\), let \(\Pi^{\max,p}\) be the restriction of \(\Pi\) to those instances with the property that \(\max(x)\leq p(|x|)\). If \(\Pi^{\max,p}\) remains an \(\mathrm{NP}\)-hard problem for some polynomial \(p\), then \(\Pi\) is called **strongly NP-hard**.
Definition 5 (Performance ratio): Given an optimization problem \(\Pi\), for any instance \(x\) of \(\Pi\) and for any feasible solution \(y\) of \(x\), the **performance ratio** of \(y\) with respect to \(x\) is defined as
\[R(x,y)=\max\left(\frac{m(x,y)}{m^{*}(x)},\frac{m^{*}(x)}{m(x,y)}\right).\]
**Definition 6** (\(r(n)\)-approximate algorithm): _Given an optimization problem \(\Pi\) in \(\mathrm{NPO}\), an approximation algorithm \(\mathcal{A}\) for \(\Pi\), and a function \(r:\mathbb{N}\mapsto(1,\infty)\), we say that \(\mathcal{A}\) is an \(r(n)\)**-approximate algorithm** for \(\Pi\) if, for any instance \(x\) such that \(SOL(x)\neq\emptyset\), the performance ratio of the feasible solution \(\mathcal{A}(x)\) with respect to \(x\) satisfies the following inequality:_
\[R(x,\mathcal{A}(x))\leq r(|x|).\]
**Definition 7** (Class APX): _The class of all \(\mathrm{NPO}\) problems \(\Pi\) such that, for some fixed constant \(c>1\), there exists a polynomial-time \(c\)-approximate algorithm \((\)also called constant-factor approximation algorithm\()\) for \(\Pi\)._
**Definition 8** (Class \(F\)-Apx): _Given a class of functions \(F\), \(F\)-\(\mathrm{APX}\) is the class of all NPO problems \(\Pi\) such that, for some function \(r\in F\), there exists a polynomial-time \(r(n)\)-approximate algorithm for \(\Pi\)._
In the context of approximation algorithms, a reduction from a problem \(\Pi_{1}\) to a problem \(\Pi_{2}\) should guarantee that an approximate solution for \(\Pi_{2}\) yields an approximate solution for \(\Pi_{1}\). Thus we need not only a function \(f\) mapping instances of \(\Pi_{1}\) into instances of \(\Pi_{2}\), but also a function \(g\) mapping back solutions of \(\Pi_{2}\) into solutions of \(\Pi_{1}\). Next, we define an approximation-preserving reducibility called _AP-reducibility_.
Definition 9 (AP-reducibility): _Let \(P_{1}\) and \(P_{2}\) be two optimization problems in NPO. \(P_{1}\) is said to be **AP-reducible** to \(P_{2}\), written \(P_{1}\leq_{AP}P_{2}\), if there exist two functions \(f\) and \(g\) and a constant \(\alpha\geq 1\) such that:_
1. _For an instance_ \(x\in I_{P_{1}}\)_, and for any rational_ \(r>1\)_,_ \(f(x,r)\in I_{P_{2}}\)_._
2. _For an instance_ \(x\in I_{P_{1}}\)_, and for any rational_ \(r>1\)_, if_ \(SOL_{P_{1}}(x)\neq\emptyset\)_, then_ \(SOL_{P_{2}}(f(x,r))\neq\emptyset\)_._
3. _For any instance_ \(x\in I_{P_{1}}\)_, for any rational_ \(r>1\)_, and for any_ \(y\in SOL_{P_{2}}(f(x,r))\)_,_ \(g(x,y,r)\in SOL_{P_{1}}(x)\)_._
4. \(f\) _and_ \(g\) _are computable by two algorithms_ \(\mathcal{A}_{f}\) _and_ \(\mathcal{A}_{g}\)_, respectively, whose running time is polynomial for any fixed rational_ \(r\)_._
5. _For any instance_ \(x\in I_{P_{1}}\)_, for any rational_ \(r>1\)_, and for any_ \(y\in SOL_{P_{2}}(f(x,r))\)_,_ \[R_{P_{2}}(f(x,r),y)\leq r\,\Rightarrow\,R_{P_{1}}(x,g(x,y,r))\leq 1+\alpha(r-1).\]
_In the rest of this paper, this condition will be referred to as the **AP-condition**. The triple \((f,g,\alpha)\) is said to be an **AP-reduction** from \(P_{1}\) to \(P_{2}\)._
Remark 1: In most reductions from one optimization problem to another in the literature, the quality of the solution we are looking for is not required to be known explicitly. This is the case with our reductions as well, so we shall replace \(f(x,r)\) and \(g(x,y,r)\) with \(f(x)\) and \(g(x,y)\), respectively.
### System Model
Building on the description of PA in cell-free massive MIMO (CF-mMIMO) in Section 1.2, we remark that the channel estimation error caused by pilot contamination translates into affected achievable rates and eventually leads to an observable degradation of system throughput. Thus, the system performance of cell-free massive MIMO has been characterized by the system throughput in the literature [25], which is defined as \(\sum\limits_{k=1}^{K}R_{k}^{u}\), where \(K\) is the number of users and \(R_{k}^{u}\) denotes the uplink achievable rate for user \(k\). The quantity \(R_{k}^{u}\) is further defined as \(R_{k}^{u}=\)
\[\frac{1-\tau/\tau_{c}}{2}\log_{2}\left(1+\frac{\rho^{u}\eta_{k} \Big{(}\sum_{m\in A(k)}\gamma_{km}\Big{)}^{2}}{\rho^{u}\sum_{k^{\prime}\in O(k) }\eta_{k^{\prime}}\Big{(}\sum_{m\in A(k)}\gamma_{km}\frac{\beta_{k^{\prime}m}} {\beta_{km}}\Big{)}^{2}}\right) \tag{1}\]
where
* \(\eta_{k}\) is the uplink power control coefficient,
* \(\rho^{u}\) is the normalized uplink SNR (signal-to-noise ratio),
* \(\beta_{km}\) denotes the large-scale fading coefficient between user \(k\) and AP \(m\) including geometric path loss and shadowing,
* \(\gamma_{km}\) denotes the mean-square of the channel estimation of the channel coefficient between user \(k\) and AP \(m\),
* \(A(k)\) denotes the indices of the APs serving user \(k\), and
* \(O(k)\) denotes the set of indices of users \(k^{\prime}\) with the same pilot as user \(k\), excluding user \(k\).
### Problem formulation
We present the first mathematical definition of a CF-mMIMO system as well as a feasible pilot assignment for the system.
Definition 10 (Cell-Free Massive MIMO system): Let \(\mathcal{A}\) denote the set of APs with cardinality \(M\), \(U\) denote the set of users with cardinality \(K\), and \(\Psi\) denote the set of pilots with cardinality \(\tau\). Let \(\mathbf{\beta}\) be a \(K\times M\) matrix, with the \((k,m)\)-th element denoting the large-scale coefficient \(\beta_{km}\) between user \(k\) and AP \(m\). For each user \(k\), the set \(A(k)\)\((1\leq k\leq K)\) denotes the indices of the subset of APs serving it, as described above. We shall refer to the tuple \(\big{(}\mathcal{A},U,\{A(k)\}_{k=1}^{K},\mathbf{\beta},\Psi\big{)}\) as a **cell-free massive MIMO (CF-mMIMO) system S**.
Definition 11 (Feasible Pilot Assignment): A **feasible** pilot assignment for a CF-mMIMO system \(S\) is a well-defined, surjective function \(f:U\rightarrow\Psi\).
We state an easy-to-see lemma:
Lemma 1: _Finding feasible pilot assignments for the users in a given cell-free massive MIMO system \(S\) can be done in polynomial time._
Proof: All we really need is the partition of the set of users \(U\) such that no two subsets in the partition intersect. As long as \(K\gg\tau\), this is always possible. A simple greedy approach to see that this is true would be to randomly select \(\tau-1\) users from \(U\), and assign them to the first \(\tau-1\) pilots in \(\Psi\) respectively. Then assign the remaining \(K-\tau+1\) users to the remaining \(\tau\)-th pilot in \(\Psi\). By construction, this is a feasible pilot assignment for the system \(S\).
We now motivate our definition of the Pilot Assignment problem:
As mentioned earlier in Introduction Section 1.2, we assume that AP selection for each user has already been done. Thus, every quantity in Equation (1) is a constant. The pilot assignment problem seeks to find the optimum sets \(O(k)\) for all \(k\). Put simply, we seek a partition of the users into \(\tau\) non-empty disjoint sets so that the pilot contamination in the system is minimized. How pilot contamination can be defined mathematically is still a topic of great interest. In this vein, notice that only the first term in the sum in the denominator of the huge fraction inside the logarithm expression in Equation (1) changes with a change in the assignment of pilots to users. Further, as noted before, only a single term in the denominator changes, as the rest are all constants. Thus we focus on the term
\[\rho^{u}\sum_{k^{\prime}\in O(k)}\eta_{k^{\prime}}\Big{(}\sum_{m\in A(k)} \gamma_{km}\frac{\beta_{k^{\prime}m}}{\beta_{km}}\Big{)}^{2}.\]
As the logarithm is an increasing function, a decrease in this term causes the entire expression to increase. In many well-cited papers in the field [20, 19, 25, 27], the pilot contamination effect at the \(k\)-th user is denoted by the term
\[\sum_{k^{\prime}\in O(k)}\sum_{m\in A(k)}\left(\beta_{k^{\prime}m}/\beta_{km} \right)^{2}.\]
We understand here that dropping all the constants multiplied with this term technically reduces it to a _simpler_ form of the pilot contamination problem. Thus, in this paper, we prove hardness results for this definition of the pilot contamination problem. Since our goal is to minimize pilot contamination in the system with an optimum pilot assignment for all the users, we shall consider the sum of the above terms for all the users, i.e., we aim to minimize
\[\sum_{k=1}^{K}\sum_{k^{\prime}\in O(k)}\sum_{m\in A(k)}\left(\beta_{k^{\prime }m}/\beta_{km}\right)^{2}.\]
There is also some physical intuition behind dropping the above constants. The large-scale coefficient for a user and an AP is higher when they are geographically closer [22]. It is safe to assume that for a user \(k\), AP selection yields
only those APs \(m\) in the set \(A(k)\) for which the large-scale coefficients \(\beta_{km}\) are reasonably high. The above equations tell us that given a user \(k\), the potential disturbance from users \(k^{\prime}\) is severe when the proximity between the interfering users \(k^{\prime}\) and the APs in the set \(A(k)\) is higher than that between the user \(k\) and the APs in \(A(k)\). In a way, we want to put those users \(k^{\prime}\) in \(O(k)\), whose large-scale coefficients with respect to the APs that have been selected to serve \(k\) are low. In other words, they are farther from those APs than user \(k\).
To summarize the development so far, we have that given \(M\) APs, \(K\) users, with each user assigned a set \(A(k)\) (\(1\leq k\leq K\)) which denotes the indices of the subset of APs serving it, large-scale coefficients \(\beta_{km}\) between a user \(k\) and AP \(m\), and \(\tau\) pilots, we need to find a partition of the \(K\) users into \(\tau\) disjoint sets (where a set contains the users served by the same pilot) such that
\[\sum_{k=1}^{K}\sum_{k^{\prime}\in O(k)}\sum_{m\in A(k)}\left(\beta_{k^{\prime }m}/\beta_{km}\right)^{2} \tag{2}\]
is minimized, where \(O(k)\) is the set of indices users \(k^{\prime}\) with the same pilot as user \(k\), excluding user \(k\). Note that since AP selection is done before pilot assignment, the innermost summation remains untouched in this optimization problem.
Now let the \(\tau\) subsets of \(U\) induced by \(f\) be \(V_{1},V_{2},\ldots V_{\tau}\). In Equation (2), observe the outer two summations. Given a user \(k\), we only have to consider those users \(k^{\prime}\) which are served by the same pilot as it. In other words, given a set in a partition, we must look at all possible pairs of users within it. We must then look at the contribution of the _pair_\(k,k^{\prime}\) to the second sum, which by symmetry turns out to be
\[\sum_{m\in A(k)}\left(\beta_{k^{\prime}m}/\beta_{km}\right)^{2}+\sum_{m^{ \prime}\in A(k^{\prime})}\left(\beta_{km^{\prime}}/\beta_{k^{\prime}m^{\prime }}\right)^{2} \tag{3}\]
As mentioned in [25], we can regard the above term as the quantity of interference (thus leading to potential pilot contamination) between users \(k\) and \(k^{\prime}\). Since the order or permutation of the elements of the pair \(k,k^{\prime}\) matters, each pair of elements in a partition contributes _two_ terms to the second sum.
We then rewrite the outer two summations in Equation (2) as a summation over all possible pairs \(k,k^{\prime}\) of Equation (3), and then a summation of this over all sets in the partition:
\[\sum_{t=1}^{\tau}\sum_{k,k^{\prime}\in V_{t}}\Biggl{(}\sum_{m\in A(k)}\left( \beta_{k^{\prime}m}/\beta_{km}\right)^{2}+\sum_{m^{\prime}\in A(k^{\prime})} \left(\beta_{km^{\prime}}/\beta_{k^{\prime}m^{\prime}}\right)^{2}\Biggr{)} \tag{4}\]
Thus, we get the following definition for the Pilot Assignment optimization problem:
**Definition 12** (Pilot Assignment Problem (Pa)): _Given a cell-free massive MIMO system \(S\), we call the optimization problem_
\[\min_{ffeasible}\sum_{\begin{subarray}{c}k,k^{\prime}\in U\\ f(k)=f(k^{\prime})\end{subarray}}\left(\sum_{m\in A(k)}\left(\beta_{k^{\prime} m}/\beta_{km}\right)^{2}+\sum_{m^{\prime}\in A(k^{\prime})}\left(\beta_{km^{ \prime}}/\beta_{k^{\prime}m^{\prime}}\right)^{2}\right)\]
_the **Pilot Assignment (PA)** problem._
We now define a classical graph problem that shall be crucial in proving our results.
**Definition 13** (Min-\(k\)-Partition): _Given an undirected graph \(G=(V,E)\) with \(n\) vertices and weight \(\omega_{i,j}\in\mathbb{Q}_{+}\) for the edge joining vertices \(i\) and \(j\)\(\forall\ 1\leq i,j\leq n\), the Min-\(k\)-Partition problem seeks to find a partition \(\mathcal{V}\) of \(V\) into \(k\) disjoint sets \(\{V_{1},V_{2},\ldots,V_{k}\}\) such that the total weight of the edges with endpoints within the same set is minimum._
The objective of the above problem can be formulated as:
\[\min_{\mathcal{V}=\{V_{1},V_{2},\ldots,V_{k}\}}\ \sum_{p=1}^{k}\ \sum_{i,j\in V _{p}}\omega_{i,j} \tag{5}\]
Note that the Max-\(k\)-Cut problem referred to in Section 1.3 is the dual of the Min-\(k\)-Partition problem. Moreover, both the Min-\(k\)-Partition and the Max-\(k\)-Cut problems are known to be NP-hard [1, 6, 12].
We shall formally define both the optimization and decision versions of the problem as per Definition 1 in Appendix 0.A. These can be referred to if the reader wants an explicit definition of PA as a tuple.
## 3 Complexity results for PA
### Primary results on complexity
To talk about the computational complexity of PA, we assume that all numerical values appearing as input data are rational and that they are encoded in binary form. We now present our first major theorem.
Theorem 3.1: _The following results hold for the time complexity of PA:_
1. \(\text{PA}\in\text{NPO}\)_._
2. \(\text{PA}\) _is_ NP-hard_._
3. \(\text{PA}\) _is strongly_ NP-hard_._
Proof:
1. It is recognizable in polynomial time whether a string encodes a tuple representing a cell-free massive MIMO system, as it primarily involves a check of the cardinality of multiple sets, and the dimension of a matrix. If all the users and pilots are listed individually, then the encoding length of a feasible pilot assignment for the users cannot exceed the encoding length of the CF-mMIMO system. From Lemma 1, we know that it is decidable in polynomial time if a pilot assignment is feasible. Finally, it is clear that Equation (4) is computable in polynomial time. Therefore, PA \(\in\) NPO.
2. We now present our proof of part \((ii)\), i.e., PA is NP-hard, by establishing the following claim: Min-\(k\)-Partition\(\leq_{T}^{p}\) PA. Consider an arbitrary instance of the Min-\(k\)-Partition problem, specified by a weighted graph \(G=(V,E,\omega)\), where \(\omega:E\rightarrow\mathbb{R}_{+}\) is a function mapping edges to their weights. We construct an instance of PA as follows: Set \(K=|V|\). The set of users \(U\) is set to be the set of vertices \(V\). Thus, \(|U|=K\). The number of pilots \(\tau\) is set to be \(k\). This determines the set \(\Psi=\{1,\ldots,k\}\). For all users indexed by \(1\leq i\leq K\), set \(|A(i)|=1\). This determines \(K\) number of APs. Since we know that in a practical cell-free massive MIMO system, we have \(M\gg K\), we could define many dummy APs to achieve this condition. We set the total number of APs in our PA instance as some arbitrarily large constant \(M\gg K\), which determines the set \(\mathcal{A}=\{1,\ldots,K,\ldots,M\}\). Further, \(\forall\;1\leq i\leq K\), let \(A(i)=\{i\}\). Thus, a single, distinct AP indexed by \(i\) serves the user \(i\). Now, \(\forall\;1\leq i\leq K\) and \(\forall\;1\leq m\leq M\) set \[\beta_{im}=\left\{\begin{array}{ll}1&\mbox{if }m=i,\\ \sqrt{\frac{\omega((i,m))}{2}}&\mbox{if }m\neq i\mbox{ and }m\leq K,\\ 0&\mbox{otherwise.}\end{array}\right.\] This determines the matrix \(\boldsymbol{\beta}\) in our instance of the problem. (\(\Rightarrow\)) If we have a solution to an instance of the Min-\(k\)-Partition problem, then by Definition 13, we have a partition \(\mathcal{V}=\{V_{1},V_{2},\ldots,V_{k}\}\) of the vertex set \(V\) such that the expression \(\sum\limits_{t=1}^{k}\sum\limits_{i,j\in V_{t}}\omega((i,j))\) is minimized. By our construction, the set of vertices \(V\) is the set of users \(U\) and \(k\) is the cardinality of the set \(\Psi\), \(\tau\). Replacing \(k\) by \(\tau\) and \(i,j\) by the arbitrary indices \(k,k^{\prime}\) to denote the users in our constructed instance of the PA problem, we get that the minimized equation is \(\sum\limits_{t=1}^{\tau}\sum\limits_{k,k^{\prime}\in V_{t}}\omega((k,k^{\prime }))\). Recall that the objective function to be minimized in a general instance of PA is Equation (4), which
can be simplified to \[\sum_{t=1}^{\tau}\sum_{k,k^{\prime}\in V_{t}}\Biggl{(}\sum_{m\in A(k )}\left(\beta_{k^{\prime}m}/\beta_{km}\right)^{2}+\sum_{m^{\prime}\in A(k^{ \prime})}\left(\beta_{km^{\prime}}/\beta_{k^{\prime}m^{\prime}}\right)^{2}\, \Biggr{)}\] \[=\sum_{t=1}^{\tau}\sum_{k,k^{\prime}\in V_{t}}\Biggl{(}\left(\beta_ {k^{\prime}k}/\beta_{kk}\right)^{2}+\left(\beta_{kk^{\prime}}/\beta_{k^{\prime }k^{\prime}}\right)^{2}\,\Biggr{)}\] \[=\sum_{t=1}^{\tau}\sum_{k,k^{\prime}\in V_{t}}\omega((k,k^{\prime }))\] Thus we have a solution to our PA problem instance. (\(\Leftarrow\)) By the above simplification of Equation (4), if we have a solution for the above-constructed instance of PA, we end up minimizing the expression \[\sum_{t=1}^{\tau}\sum_{k,k^{\prime}\in V_{t}}\omega((k,k^{\prime}))\] Now, we map the users \(k,k^{\prime}\) back to the vertices of the graph via arbitrary indices \(i,j\), and note that the number of pilots \(\tau\) is in fact the desired number of cuts, \(k\). So we end up minimizing the expression \[\sum_{t=1}^{k}\sum_{i,j\in V_{t}}\omega((i,j))\] which represents the total weight of such edges which have both endpoints in the same partition. Thus, due to the construction of our PA instance, we also have a solution to the corresponding Min-\(k\)-Partition instance. Therefore, the Min-\(k\)-Partition instance has a solution _if and only if_ if the above PA instance has a solution. This proves our claim. Since Min-\(k\)-Partition is NP-hard, we conclude that PA is NP-hard.
3. It is stated in [9] that Min-\(k\)-Partition is strongly NP-hard due to a reduction from the Graph \(k\)-Colourability problem, which has been proven to be strongly NP-complete [11]. As we could not find a reference for an explicit proof of this fact in the literature, we give a simple proof of the aforementioned reduction: The Chromatic Number or Graph \(k\)-Colourability is a non-numeric decision problem which asks whether the vertices of a graph \(G\) can be coloured using at most \(k\) colours such that no two adjacent vertices have the same colour. In other words, it asks if we can partition the vertices into at most \(k\) sets such that each of the sets is an independent set. We give a simple Turing reduction from Graph \(k\)-Colourability to Min-\(k\)-Partition as follows: Given an instance \((G,E,k)\) of the Graph \(k\)-Colourability problem, associate to it an instance of Min-\(k\)-Partition defined
by \(G=(V,E,\omega)\), such that \(\omega:E\to 1\). It is easy to see that \(G\) is \(k\)-colourable if and only if the optimal solution to the associated instance of Min-\(k\)-Partition yields a value of 0. This is due to the fact that a solution to the \(k\)-Colouring problem on \(G\) induces \(k\) independent sets in \(G\). In our instance of Min-\(k\)-Partition, these \(k\) independent sets translate to \(k\) subsets of the set of vertices \(V\) of the graph, such that no two vertices in a given subset are adjacent. Thus, the endpoints of any edge in \(G\) must be in two different sets. This gives us the minimum possible value of the sum of the weights of the edges with endpoints in the same partition: 0. Thus, we see that independent sets in \(G\) give us the optimum solution to the constructed instance of Min-\(k\)-Partition. On the other hand, if the solution to our Min-\(k\)-Partition instance has a value of 0, then all the edges in \(G\) have endpoints in disjoint sets of the partition. This follows from the fact that the weight of each edge is 1. Thus, the vertices in a given subset are pairwise disjoint, forming an independent set. Colouring the \(k\) independent sets in this partition of \(V\) using \(k\) different colours gives us the desired solution to the Graph \(k\)-Colourability problem.
A corollary of the above theorem follows immediately:
Corollary 1: _For every \(q\in\mathbb{Q}_{+}\), the decision version of PA is NP-complete._
### Further results on approximability
The first theorem tells us that unless P = NP, there exists no polynomial-time algorithm to solve PA. We turn our attention to the more practical aspects of tackling this problem. Instead of trying to obtain optimal solutions, we look at how well we can approximate the optimal solution to the problem in polynomial time. Since we are dealing with optimization problems (as opposed to decision problems), we must be careful when giving a polynomial-time reduction from one optimization problem to the other. While the decision problems corresponding to most NP-hard optimization problems admit a standard polynomial-time many-one (or Karp) reduction to each other, the optimization problems do not share the same approximability properties. This is due to the fact that many-one reductions do not always preserve the measure function and, even if they do, they seldom preserve the quality of the solutions. This is reflected in the approximability of the equivalent Min-\(k\)-Partition and Max-\(k\)-Cut problems as well. We know that while Max-\(k\)-Cut is APX-complete, Min-\(k\)-Partition is not in APX (see Appendix 0.B of [3]).
Thus, we appeal to Definition 9 to introduce a stronger kind of reducibility, namely AP-reducibility. Although different types of approximation-preserving reducibilities exist in the literature, AP-reducibility is sufficiently general to incorporate the properties of almost all such reducibilities, while also establishing a _linear relation_ between performance ratios. Approximation-preserving reductions induce an order on optimization problems based on their "difficulty" of
being approximated. Approximation-preserving reductions are also an essential tool for proving non-approximability results.
Next, we state four lemmas, which will help us prove our second theorem.
Lemma 2 ([3]): _If \(\mathrm{P}\neq\mathrm{NP}\), \(\mathrm{exp}\)-\(\mathrm{APX}\subset\mathrm{NPO}\)._
Proof: While this fact is stated in [3], we give a detailed proof (with an example mentioned in [3]) as follows:
Let \(P=(I,\,SOL,\,m,\,\mathrm{goal})\) be a problem in \(\mathrm{NPO}\). Since \(m\) is computable in polynomial time, there exist \(h\) and \(k\) such that for any \(x\in I\) with \(|x|=n\) and for any \(y\in SOL(x)\), \(m(x,y)\leq h2^{n^{k}}\). This is because the range of possible values of \(m(x,y)\) has an upper bound given by \(M=2^{p(|x|)}\) for some polynomial \(p\), which is again due to the properties of \(\mathrm{NPO}\) problems which state that the length \(|y|\) of any solution \(y\in SOL(x)\) is bounded by \(q(|x|)\) for some polynomial \(q\), and \(m\) is computable in polynomial time (see Definition 9). This implies that any feasible solution has a performance ratio bounded by \(h2^{n^{k}}\). Indeed, the polynomial bound on the computation time of the measure function for all \(\mathrm{NPO}\) problems implies that they are \(h2^{n^{k}}\)-approximable for some \(h\) and \(k\). This seems to imply that the classes \(\mathrm{exp}\)-\(\mathrm{APX}\) and \(\mathrm{NPO}\) are the same. However, we note that there exist several problems in \(\mathrm{NPO}\) for which it is hard even to decide whether any feasible solution exists, (and thus to find such a feasible solution) unless \(\mathrm{P}=\mathrm{NP}\). An example of such a problem is the Minimum \(\{0,1\}\)-Linear Programming problem, which belongs to \(\mathrm{NPO}\). The problem instance consists of a matrix \(A\in\mathbb{Z}^{m\times n}\) and vectors \(b\in\mathbb{Z}^{m}\), \(w\in\mathbb{N}^{n}\). The problem asks for a solution \(x\in\{0,1\}^{n}\) such that \(Ax\geq b\), and the measure function \(\sum\limits_{i=1}^{n}w_{i}x_{i}\) is minimized. Given an integer matrix \(A\) and an integer vector \(b\), deciding whether a binary vector \(x\) exists such that \(Ax\geq b\) is \(\mathrm{NP}\)-hard, as the Satisfiability problem is polynomial-time reducible to this decision problem (see Example 1.10 in Section 1.3.1, Chapter 1 of [3]). This implies that, if \(\mathrm{P}\neq\mathrm{NP}\), then Minimum \(\{0,1\}\)-Linear Programming does not belong to \(\mathrm{exp}\)-\(\mathrm{APX}\).
Lemma 3: _The polynomial-time reduction from Min-\(k\)-Partition to \(\mathrm{PA}\) given in Theorem 1.1(ii) is an AP-reduction with \(\alpha=1\)._
Proof: Consider an instance \(G=(V,E,\omega)\) of Min-\(k\)-Partition, we need to determine the function \(f\) that maps it to an instance of \(\mathrm{PA}\). A general instance of \(\mathrm{PA}\) is determined by the tuple \(\big{(}\mathcal{A},U,\{A(k)\}_{k=1}^{K},\boldsymbol{\beta},\Psi\big{)}\). Recall the definition of \(\mathrm{AP}\)-reduction from Definition 9. From our reduction, we get that \(f:(V,E,\omega)\mapsto\Big{(}\{1,\ldots,|V|,\ldots,M\},\,V,\,\{i\}_{i=1}^{|V|},\,\boldsymbol{\beta},\,\{1,\ldots,k\}\Big{)}\), where \(M\) is an arbitrary constant such that \(M\gg|V|\), and the \(im\)-th element of the matrix \(\boldsymbol{\beta}\) is \(\beta_{im}=1\) if \(m=i\), \(\sqrt{\frac{\omega((i,m))}{2}}\) if \(m\neq i\) and \(m\leq K\), and \(0\) otherwise. Moreover, if \(h\) is the feasible assignment that forms the solution of our constructed instance of
PA, then the partition of vertices \(\mathcal{V}=\{V_{1},V_{2},\ldots,V_{k}\}\) that forms the solution of our original Min-\(k\)-Partition instance is defined as \(V_{t}=\{i\,|\,h(i)=t\}\) where \(i\in V\). We set \(g:(G,h)\mapsto\mathcal{V}\), where \(V_{t}=\{k\,|\,k\in U\text{ and }h(k)=t\}\quad\forall\;1\leq t\leq\tau\). Notice that \(f\) and \(g\) do not depend on the performance ratio.
Finally, for any instance \(G\) of Min-\(k\)-Partition, with \(S=f(G)\) as the constructed instance of PA, \(h\) as the solution of \(S\), and \(\mathcal{V}=g(G,\mathcal{V})\) as the solution to the original instance \(G\) of Min-\(k\)-Partition, we see that \(m_{\text{M}k\text{P}}(G,\mathcal{V})=m_{\text{PA}}(S,h)\) (where M\(k\)P is short for Min-\(k\)-Partition), by the nature of the constructed instances and solutions. We conclude that \(f\) and \(g\) satisfy the AP-condition with \(\alpha=1\), and thus the polynomial-time reduction described above is an AP-reduction.
Lemma 4: PA \(\leq_{AP}\) Min-\(k\)-Partition _with \(\alpha=1\)._
Proof: Since we've proved that the decision version of PA is NP-complete, it follows from the definition of NP-completeness that PA and Min-\(k\)-Partition are equivalently hard. We first give an explicit polynomial-time reduction from PA to Min-\(k\)-Partition, and then prove that it is in fact an AP-reduction.
Consider an arbitrary instance of PA, specified by the cell-free massive MIMO system \(S=(\mathcal{A},U,A(k),\boldsymbol{\beta},\Psi)\) with \(|\mathcal{A}|=M,\,|U|=K,\,|\Psi|=\tau\). We construct an instance of Min-\(k\)-Partition as follows: Define a _complete_ graph \(G=(V,E,\omega)\) (where \(\omega:E\rightarrow\mathbb{R}_{+}\) is a function that maps the edges in our graph to positive real values), by setting \(V=U\). By construction, we have \(|V|=K\) and \(|E|=\frac{K(K+1)}{2}\). For any edge \((k,k^{\prime})\in E\), where \(k,k^{\prime}\in V\), we set
\[\omega((k,k^{\prime}))=\sum_{m\in A(k)}\left(\beta_{k^{\prime},m}/\beta_{k,m} \right)^{2}+\sum_{m^{\prime}\in A(k^{\prime})}\left(\beta_{k,m^{\prime}}/ \beta_{k^{\prime},m^{\prime}}\right)^{2}.\]
The weight function \(\omega\) defined above satisfies a symmetric property, i.e., \(\omega((k,k^{\prime}))=\omega((k^{\prime},k))\). It is important to note that we set the value of \(k\) that is referred to in the title of the Min-\(k\)-Partition problem to \(\tau\). So we have constructed an instance of a Min-\(\tau\)-Partition problem.
(\(\Rightarrow\)) If we have a solution to an instance of PA, then we have minimized the optimization function given by Equation (4), which we state again:
\[\sum_{t=1}^{\tau}\sum_{k,k^{\prime}\in V_{t}}\left(\sum_{m\in A(k)}\left(\beta _{k^{\prime}m}/\beta_{km}\right)^{2}+\sum_{m^{\prime}\in A(k^{\prime})}\left( \beta_{km^{\prime}}/\beta_{k^{\prime}m^{\prime}}\right)^{2}\,\right)\]
But notice that the objective function to be minimized in a general instance of the Min-\(k\)-Partition problem is \(\sum\limits_{t=1}^{k}\sum\limits_{i,j\in V_{t}}\omega((i,j))\). By the construction of our instance of the Min-\(k\)-Partition problem, the users \(U\) form the vertices \(V\) of
the graph and the cardinality \(\tau\) of the set of pilots \(\Psi\) is \(k\) (which is in the title of the problem, denoting the number of subsets of the vertices). Replacing \(\tau\) by \(k\) and the user indices \(k,k^{\prime}\) by the arbitrary indices \(i,j\) to denote the vertices of our graph, we get that the function minimized by solving the PA instance is
\[\sum\limits_{t=1}^{k}\sum\limits_{i,j\in V_{t}}\Biggl{(}\sum\limits_{m\in A(i)} \left(\beta_{jm}/\beta_{im}\right)^{2}+\sum\limits_{m^{\prime}\in A(j)}\left( \beta_{im^{\prime}}/\beta_{jm^{\prime}}\right)^{2}\Biggl{)}=\sum\limits_{t=1}^{ k}\sum\limits_{i,j\in V_{t}}\omega((i,j))\]
Therefore, we have a solution to our instance of the Min-\(\tau\)-Partition problem.
(\(\Leftarrow\)) By the above argument, it is clear that if we have a solution to our constructed instance of the Min-\(\tau\)-Partition problem, we also have a solution to the corresponding PA instance by a reverse change of indices, as seen in Theorem 1.(ii).
Hence, we have shown an explicit polynomial-time reduction from PA to Min-\(k\)-Partition. What remains to be verified is that this is an AP-reduction. To see this, consider an instance of PA given by \(S=\left(\mathcal{A},U,\{A(k)\}_{k=1}^{K},\boldsymbol{\beta},\Psi\right)\). A general instance of Min-\(k\)-Partition is given by an undirected graph \(G=\left(V,E,\omega\right)\). In the above reduction, we have that \(G\) is a complete graph with \(V=U\) and \(\omega((i,j))=\sum\limits_{m\in A(i)}\left(\beta_{j,m}/\beta_{i,m}\right)^{2} +\sum\limits_{m^{\prime}\in A(j)}\left(\beta_{i,m^{\prime}}/\beta_{j,m^{ \prime}}\right)^{2}\quad\forall\,i,j\in V\). Recall Definition 9. Thus we have \(f:\left(\mathcal{A},U,\{A(k)\}_{k=1}^{K},\boldsymbol{\beta},\Psi\right)\mapsto \left(U,E(K_{U}),w\right)\) where \(E(K_{U})\) denotes the set of edges of the complete graph on the vertices denoted by \(U\), and \(\omega((k,k^{\prime}))=\sum\limits_{m\in A(k)}\left(\beta_{k^{\prime},m}/ \beta_{k,m}\right)^{2}+\sum\limits_{m^{\prime}\in A(k^{\prime})}\left(\beta_{ k,m^{\prime}}/\beta_{k^{\prime},m^{\prime}}\right)^{2}\). Further, if we have obtained the partition \(\mathcal{V}=\{V_{1},V_{2},\ldots,V_{k}\}\) of the vertex set \(V\) in our constructed instance of Min-\(k\)-Partition, we get that the feasible assignment \(h\) which is the solution to the corresponding PA instance is defined as \(h(k)=t\) such that \(k\in V_{t}\quad\forall\,k\in U\). We set \(g:(S,\mathcal{V})\mapsto h\), where \(h(i)=t\) such that \(i\in V_{t}\in\mathcal{V}\). Notice that \(f\) and \(g\) do not depend on the performance ratio.
Finally, for any instance \(S\) of PA, with \(G=f(S)\) as the constructed instance of Min-\(k\)-Partition, \(\mathcal{V}\) as the solution of \(G\), and \(h=g(S,\mathcal{V})\) as the solution to the original instance \(S\) of PA, we see that \(m_{\text{PA}}(S,h)=m_{\text{M}k\text{P}}(G,\mathcal{V})\) (where M\(k\)P is short for Min-\(k\)-Partition), by the nature of the constructed instances and solutions. We conclude that \(f\) and \(g\) satisfy the AP-condition with \(\alpha=1\), and thus the polynomial-time reduction described above is an AP-reduction.
Observe that an AP-reduction from PA to Min-\(k\)-Partition is important from a practical viewpoint to prove positive approximation results for PA. In particular, it allows us to translate the performance ratios that exist for the approximation of special cases of Min-\(k\)-Partition (e.g., when \(k\) is 2 or 3) into performance ratios for the corresponding PA problem [17]. On the other hand, an AP-reduction from Min-\(k\)-Partition to PA is necessary to prove negative
approximability results for PA. We recall a few results on the approximability of Min-\(k\)-Partition stated in [8, 17] in the form of a lemma.
Lemma 5: _Assuming \(\textsc{P}\neq\textsc{NP}\), the following statements hold with respect to the computational complexity of Min-\(k\)-Partition:_
1. _The problem is not in_ APX_._ _[_21_]___
2. _For_ \(k\geq 3\)_, it is NP-hard to approximate Min-\(k\)-Partition _within_ \(O(|E|)\)_, even when restricting the instances to graphs with_ \(|E|=\Omega(|V|^{2-\epsilon})\)_, for a fixed_ \(\epsilon\)_,_ \(0<\epsilon<1\)_._ _[_17_]___
3. _No polynomial time algorithm can achieve a better performance ratio than 1.058 in the case of_ \(k=2\)_._ _[_15_]___
4. _In case of_ \(k=2\)_, a polynomial time algorithm with a performance guarantee of_ \(\log|V|\) _is known._ _[_13_]___
5. _In case of_ \(k=3\)_, a polynomial time algorithm with a performance guarantee of_ \(\epsilon|V|^{2}\) _for any_ \(\epsilon>0\) _is known._ _[_17_]___
Using Lemmas 2, 3, 4 and 5, we are now ready to prove the following theorem on the approximability of PA, inspired by [8, Section 3.2.3].
Theorem 2.2: _Assuming \(\textsc{P}\neq\textsc{NP}\), The following statements hold true for the Pilot Assignment problem:_
1. \(\textsc{PA}\) _is not in_ APX_, however it is in_ exp-APX_._
2. _An approximation of_ PA _within_ \(\mathcal{O}(K^{2})\) _for_ \(\tau\geq 3\) _is impossible in polynomial time._
3. _In the special case of_ \(\tau=2\)_, while no polynomial time algorithm can achieve a performance ratio better than 1.058, there exists an algorithm with a performance guarantee of_ \(\log|K|\) _in polynomial time._
4. _In the special case of_ \(\tau=3\)_, there exists a polynomial time algorithm with a performance guarantee of_ \(\epsilon|K|^{2}\) _for any_ \(\epsilon>0\)_._
Proof:
1. From Lemmas 1, 2 and 5.(i), the result follows immediately.
2. From Lemma 5.(ii), we have that Min-\(k\)-Partition cannot be approximated within \(\mathcal{O}(|E|)\) in polynomial time for \(k\geq 3\). Further, in PA, we typically deal with complete (or dense) graphs. So we have that \(|E|=\Omega(K^{2})\), where \(K\) is the number of users, or vertices in the graph. The number of pilots available, or the number of partitions required in the graph is \(\tau\). Hence, using Lemma 3, we get that unless \(\textsc{P}=\textsc{NP}\), it is impossible to approximate PA within \(\mathcal{O}(K^{2})\) in polynomial time for \(\tau\geq 3\).
3. The first part of this claim follows from Lemma 3 and Lemma 5.(iii), while the second part follows from Lemma 4 and Lemma 5.(iv).
4. As above, this follows easily from Lemma 4 and Lemma 5.(v).
## 4 Conclusion
In this paper, we studied the inherent hardness of the pilot assignment problem and found provable guarantees on the quality of achievable solutions to the problem. We defined the cell-free massive MIMO system mathematically, which is crucial in any theoretical study of this problem. We further proved that PA is strongly NP-hard, and at the same time, does not belong to the APX class of problems. The big picture here is that, to the best of our knowledge, our paper is the first to study PA from a theoretical computer science perspective, providing complexity results for the problem.
Recall that while the Max-\(k\)-Cut problem (see Section 1.3) has been used to develop the current state of the art for PA, our contribution here is to focus on its dual problem, the Min-\(k\)-Partition. Due to the varying nature of the approximability of these optimization problems, it is better to apply a heuristic for the Min-\(k\)-Partition problem to solve PA. This is due to the fact that the performance ratio of any heuristic/algorithm designed for Min-\(k\)-Partition can be translated directly for PA. There is extensive research on formulating and solving the Min-\(k\)-Partition problem using Linear Programming (LP) and Semi-definite Programming (SDP) relaxation approaches [6, 8, 9, 10, 14, 23]. This remains an active area of research. Further research in this area seems to be a step in the right direction towards finding better optimal solutions for PA.
It is also interesting to note that there has been very little research on the parameterized complexity of the Min-\(k\)-Partition problem. This leads us to believe that the problem may not be FPT (fixed-parameter tractable). A practical question that arises here is if this problem admits an XP algorithm for some suitable parameter. Such an algorithm may yield a pilot assignment scheme with superior performance, depending on the parameter used in the algorithm.
#### 4.0.1 Acknowledgements.
The authors thank Prof. David Manlove for his feedback on drafts of this paper. Further, we gratefully acknowledge the support of Dr. Yusuf Sambo and Dr. Abdulrahman Al Ayidh in understanding the problem as well as its existing heuristics, during the early stages of this project. This work was done while S. Prusty was working on her Master's project at the School of Computing Science, University of Glasgow. S. Prusty would like to thank DST-INSPIRE, Government of India for their support via the Scholarship for Higher Education (SHE) programme.
## Appendix 0.A Appendix for Section 2.3
### Definitions of the Decision and Optimization versions of PA
Definition 14 (Decision version of PA): The decision version of PA is defined by a triple \((I_{\mathrm{PA}},q,SOL_{\mathrm{PA}})\), where \(I_{\mathrm{PA}}\) is the set of all cell-free massive
MIMO systems \(S\), \(q\in\mathbb{Q}_{+}\) and \(SOL_{\text{PA}}:I_{\text{PA}}\rightarrow\{0,1\}\) is a function that assigns to each pair \((S,q)\) either the value 0 or 1. The assignment is done based on whether an instance \(S\) has a value of at most \(q\) for Equation (4). The problem asks if there exists an instance \(S\in I_{\text{PA}}\) such that given a number \(q\), \(SOL_{\text{PA}}(S,q)=1\)._
Definition 15 (Optimization version of PA): The minimization problem \(\text{PA}\) is characterized by the tuple \((I_{\text{PA}},SOL_{\text{PA}},m_{\text{PA}},\text{\sc min})\), where \(I_{\text{PA}}\) is the set of all cell-free massive MIMO systems \(S\) (instances of PA), \(SOL_{\text{PA}}\) is a function that assigns every instance \(S\in I_{\text{PA}}\) to its set of feasible pilot assignments \(f\), and \(m_{\text{PA}}\) is a measure function that assigns to each pair \((S,f)\), where \(S\in I_{\text{PA}}\) and \(f\in SOL_{\text{PA}}(S)\), the value of Equation (4). The problem asks for a given instance \(S\), a feasible assignment \(f\in SOL_{\text{PA}}(S)\) such that \(m_{\text{PA}}(S,f)=\min_{f^{\prime}\in SOL_{\text{PA}}(S)}m_{\text{PA}}(S,f^{ \prime})\).
|
2305.01494 | Colimits in 2-dimensional slices | We generalize to dimension 2 the well-known fact that a colimit in a
1-dimensional slice is precisely the map from the colimit of the domains which
is induced by the universal property. We show two different approaches to this;
a more intuitive one, based on the reduction of the weighted 2-colimits to
oplax normal conical ones, and a more abstract one, based on an original
concept of colim-fibration and on an extension of the Grothendieck
construction. We find the need to consider lax slices, and prove results of
preservation, reflection and lifting of 2-colimits for the domain 2-functor
from a lax slice. The preservation result is shown by proving a general theorem
of $\F$-category theory, which states that a lax left adjoint preserves
appropriate colimits if the adjunction is strict on one side and is suitably
$\F$-categorical. Finally, we apply this theorem of preservation of 2-colimits
to the 2-functor of change of base along a split Grothendieck opfibration
between lax slices, after showing that it is such a left adjoint by laxifying
the proof that Conduch\'{e} functors are exponentiable. We conclude extending
the result of preservation of 2-colimits for the change of base 2-functor to
any finitely complete 2-category with a dense generator. | Luca Mesiti | 2023-05-02T15:16:11Z | http://arxiv.org/abs/2305.01494v1 | # Colimits in 2-dimensional slices
###### Abstract.
We generalize to dimension 2 the well-known fact that a colimit in a 1-dimensional slice is precisely the map from the colimit of the domains which is induced by the universal property. We show two different approaches to this; a more intuitive one, based on the reduction of the weighted 2-colimits to oplax normal conical ones, and a more abstract one, based on an original concept of colim-fibration and on an extension of the Grothendieck construction.
We find the need to consider lax slices, and prove results of preservation, reflection and lifting of 2-colimits for the domain 2-functor from a lax slice. The preservation result is shown by proving a general theorem of \(\mathcal{F}\)-category theory, which states that a lax left adjoint preserves appropriate colimits if the adjunction is strict on one side and is suitably \(\mathcal{F}\)-categorical.
Finally, we apply this theorem of preservation of 2-colimits to the 2-functor of change of base along a split Grothendieck opfibration between lax slices, after showing that it is such a left adjoint by laxifying the proof that Conduche functors are exponentiable. We conclude extending the result of preservation of 2-colimits for the change of base 2-functor to any finitely complete 2-category with a dense generator.
Key words and phrases:2-categories, slice, colimits, lax adjoints, F-categories, change of base 2020 Mathematics Subject Classification: 18N10, 18A30, 18A40, 18D30, 18D15
###### Contents
* 1 Introduction
* 2 Colimits in 2-dimensional slices
* 3 \(\mathcal{F}\)-categories and a lifting result for the domain 2-functor
* 4 Lax \(\mathcal{F}\)-adjoints and preservation of colimits
* 5 Change of base between lax slices
## 1. Introduction
It is well-known that, in dimension 1, a colimit in a slice is precisely the map from the colimit of the domains which is induced by the universal property of the colimit. This fact, together with the results of preservation, reflection and lifting of all colimits for the domain functor from a slice, gives a complete calculus of colimits in 1-dimensional slices (see Theorem 1.1). And such a calculus has been proven useful in myriads of applications, in particular in the context of locally cartesian closed categories or for general exponentiability of morphisms, in categorical logic, algebraic geometry and topos theory. Indeed, an exponentiable morphism \(f:E\to B\) in \(\mathcal{C}\), that is an exponentiable object in \(\mathcal{C}/_{B}\), can be characterized as a morphism which admits all pullbacks along it and is such that the change of base functor \(f^{*}\colon\mathcal{C}/_{B}\to\mathcal{C}/_{E}\) has a right adjoint. The latter condition implies, and by adjoint functor theorems is often implied by, preservation of all
colimits for \(f^{*}\). The calculus of colimits in 1-dimensional slices is what allows to apply the exponentiability of a morphism to colimits in the slice that come from colimits in the category \(\mathcal{C}\), i.e. the ones that we have in practice.
The main result of this paper is a generalization to dimension 2 of this fruitful 1-dimensional calculus, including results of preservation, reflection and lifting of 2-colimits for the domain 2-functor from a lax slice. Theorems on suitable change of base 2-functors between lax slices are presented as well. The lax slice is indeed the appropriate 2-dimensional slice to consider in order to achieve such generalization, as we justify with two different approaches.
These results, in combination with our [11], will then be applied in forthcoming papers on 2-dimensional subobject classifiers and elementary 2-toposes; a subject firstly introduced by Weber in [16] to generalize the corresponding 1-dimensional fundamental concepts.
The following theorem condenses the calculus of colimits in 1-dimensional slices.
**Theorem 1.1**.: _Let \(\mathcal{C}\) be a category and let \(M\in\mathcal{C}\). The domain functor \(\operatorname{dom}\colon\mathcal{C}\,/_{M}\to\mathcal{C}\) preserves, reflects and lifts uniquely all colimits \((\)and so it creates all colimits\()\)._
_Moreover, for every diagram \(D\colon\mathcal{A}\to\mathcal{C}\) with \(\mathcal{A}\) small that admits a colimit in \(\mathcal{C}\), every morphism \(q\colon\operatorname{colim}_{A}D(A)\to M\) in \(\mathcal{C}\) is the colimit of a diagram in \(\mathcal{C}\,/_{M}\). More precisely,_
\[\begin{array}{ccc}\operatorname{colim}_{A}D(A)&&D(A)\\ \downarrow^{q}&&=\,\operatorname{colim}_{A}&&\downarrow^{q\circ i_{A}}\end{array} \text{ in }\mathcal{C}\,/_{M}, \tag{1}\]
_where the \(i_{A}\colon D(A)\to\operatorname{colim}_{A}D(A)\) are the inclusions that form the universal cocone._
Notice that this theorem recovers the property we mentioned above, that a colimit in \(\mathcal{C}\,/_{M}\) is precisely the map from the colimit of the domains which is induced by the universal property. Indeed, half of this fact is captured by the preservation of colimits for \(\operatorname{dom}\colon\mathcal{C}\,/_{M}\to\mathcal{C}\), whereas the other half, that is harder to capture, is represented by equation (1). In dimension 1, the latter special property holds because a cocone on \(M\) is the same thing as a diagram in the slice over \(M\). But in dimension 2, we need weighted 2-colimits and then weighted 2-cocylinders rather than cocones. This makes it then harder to establish a bijection with diagrams in a 2-dimensional slice, as such diagrams still have a conical shape.
In this paper, we first focus on generalizing the special property of equation (1) to dimension 2 (Theorem 2.20), extracting from a now weighted 2-cocylinder a diagram in a 2-dimensional slice that works. We show two different approaches to this. The first approach (Construction 2.1) is more intuitive, based on the reduction of the weighted 2-colimits to essentially conical ones, namely oplax normal conical ones. We have described such reduction in detail in our [11] with new more elementary proofs, but the idea goes back to Street's [14]. Remember that conical colimits do not suffice anymore in dimension 2, basically because functors from \(\boldsymbol{1}\) to a category \(\mathcal{C}\) cannot capture the whole of \(\mathcal{C}\), but just the objects. The philosophy behind weighted 2-colimits is to capture the whole of \(\mathcal{C}\) with functors from every possible category (or actually just \(\boldsymbol{2}\)) to it. But another solution is given by considering functors from \(\boldsymbol{1}\) to \(\mathcal{C}\) together with natural transformations between them. This brings to oplax normal conical 2-colimits, that are as expressive as weighted 2-colimits, but offer substantial advantages in certain situations. Indeed there is sometimes the need, as in this paper, to use essentially conical shapes. The price to pay is to have (coherent) 2-cells inside the cones, but this is something that can often be handled. The
reduction of weighted \(2\)-colimits to oplax normal conical ones is allowed and regulated by what we call the \(2\)-\(\operatorname{\mathcal{S}\!\mathit{e}t}\)-enriched Grothendieck construction. Such construction is an extension of the usual Grothendieck construction that admits \(2\)-functors \(\operatorname{\mathcal{A}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
needed assumptions, it is always possible to start from a weighted 2-colimit and view it in this context, after reducing it to an oplax normal conical one.
Finally, we apply the general \(\mathcal{F}\)-categorical theorem of preservation of 2-colimits to the 2-functor of change of base along a split Grothendieck opfibration between lax slices (see Proposition 5.2). In dimension 1, the concept of change of base between slices is definitely helpful, and it is well known that the pullback perfectly realizes such a job. For \(\mathcal{C}\mathcal{A}\mathcal{T}\), given a functor \(\tau\colon\mathcal{E}\to\mathcal{B}\), it is still a good idea to consider the pullback 2-functor \(\tau^{*}\colon\mathcal{C}\mathcal{A}\mathcal{T}\left/\mathcal{B}\right. \to\mathcal{C}\mathcal{A}\mathcal{T}\left/\mathcal{E}\right.\) between strict slices. And it is well known that such change of base 2-functor has a right adjoint \(\tau_{*}\) (and automatically a right 2-adjoint) precisely when \(\tau\) is a Conduche functor (as the latter functors are the exponentiable morphisms in \(\mathcal{C}\mathcal{A}\mathcal{T}\)). This is proved in Conduche's [4], with the ideas already present in Giraud's [5]. So that when \(\tau\) is Conduche then \(\tau^{*}\) is nicely behaved, preserving all the weighted 2-colimits. However, in order to generalize the calculus of colimits in 1-dimensional slices to dimension 2, we find the need to consider lax slices. And it is then helpful to have a change of base 2-functor between lax slices of a finitely complete 2-category. We believe that the most natural way to achieve this is by calculating comma objects rather than pullbacks. This is also connected to the construction of the category of elements, as we have described in our [11], but also, in general, to the concept of 2-dimensional elementary topos (see Weber's [16]). Equivalently to calculating comma objects, we can take pullbacks along split Grothendieck opfibrations, that serve as a kind of fibrant replacement (see Proposition 5.1). Such a point of view is preferable for us since Grothendieck opfibrations in \(\mathcal{C}\mathcal{A}\mathcal{T}\) are always Conduche and we can generalize the ideas for finding a right adjoint to the pullback functor \(\tau^{*}\colon\mathcal{C}\mathcal{A}\mathcal{T}\left/\mathcal{B}\right. \to\mathcal{C}\mathcal{A}\mathcal{T}\left/\mathcal{E}\right.\) (from Conduche's [4]) to lax slices. Notice that considering lax slices we are fixing a direction and we then need to take opfibrations, while Conduche functors are now too unbiased.
We prove that \(\tau^{*}\colon\mathcal{C}\mathcal{A}\mathcal{T}\left/\mathcal{\text{\rm\!}} \mathcal{B}\right.\to\mathcal{C}\mathcal{A}\mathcal{T}\left/\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{ \text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm \!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{ \text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{ \text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{ \text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{ \text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm \!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{ \rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{ \text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}} \mathcal{\text{\rm\!}}\mathcal{\text{\rm\!}}\mathcal{\text{\rm\!
by \(\mathcal{F}\)-category theory, that we recall in Recall 3.3. We then show a result of lifting of \(2\)-colimits for \(\operatorname{dom}\colon\operatorname{\mathcal{E}}\!\left/{}_{\!\text{\rm lax }M}\to\operatorname{\mathcal{E}}\!\left(\text{Proposition \ref{prop:2-colimits}}\right)\right.\).
In Section 4, we prove a result of preservation of \(2\)-colimits for the domain \(2\)-functor from a lax slice (Theorem 4.13). This is shown by proving a general theorem of \(\mathcal{F}\)-category theory (Theorem 4.11), which states that a lax left adjoint (Recall 4.2) preserves appropriate colimits if the adjunction is strict on one side and is suitably \(\mathcal{F}\)-categorical.
In Section 5, we apply this theorem of preservation of \(2\)-colimits to the \(2\)-functor of change of base along a split Grothendieck opfibration between lax slices (Theorem 5.3), laxifying the proof that Conduche functors are exponentiable. We conclude extending such result to prestacks (Proposition 5.6) and then to any finitely complete \(2\)-category with a dense generator (Theorem 5.8).
## 2. Colimits in \(2\)-dimensional slices
We aim at generalizing to dimension \(2\) the well-known \(1\)-dimensional result that a colimit in a slice corresponds to the map from the colimit of the domains which is induced by the universal property. Half of such result will be captured by preservation of \(2\)-colimits for the domain \(2\)-functor from a lax slice, that we will address in Section 4. In this section, we focus on the other half, that is the generalization to dimension \(2\) of the special property of equation (1) (of Theorem 1.1). Namely, we want to prove that a morphism from a \(2\)-colimit to some \(M\) can be expressed as a \(2\)-colimit in a \(2\)-dimensional slice over \(M\).
We show two different approaches to this, that lead to the same result (compare Construction 2.1 with Theorem 2.20). The first one is more intuitive, based on the reduction of the weighted \(2\)-colimits to oplax normal conical ones. Such reduction is explored in detail in our [11] with new more elementary proofs and before that in Street's [14]. The second approach is more abstract, based on an original concept of _colim-fibration_ (Definition 2.3 in dimension \(1\) and Definition 2.14 in dimension \(2\)). This will offer a shorter and more elegant proof, in Theorem 2.20. Both the approaches show the need to consider lax slices. In the first one, for example, this corresponds to only being able to essentially conicalize weighted \(2\)-colimits, rather than to strictly conicalize them.
This section also contains a result of reflection of \(2\)-colimits for the domain \(2\)-functor \(\operatorname{dom}\colon\operatorname{\mathcal{E}}\!\left/{}_{\!\text{\rm lax }M}\to\operatorname{\mathcal{E}}\!\left(\text{\rm from a lax slice}\right.\) Indeed the concept of \(2\)_-colim-fibration_ involves reflecting (appropriate) \(2\)-colimits, together with being a \(2\)-\(\mathcal{S}\)_et_-fibration. The latter notion is described both in our [11] and in Lambert's [10] (with the name "discrete \(2\)-fibration").
We now begin exploring the first approach to the generalization to dimension \(2\) of equation (1) (of Theorem 1.1).
**Construction 2.1**.: Let \(\operatorname{\mathcal{E}}\!\left/{}_{\!\text{\rm lax }M}\to\operatorname{\mathcal{E}}\!\left(\text{\rm from a lax slice}\right.\) Indeed the concept of \(2\)_-colim-fibration_ involves reflecting (appropriate) \(2\)-colimits, together with being a \(2\)-\(\mathcal{S}\)_et_-fibration. The latter notion is described both in our [11] and in Lambert's [10] (with the name "discrete \(2\)-fibration").
We now begin exploring the first approach to the generalization to dimension \(2\) of equation (1) (of Theorem 1.1).
**Construction 2.1**.: Let \(\operatorname{\mathcal{E}}\!\left/{}_{\!\text{\rm lax }M}\right.\) be a \(2\)-category and let \(M\in\operatorname{\mathcal{E}}\!\left/{}_{\!\text{\rm lax }M}\right.\). Consider a \(2\)-diagram \(F\colon\operatorname{\mathcal{A}}\to\operatorname{\mathcal{E}}\!\left/{}_{\! \text{\rm lax }M}\right.\) with \(\operatorname{\mathcal{A}}\!\left/{}_{\!\text{\rm small}}\right.\) and a weight \(W\colon\operatorname{\mathcal{A}}\!\left/{}_{\!\text{\rm op}M}\to\operatorname {\mathcal{C}}\!\left/{}_{\!\text{\rm lax }M}\right.\) such that the colimit \(\operatorname{colim}^{W}F\) of \(F\) weighted by \(W\) exists in \(\operatorname{\mathcal{E}}\!\left/{}_{\!\text{\rm lax }M}\right.\) Take then a morphism \(q\colon\operatorname{colim}^{W}F\to M\), or equivalently the corresponding weighted \(2\)-cocylinder
\[\nu^{q}\colon W\Rightarrow\operatorname{\mathcal{E}}\!\left(F(-),M\right).\]
We would like to express \(q\) as a \(2\)-colimit in a \(2\)-dimensional slice of \(\operatorname{\mathcal{E}}\!\left/{}_{\!\text{\rm lax }M}\right.\). So we need to construct from \(\nu^{q}\) a \(2\)-diagram in a \(2\)-dimensional slice. In dimension \(1\), equation (1) (of Theorem 1.1) is based on the fact that a cocone on \(M\) coincides with a diagram in \(\operatorname{\mathcal{C}}\!\left/{}_{\!\text{\rm lax }M}\right.\) But here, in dimension \(2\), we have a weighted \(2\)-cocylinder \(\nu^{q}\) instead of a strict cocone, and thus it is not clear how to directly find a corresponding diagram in a slice. We notice that this is essentially a matter of selecting a cocone out of the bunch of cocones that
form the weighted \(2\)-cocylinder \(\nu^{q}\). And then we obtain a great help from the reduction of weighted \(2\)-colimits to oplax normal conical ones.
As described in our [11], while it is not possible to represent \(\nu^{q}\) with a selected strict cocone, it is possible to reduce \(\nu^{q}\) to an oplax normal \(2\)-cocone. Indeed
\[\operatorname{colim}^{W}F\cong\operatorname{oplax}^{\operatorname{n}}\operatorname {-colim}^{\Delta 1}(F\circ\mathcal{G}(W))\]
where \(\mathcal{G}(W):\,\int\!W\to\mathcal{A}\) is the \(2\)-\(\mathcal{S}\!\mathcal{E}\)-enriched Grothendieck construction of \(W\). And \(\nu^{q}\) corresponds to an oplax normal \(2\)-cocone
\[\lambda^{q}\colon\Delta 1\xRightarrow{}_{\operatorname{oplax}^{\operatorname{n}}} \operatorname{\mathds{E}}\left((F\circ\mathcal{G}(W))(-),M\right):\left(\int\! W\right)^{\operatorname{op}}\to\mathcal{C}\!\mathcal{A}\!\mathcal{T}.\]
We recall from our [11] and Street's [14] that the \(2\)-\(\mathcal{S}\!\mathcal{E}\)-enriched Grothendieck construction is a natural extension of the usual Grothendieck construction to admit \(2\)-presheaves with domain a \(2\)-category. An oplax normal natural transformation is an oplax one such that the structure \(2\)-cells on the morphisms of the kind \((f,\operatorname{id})\) in \(\left(\int\!W\right)^{\operatorname{op}}\) are identities. This is a particular case of more general transformations introduced in Gray's [6]. The normality condition is what encodes the strict naturality of weighted \(2\)-cocylinders.
It is now easy to check that an oplax normal \(2\)-cocone on \(M\) can be reorganized as a \(2\)-diagram in the lax slice \(\operatorname{\mathds{E}}/_{\!\!\operatorname{lax}}M\) on \(M\) (we will see the complete correspondence in Proposition 3.1), where a \(1\)-cell in the lax slice from \(E\xrightarrow{g}M\) to \(E^{\prime}\xrightarrow{g^{\prime}}M\) is a filled triangle
More precisely, we can reorganize \(\lambda^{q}\) as the \(2\)-diagram
In Theorem 2.20, we will prove that
\[\begin{array}{ccc}\operatorname{colim}^{W}F&=&\operatorname{oplax}^{ \operatorname{n}}\operatorname{-colim}^{\Delta 1}(F\circ\mathcal{G}(W))\\ \downarrow q&\downarrow q\\ M&&M\end{array}=\operatorname{oplax}^{\operatorname{n}}\operatorname{- colim}^{\Delta 1}L^{q}\]
in the lax slice \(\operatorname{\mathds{E}}/_{\!\!\operatorname{lax}}M\). Of course, one could prove this directly, but our proof will be shorter and more abstract, in Theorem 2.20, based on the _colim-fibrations_ point of view (that is, the second approach named above).
**Remark 2.2**.: Despite weighted \(2\)-colimits cannot be conicalized, we can almost conicalize them reducing them to oplax normal conical \(2\)-colimits. The price to pay is to have \(2\)-cells inside the cocones. And this then translates as the need to consider lax slices in order to generalize the \(1\)-dimensional Theorem 1.1 to dimension \(2\).
Such need is further justified by the second approach (see Remark 2.19), that we now present. The idea is to capture Theorem 1.1 from a more abstract point of view, in a
way that resembles the property of being a discrete fibration. We will then proceed to generalize such approach to dimension 2, arriving to Theorem 2.20.
The following definition does not seem to appear in the literature.
**Definition 2.3**.: A functor \(p\colon\mathcal{S}\to\mathcal{C}\) is a _colim-fibration_ if for every object \(S\in\mathcal{S}\) and every universal cocone \(\mu\) that exhibits \(p(S)\) as the colimit of some diagram \(D\colon\mathcal{A}\to\mathcal{C}\) with \(\mathcal{A}\) small, there exists a unique pair \((\overline{D},\overline{\mu})\) with \(\overline{D}\colon\mathcal{A}\to\mathcal{S}\) a diagram and \(\overline{\mu}\) a universal cocone that exhibits \(S=\operatorname{colim}\overline{D}\) such that \(p\circ\overline{D}=D\) and \(p\circ\overline{\mu}=\mu\).
(2)
**Remark 2.4**.: This is actually stronger than the property written in equation (1) of Theorem 1.1, but it will be clear after Proposition 2.10 that \(\operatorname{dom}\colon\mathcal{C}\,/_{M}\to\mathcal{C}\) is also a colim-fibration. The following propositions shed more light on what it means to be a colim-fibration.
**Proposition 2.5**.: _Every colim-fibration is a discrete fibration._
Proof.: Let \(p\colon\mathcal{S}\to\mathcal{C}\) be a colim-fibration. We firstly show that only identities can be over identities with respect to \(p\). So suppose \(v\colon S^{\prime}\to S\) is a morphism in \(\mathcal{S}\) such that \(p(v)=\operatorname{id}_{p(S)}\). We have that \(p(S)\) is trivially the colimit of the diagram \(D\colon\mathcal{Z}\to\mathcal{C}\) given by the arrow \(\operatorname{id}_{p(S)}\), with universal cocone given by just identities. But then both the arrows \(v\) and \(\operatorname{id}_{S}\) give a diagram \(\overline{D}\colon\mathcal{Z}\to\mathcal{S}\) with a universal cocone that exhibits \(S=\operatorname{colim}\overline{D}\) such that it is over the universal cocone given by the identities of \(p(S)\). And we conclude that \(v=\operatorname{id}_{S}\).
Take now \(S\in\mathcal{S}\) and \(u\colon C\to p(S)\) a morphism in \(\mathcal{C}\). We want to show that there is a unique lifting of \(u\) to \(S\). Consider then the diagram \(D\colon\mathcal{Z}\to\mathcal{C}\) given by the arrow \(u\) in \(\mathcal{C}\). Then the colimit of \(D\) exists trivially and is \(p(S)\), with universal cocone
(2)
As \(p\) is a colim-fibration, there exist a unique diagram \(\overline{D}\colon\mathcal{Z}\to\mathcal{S}\) and a unique universal cocone \(\overline{\mu}\) that exhibits \(S=\operatorname{colim}\overline{D}\) with \(p\circ\overline{D}=D\) and \(p\circ\overline{\mu}=\mu\). But then we need to have \(\overline{D}(1)=S\) and \(\overline{\mu}_{1}=\operatorname{id}_{S}\) by the argument above, whence \(\overline{D}\) is the unique lifting of \(u\) to \(S\).
**Corollary 2.6**.: _Let \(p\colon\mathcal{S}\to\mathcal{C}\) be a functor. The following are equivalent:_
1. \(p\) _is a colim-fibration;_
2. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
3. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
4. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
5. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
6. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
7. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
8. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
9. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
10. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
11. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
12. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
13. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
14. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
15. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a colim-fibration;_
16. _for every object_ \(S\in\mathcal{S}\) _and every universal cocone_ \(\mu\) _that exhibits_ \(p(S)\) _as the colimit of some diagram_ \(D\colon\mathcal{Z}\to\mathcal{C}\) _with_ \(\mathcal{A}\) _small, there exists a unique pair_ \((\overline{D},\overline{\mu})\) _with_ \(p(S)\) _such that_ \(p(S)\) _is a col
\(\overline{D}\colon\mathcal{A}\to\mathcal{S}\) a diagram and a \(\overline{\mu}\) a cocone for \(\overline{D}\) on \(S\) such that \(p\circ\overline{D}=D\) and \(p\circ\overline{\mu}=\mu\); moreover \(\overline{\mu}\) is a universal cocone that exhibits \(S=\operatorname{colim}\overline{D}\)._
**Remark 2.7**.: Corollary 2.6 shows that, for a colim-fibration, the liftings \(\overline{\mu}\) of universal cocones \(\mu\) are unique as mere cocone over \(\mu\) on the starting \(S\in\mathcal{S}\).
We notice that the definition of creating colimits (see for example Adamek, Herrlich and Strecker's [1]) and condition \((ii)\) of Corollary 2.6 for being a colim-fibration are actually pretty similar, but somehow dual to each other. Indeed, looking at the diagram in equation (2), creation of colimits starts from a diagram \(\overline{D}\) and produces a colimit \(S\) for it, while being a colim-fibration starts from some \(S\) and produces a diagram \(\overline{D}\) with colimit \(S\).
To further clarify the connection between these two, we recall the following proposition from Adamek, Herrlich and Strecker's [1] (Proposition 13.34).
**Proposition 2.8**.: _For a functor \(F\), the following are equivalent:_
1. \(F\) _preserves and lifts_ \([\)_uniquely_\(]\) _all the colimits;_
2. \(F\) _preserves and detects all the colimits, and moreover it is a_ \([\)_discrete_\(]\) _iso-fibration._
**Remark 2.9**.: By Proposition 2.5, a colim-fibration is always a discrete fibration and so always a discrete iso-fibration. But we still have to clarify the connection between being a colim-fibration and reflecting colimits.
**Proposition 2.10**.: _Let \(p\colon\mathcal{S}\to\mathcal{C}\) be a functor. The following are equivalent:_
1. \(p\) _is a colim-fibration;_
2. \(p\) _is a discrete fibration that reflects all the colimits._
Proof.: We prove "\((ii)\Rightarrow(i)\)". Take \(S\in\mathcal{S}\) and a universal cocone \(\mu\) that exhibits \(p(S)\) as the colimit of some diagram \(D\colon\mathcal{A}\to\mathcal{C}\) with \(\mathcal{A}\) small. Since \(p\) is a discrete fibration, there exists a unique pair \((\overline{D},\overline{\mu})\) with \(\overline{D}\colon\mathcal{A}\to\mathcal{S}\) a diagram and \(\overline{\mu}\) a cocone for \(\overline{D}\) on \(S\) such that \(p\circ\overline{D}=D\) and \(p\circ\overline{\mu}=\mu\). Indeed, for every \(A\in\mathcal{A}\), we can define \(\overline{D}(A)\) and \(\overline{\mu}_{A}\) by taking the unique lifting of \(\mu_{A}\) to \(S\), and, for every \(f\colon A\to B\) in \(\mathcal{A}\), define \(\overline{D}(f)\) to be the unique lifting of \(D(f)\) to \(\overline{D}(B)\), whose domain needs to be \(\overline{D}(A)\) since any discrete fibration is split. Moreover \(\overline{\mu}\) needs to be universal since \(p\) reflects all the colimits and \(p\circ\overline{\mu}=\mu\) is universal.
We now prove "\((i)\Rightarrow(ii)\)". So let \(p\) be a colim-fibration. After Proposition 2.5, we just need to prove that \(p\) reflects all the colimits. Take a diagram \(H\colon\mathcal{A}\to\mathcal{S}\) with \(\mathcal{A}\) small and a cocone \(\zeta\) for \(H\) on some object \(S\in\mathcal{S}\); assume then that \(p\circ\zeta\) is a universal cocone, exhibiting \(p(S)=\operatorname{colim}\left(p\circ H\right)\). By Corollary 2.6, we know that there exist a unique pair \((\overline{p\circ H},\overline{p\circ\zeta})\) with \(\overline{p\circ H}\colon\mathcal{A}\to\mathcal{S}\) a diagram and \(\overline{p\circ\zeta}\) a cocone for \(\overline{p\circ H}\) on \(S\) such that \(p\circ\overline{p\circ H}=p\circ H\) and \(\underline{p\circ\overline{p\circ\zeta}}=p\circ\zeta\), and that moreover \(\overline{p\circ\zeta}\) is a universal cocone that exhibits \(S=\operatorname{colim}\overline{p\circ H}\). But then we need to have that \(\overline{p\circ H}=H\) and \(\overline{p\circ\zeta}=\zeta\), whence we conclude.
**Corollary 2.11**.: _Let \(p\colon\mathcal{S}\to\mathcal{C}\) be a colim-fibration that preserves and detects all the colimits. Then \(p\)_(_preserves and_) _creates all the colimits._
Proof.: Clear combining Proposition 2.8 and Proposition 2.10, since creating all the colimits is equivalent to lifting uniquely and reflecting all the colimits (see for example Adamek, Herrlich and Strecker's [1]).
**Remark 2.12**.: We can now rewrite Theorem 1.1 by saying that \(\operatorname{dom}\colon\mathcal{C}\left/_{M}\to\mathcal{C}\right.\) is a colim-fibration that preserves and detects all the colimits. This is actually stronger than
Theorem 1.1, but we see that \(\operatorname{dom}\) does satisfy this as it can be expressed as the category of elements of the representable \(\operatorname{y}\left(M\right)\colon\operatorname{\mathcal{C}^{op}}\to \operatorname{\mathcal{S}\!\mathfrak{e}t}\) and is thus a discrete fibration. The explicit formula in equation (1) then comes from the explicit liftings of \(\operatorname{dom}\).
**Construction 2.13**.: At this point, we are ready to generalize the concept of colimfibration to dimension \(2\). As described in our [11], what we think most naturally generalizes discrete fibrations to dimension \(2\) are the \(2\)-\(\operatorname{\mathcal{S}\!\mathfrak{e}t}\)-fibrations. The latter are what gets classified by the \(2\)-\(\operatorname{\mathcal{S}\!\mathfrak{e}t}\)-Grothendieck construction. Precisely they are an extension of the usual Grothendieck fibrations that are, locally, discrete opfibrations. Hence they correspond to a locally discrete version of Hermida's \(2\)-fibrations; see Lambert's [10] and also Hermida's [7]. Being, locally, discrete opfibrations, they are able to uniquely lift \(2\)-cells to a fixed domain \(1\)-cell. Be careful, though, that it would now be much harder to directly generalize Definition 2.3 in a way that implies being a \(2\)-\(\operatorname{\mathcal{S}\!\mathfrak{e}t}\)-fibration. So we think it is most concise to just ask having a \(2\)-\(\operatorname{\mathcal{S}\!\mathfrak{e}t}\)-fibration.
We then need to use a \(2\)-categorical concept of cocone. While weighted \(2\)-cocylinders would be hard to handle, we notice that a \(2\)-\(\operatorname{\mathcal{S}\!\mathfrak{e}t}\)-fibration has the ability to lift \(\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \leftleftleftleftleftleftleftleftleftleftleftleftleftleftleftleftleftleft{ \leftleftleft{\leftleftleftleft{\leftleftleftleft{\leftleftleftleft{ \leftleftleftleftleft{\leftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleft{\leftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{\leftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleftleft {\leftleftleftleftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleftleftleftleft{ \leftleftleftleftleftleftleftleftleftleftleftleftleftleftleft{\leftleftleftleftleftleftleftleftleftleftleftleftleftleftleft{ \left
which coincides with \(\theta_{g,\beta}\), to \(\overline{\theta}_{(A,X)}\), and the cartesianity of \(\overline{\theta}_{(B,X^{\prime})}\). This argument also proves the \(2\)-dimensional property of the oplax naturality of \(\overline{\theta}\). It is straightforward to check that \(\overline{D}\) is a \(2\)-functor and that \(\overline{\theta}\) is oplax normal natural, using the cartesianity of the \(\overline{\theta}_{(A,X)}\)'s and the uniqueness of the liftings of a \(2\)-cell to a fixed domain \(1\)-cell (with arguments similar to the above one).
Clearly, the \(\overline{\theta}_{(A,X)}\)'s are not unique above the \(\theta_{(A,X)}\), but cartesian. Having fixed them, however, the rest of the oplax normal cocone \(\overline{\theta}\) is uniquely defined. It is also true that, given another pair \((\widetilde{D},\widetilde{\theta})\) that lifts \((D,\theta)\), the unique vertical morphisms \(\widetilde{D}(A,X)\to\overline{D}(A,X)\) that produce the factorization of the \(\widetilde{\theta}_{(A,X)}\)'s through the cartesian \(\overline{\theta}_{(A,X)}\)'s form a unique vertical \(2\)-natural transformation \(j\colon\widetilde{D}\to\overline{D}\) such that \(\widetilde{\theta}=(-\circ j)\circ\overline{\theta}\). This can be checked using the uniqueness of the liftings of a \(2\)-cell to a fixed domain \(1\)-cell and the cartesianity of the \(\overline{\theta}_{(A,X)}\)'s.
The following definition is original.
**Definition 2.14**.: A \(2\)-functor \(p\colon\mathcal{S}\to\mathcal{E}\) is a _\(2\)-colim-fibration_ if it is a cloven \(2\)-\(\mathcal{S}\)\(\mathcal{S}\)-fibration such that, for every \(S\in\mathcal{S}\), marking \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}\) with \(\mathcal{A}\) small, \(2\)-diagram \(D\colon\,\int\!W\to\mathcal{E}\) and universal oplax normal \(2\)-cocone
\[\theta\colon\Delta 1\xRightarrow{}_{\mathrm{oplax}^{n}}\mathcal{E}\left(D(-),p(S)\right)\]
that exhibits \(p(S)=\mathrm{oplax}^{n}\) - \(\mathrm{colim}^{\Delta 1}D\), the pair \((\overline{D},\overline{\theta})\) obtained by lifting \((D,\theta)\) through \(p\) to \(S\) as in Construction 2.13 exhibits
\[S=\mathrm{oplax}^{n}\) - \(\mathrm{colim}^{\Delta 1}\overline{D}.\]
**Remark 2.15**.: Remember that every weighted \(2\)-colimit can be reduced to an oplax normal conical one, so the property of being a \(2\)-colim-fibration can as well be applied to any universal weighted \(2\)-cocylinder
\[\mu\colon W\Rightarrow\mathcal{E}\left(F(-),p(S)\right)\]
for some \(2\)-diagram \(F\colon\mathcal{A}\to\mathcal{E}\), after reducing it to a universal oplax normal \(2\)-cocone.
We would now like to generalize Proposition 2.10 to dimension \(2\). We see, however, that a \(2\)-colim-fibration does not necessarily reflect all the (oplax normal conical) \(2\)-colimits, because the lifting \((\overline{D},\overline{\theta})\) of Construction 2.13 is not unique anymore. Indeed, if we start from an oplax normal \(2\)-cocone above (at the level of \(\mathcal{S}\)), project it down and lift, we do not find in general the starting oplax normal \(2\)-cocone. We almost find the same, however, if we start from an oplax normal \(2\)-cocone that we call _cartesian_, and now define. This will bring to Proposition 2.18.
**Definition 2.16**.: Let \(p\colon\mathcal{S}\to\mathcal{E}\) be a \(2\)-\(\mathcal{S}\)\(\mathcal{S}\)-fibration. Consider then \(S\in\mathcal{S}\), a marking \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}\) with \(\mathcal{A}\) small and a \(2\)-diagram \(H\colon\,\int\!W\to\mathcal{S}\). An oplax normal \(2\)-cocone
\[\zeta\colon\Delta 1\xRightarrow{}_{\mathrm{oplax}^{n}}\mathcal{S}\left(H(-),S\right)\]
is _cartesian_ if for every \((A,X)\in\int\!W\) the component \(\zeta_{(A,X)}\) (seen as a morphism in \(\mathcal{S}\)) is cartesian with respect to \(p\).
We say that \(p\)_reflects all the cartesian_ (oplax normal conical) _\(2\)-colimits_ if it reflects the universality of cartesian oplax normal \(2\)-cocones.
**Example 2.17**.: Let \(\mathcal{E}\) be a \(2\)-category and \(M\in\mathcal{E}\). The cartesian morphisms in \(\mathcal{E}\mathbin{/\!\!\!/_{\mathrm{ax}}}M\) with respect to \(\mathrm{dom}\colon\mathcal{E}\mathbin{/\!\!\!/_{\mathrm{ax}}}M\to\mathcal{E}\) are precisely the triangles
with the \(2\)-cell \(\gamma\) an isomorphism. So the cartesian oplax normal \(2\)-cocones in \(\mathcal{E}\mathbin{/\!\!\!/_{\mathrm{ax}}}M\) are the ones with components triangles filled with isomorphisms.
**Proposition 2.18**.: _Let \(p\colon\mathcal{S}\to\mathcal{E}\) be a cloven \(2\)-\(\mathcal{S}\!\!\!\!\mathcal{E}\)-fibration. The following are equivalent:_
1. \(p\) _is a_ \(2\)_-colim-fibration;_
2. \(p\) _reflects all the cartesian_ \(2\)_-colimits._
Proof.: We prove "\((ii)\Rightarrow(i)\)". In the notation of Definition 2.14, the oplax normal \(2\)-cocone \(\overline{\theta}\) is cartesian by Construction 2.13. Since \(p\) reflects all the cartesian colimits and \(p\circ\overline{\theta}=\theta\) is universal, then \(\overline{\theta}\) needs to be universal as well, exhibiting \(S=\mathrm{oplax}^{n}\operatorname{-colim}^{\Delta 1}\overline{D}\).
We now prove "\((i)\Rightarrow(ii)\)". So consider \(S\in\mathcal{S}\), a marking \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\!\!\!\!\mathcal{A}T\) with \(\mathcal{A}\) small, a \(2\)-diagram \(H\colon\,\int\!\!W\to\mathcal{S}\) and a cartesian oplax normal \(2\)-cocone
\[\zeta\colon\Delta 1\xrightarrow[\mathrm{oplax}^{n}]{}\mathcal{S}\left(H(-),S \right).\]
Assume that \(p\circ\zeta\) is universal, exhibiting \(p(S)=\mathrm{oplax}^{n}\operatorname{-colim}^{\Delta 1}(p\circ H)\). We prove that \(\zeta\) is universal as well. Consider the lifting \((\overline{p\circ H},\overline{p\circ\zeta})\) of \((p\circ H,p\circ\zeta)\) through \(p\) to \(S\), as in Construction 2.13. It is straightforward to check that, since cartesian liftings are unique up to a unique vertical isomorphism, there exists a \(2\)-natural isomorphism \(j\colon H\cong\overline{p\circ H}\) such that \(\zeta=(-\circ j)\circ\overline{p\circ\zeta}\) (see the last part of Construction 2.13). Since \(\overline{p\circ\zeta}\) is universal, as \(p\) is a \(2\)-colim-fibration, and \((-\circ j)\) is a \(2\)-natural isomorphism, we conclude that \(\zeta\) is universal.
**Remark 2.19**.: We have seen in Remark 2.12 that we can rephrase the \(1\)-dimensional Theorem 1.1 by saying that \(\mathrm{dom}\colon\mathcal{C}\mathbin{/\!\!\!/_{M}}\to\mathcal{C}\) is a colim-fibration that preserves and detects all the colimits. And this latter is actually stronger than Theorem 1.1, but true since \(\mathrm{dom}\colon\mathcal{C}\mathbin{/\!\!\!/_{M}}\to\mathcal{C}\) can be obtained as the category of elements of the representable \(\mathrm{y}\left(M\right)\colon\mathcal{C}^{\mathrm{op}}\to\mathcal{S}\!\!\! \!\mathcal{E}\).
As described in our [11], we believe the most natural categorification of the construction of the category of elements is given by the \(2\)-\(\mathcal{S}\!\!\!\!\mathcal{E}\)-enriched Grothendieck construction. So, to obtain a generalization of Theorem 1.1 (or better, the stronger colim-fibration result) to dimension \(2\), we consider the \(2\)-\(\mathcal{S}\!\!\!\mathcal{E}\)-enriched Grothendieck construction of a representable \(\mathrm{y}\left(M\right)\colon\mathcal{E}^{\mathrm{op}}\to\mathcal{C}\!\!\! \mathcal{A}T\) (given \(\mathcal{E}\) a \(2\)-category and \(M\in\mathcal{E}\)). This gives the domain functor from the lax slice \(\mathcal{E}\mathbin{/\!\!\!/_{\mathrm{ax}}}M\), further justifying Construction 2.1:
We now prove that \(\operatorname{dom}\colon\operatorname{\mathcal{E}}\left/{}_{\operatorname{\!lax}}M \to\operatorname{\mathcal{E}}\right.\) is a \(2\)-colim-fibration (Theorem 2.20). In particular, considering how liftings along \(\operatorname{dom}\) are calculated, this will also imply the conclusion of Construction 2.1 (first approach) from this abstract point of view. We will then address lifting (that is stronger than detection) of \(2\)-colimits in Proposition 3.5 and preservation of \(2\)-colimits in Section 4, establishing a full \(2\)-categorical generalization of Theorem 1.1.
**Theorem 2.20**.: _Let \(\operatorname{\mathcal{E}}\) be a \(2\)-category and \(M\in\operatorname{\mathcal{E}}\). Then the \(2\)-functor \(\operatorname{dom}\colon\operatorname{\mathcal{E}}\left/{}_{\operatorname{\!lax }}M\to\operatorname{\mathcal{E}}\right.\) is a \(2\)-colim-fibration. As a consequence, in the notation of Construction 2.1,_
\[\begin{array}{ccc}\operatorname{colim}^{W}F&=&\operatorname{oplax^{n}\,\text {-colim}^{\Delta 1}(F\circ\operatorname{\mathcal{G}}(W))}\\ \downarrow q&\downarrow q\\ M&&M\end{array}\ =\operatorname{oplax^{n}\,\text{-colim}^{\Delta 1}L^{q}}\]
_in the lax slice \(\operatorname{\mathcal{E}}\left/{}_{\operatorname{\!lax}}M\right.\). Here, \(L^{q}\) is the \(2\)-diagram in \(\operatorname{\mathcal{E}}\left/{}_{\operatorname{\!lax}}M\right.\) that corresponds to the oplax normal \(2\)-cocone \(\lambda^{q}\) on \(M\) associated to the weighted \(2\)-cocylinder on \(M\) that \(q\) represents._
Proof.: By Remark 2.19, we know that \(\operatorname{dom}\colon\operatorname{\mathcal{E}}\left/{}_{\operatorname{\!lax }}M\to\operatorname{\mathcal{E}}\right.\) can be obtained as the \(2\)-\(\operatorname{\mathcal{S}\!\mathcal{E}}\)-enriched Grothendieck construction of \(\operatorname{y}(M):\operatorname{\mathcal{E}}^{\operatorname{op}}\to \operatorname{\mathcal{C}\!\mathcal{A}\!\mathcal{T}}\). So \(\operatorname{dom}\) is a \(2\)-\(\operatorname{\mathcal{S}\!\mathcal{E}}\)-fibration with a canonical cleavage (the chosen cartesian lifting of a morphism \(f\) is \((f,\operatorname{id})\)).
We prove that the second part of the statement is a consequence of the first one. So assume we have already proved that \(\operatorname{dom}\) is a \(2\)-colim-fibration. Calling \(\theta\) the universal oplax normal \(2\)-cocone that exhibits \(C=\operatorname{oplax^{n}\,\text{-colim}^{\Delta 1}(F\circ\operatorname{ \mathcal{G}}(W))}\), we then obtain that the lifting of \((F\circ\operatorname{\mathcal{G}}(W)\,,\theta)\) through \(\operatorname{dom}\) to \(q\) (calculated as in Construction 2.13)
exhibits
\[q=\operatorname{oplax^{n}\,\text{-colim}^{\Delta 1}\overline{F\circ \operatorname{\mathcal{G}}(W)}}\]
in \(\operatorname{\mathcal{E}}\left/{}_{\operatorname{\!lax}}M\right.\). And we can calculate \(\overline{F\circ\operatorname{\mathcal{G}}(W)}\) and \(\overline{\theta}\) explicitly, looking at the action of \(\operatorname{y}(M):\operatorname{\mathcal{E}}^{\operatorname{op}}\to \operatorname{\mathcal{C}\!\mathcal{A}\!\mathcal{T}}\) on \(1\)-cells and \(2\)-cells, since \(\operatorname{dom}=\operatorname{\mathcal{G}}(\operatorname{y}(M))\). Given \((A,X)\in\int\!W\),
\[\overline{F\circ\operatorname{\mathcal{G}}(W)}(A,X)=\operatorname{y}(M)\,( \theta_{(A,X)})(q)=q\circ\theta_{(A,X)}=\lambda^{q}_{(A,X)}=L^{q}(A,X)\]
\[\overline{\theta}_{(A,X)}=\]
Given \((f,\alpha)\colon(A,X)\to(B,X^{\prime})\) in \(\int\!W\),
\[\overline{\theta}_{f,\alpha}=\theta_{f,\alpha}\colon(\theta_{(A,X)},\operatorname{ id})\to(\theta_{B,X^{\prime}}\circ F(f),\operatorname{y}(M)\,(\theta_{f,\alpha})_{q})\]
whence, since \(\operatorname{y}(M)\,(\theta_{f,\alpha})_{q}=q*\theta_{f,\alpha}=\lambda_{f, \alpha}^{q}\),
Given \(\delta\colon(f,\alpha)\to(g,\beta)\colon(A,X)\to(B,X^{\prime})\) in \(\int\!W\),
We now prove that \(\operatorname{dom}\colon\operatorname{\mathcal{L}}/_{\operatorname{lax}}M\to \operatorname{\mathcal{I}}\) is a \(2\)-colim-fibration. By Proposition 2.18, it suffices to prove that \(\operatorname{dom}\) reflects all the cartesian colimits. So take \(t\colon K\to M\), a marking \(W\colon\operatorname{\mathcal{A}^{op}}\to\operatorname{\mathcal{C}\!\!\!\! \mathcal{A}T}\) with \(\operatorname{\mathcal{A}}\) small and a \(2\)-diagram \(H\colon\int\!W\to\operatorname{\mathcal{I}}/_{\operatorname{lax}}M\). Consider then a cartesian oplax normal \(2\)-cocone
\[\zeta\colon\Delta 1\xrightarrow[\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{ \!{ \!{ \!{ }}}}}}}}}}}} \operatorname{}} \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\!{\operatorname{\!{ \!{\!{\!{\!{\!{{ \!{{ \,}}}}}}}} \
The component \(\zeta_{(A,X)}\) is indeed a triangle filled with an isomorphism, by Example 2.17, since \(\zeta\) is cartesian. Whence we need to take
It is straightforward to check that \(\Xi\) is a modification between oplax normal \(2\)-cocones. So \(\Xi\) induces a unique \(2\)-cell \(\gamma\colon t\to g\circ\widehat{\gamma}\) in \(\mathpzc{E}\) such that \((\gamma*-)*(\operatorname{dom}\circ\zeta)=\Xi\). We check that \(\sigma=(\gamma\circ-)\circ\zeta\) (as oplax normal natural transformations). It surely holds on object components by construction of \(\Xi\). Given a morphism \((f,\alpha)\) in \(\bigl{(}\int\!W\bigr{)}^{\operatorname{op}}\), it suffices to check that
\[\sigma_{f,\alpha}=\widehat{\gamma}*\zeta_{f,\alpha}\]
But this holds by construction of \(\widehat{\gamma}\).
We now show the uniqueness of \(\gamma\). So assume there is some \(\gamma^{\prime}\colon t\to g\) in \(\mathpzc{E}\mathbin{/\!\!\!/_{\operatorname{\!\!/\operatorname{\!\!/\operatorname{ \!\!/\operatorname{\!\!/\operatorname{\!\!/\operatorname{\!\!/\operatorname{\! \!/\operatorname{\!\!/\operatorname{\!\!/\!/\operatorname{\!\!/\!/\!\!/ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
## 3. \(\mathcal{F}\)-categories and a lifting result for the domain \(2\)-functor
A main result of this paper will be the complete generalization of the \(1\)-dimensional Theorem 1.1 to dimension \(2\). After Theorem 2.20, it remains to address preservation (Section 4) and lifting of colimits for \(\operatorname{dom}\colon\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}/_{\! \operatorname{lax}\,M}\to\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}\) (in this section). Remember that we indeed need to consider lax slices (see Remark 2.19).
In dimension \(1\), the fact that \(\operatorname{dom}\colon\operatorname{\mathcal{C}}/_{\!M}\to\operatorname{ \mathcal{C}}\) lifts all the colimits is based on the correspondence between the diagrams in \(\operatorname{\mathcal{C}}/_{\!M}\) and the cocones in \(\operatorname{\mathcal{C}}\) on \(M\) (see also the Introduction). More precisely, while the fact that \(\operatorname{dom}\) is a colim-fibration is based on the ability to produce a diagram in the slice over \(M\) from a cocone on \(M\), the lifting of colimits is based on the converse.
We have already seen in Construction 2.1 (first approach) that, in dimension \(2\), we can reorganize an oplax normal \(2\)-cocone on \(M\) as a \(2\)-diagram in the lax slice \(\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}/_{\!\operatorname{lax}\,M}\). Indeed this was the main idea, together with the reduction of the weighted \(2\)-colimits to oplax normal conical ones, of the first approach to the categorification of the equivalence between colimits in \(1\)-dimensional slices and maps from the colimit of the domains. As we see in the proof of Proposition 3.1, the second approach allows us to capture this reorganization process from a more abstract point of view. Indeed this is close to the first part of the proof of Theorem 2.20, where we proved the equivalence between the two approaches. However, not every \(2\)-diagram in \(\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}/_{\!\operatorname{lax}\,M}\) can produce an oplax normal \(2\)-cocone on \(M\), as the normality condition may fail.
In this section, we prove a generalization to dimension \(2\) of the bijective correspondence between cocones on \(M\) and diagrams in the slice over \(M\) (Proposition 3.1). We then justify this result via \(\operatorname{\mathcal{F}\mskip-2.0mu \mathcal{F}}\)-category theory, also called enhanced \(2\)-category theory and introduced in Lack and Shulman's [9]. We recall \(\operatorname{\mathcal{F}\mskip-2.0mu \mathcal{F}}\)-category theory in Recall 3.3. Finally, we show a result of lifting of \(2\)-colimits for \(\operatorname{dom}\colon\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}/_{\! \operatorname{lax}\,M}\to\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}\) in Proposition 3.5. Such result is not about all oplax normal \(2\)-colimits, but this makes sense from an \(\operatorname{\mathcal{F}\mskip-2.0mu \mathcal{F}}\)-categorical point of view.
**Proposition 3.1**.: _Let \(\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}\) be a \(2\)-category and \(M\in\operatorname{\mathcal{I}\mskip-2.0mu \mathcal{E}}\). Consider then a marking \(W\colon\operatorname{\mathcal{A}\mskip-2.0mu \mathcal{G}\mskip-2.
It is clear that the two constructions we have produced are inverses of each other.
**Remark 3.2**.: We believe Proposition 3.1 is best captured by \(\mathcal{F}\)-category theory, also called enhanced \(2\)-category theory, for which we take as main reference Lack and Shulman's [9]. We give a quick recall of \(\mathcal{F}\)-category theory in Recall 3.3, and rephrase Proposition 3.1 in Remark 3.4. \(\mathcal{F}\)-category theory will then be even more useful to us to prove the preservation of (appropriate) \(2\)-colimits for \(\operatorname{dom}\colon\operatorname{\mathcal{E}}/_{\operatorname{lax}} \to\operatorname{\mathcal{E}}\) in Section 4. We will indeed show that \(\operatorname{dom}\) preserves a big class of \(2\)-colimits (Theorem 4.13), despite not every \(2\)-colimit, and that this makes a lot of sense from an \(\mathcal{F}\)-categorical point of view.
**Recall 3.3**.: \(\mathcal{F}\) is the cartesian closed full subcategory of \(\operatorname{\mathcal{CAT}}^{2}\) (the category of arrows in \(\operatorname{\mathcal{CAT}}\)) determined by the functors which are injective on objects and fully faithful (i.e. full embeddings). It is possible to enrich over \(\mathcal{F}\), obtaining \(\mathcal{F}\)-category theory. An \(\mathcal{F}\)-category \(\mathcal{S}\) is then given by a collection of objects, a hom-category \(\mathcal{S}\left(X,Y\right)_{\tau}\) of tight morphisms and a second hom-category \(\mathcal{S}\left(X,Y\right)_{\lambda}\) of loose morphisms that give \(2\)-category structures (respectively) \(\mathcal{S}_{\tau}\) and \(\mathcal{S}_{\lambda}\) to \(\mathcal{S}\), together with an identity on objects, faithful and locally fully faithful \(2\)-functor \(J_{\mathcal{S}}\colon\mathcal{S}_{\tau}\to\mathcal{S}_{\lambda}\). An \(\mathcal{F}\)-functor \(F\colon\mathcal{S}\to\mathcal{T}\) is a \(2\)-functor \(F_{\lambda}\colon\mathcal{S}_{\lambda}\to\mathcal{T}_{\lambda}\) that restricts to a \(2\)-functor \(F_{\tau}\colon\mathcal{S}_{\tau}\to\mathcal{T}_{\tau}\) (forming a commutative square); this is equivalent to \(F_{\lambda}\) preserving tightness. And an \(\mathcal{F}\)-natural transformation is a \(2\)-natural transformation \(\alpha_{\lambda}\) between loose parts that restricts to one between the tight parts; this is equivalent to \(\alpha_{\lambda}\) having tight components. It is then true that the category \(\mathcal{F}\) is enriched over itself, with tight morphisms the morphisms of \(\mathcal{F}\), loose morphisms the functors between loose parts and \(2\)-cells the \(2\)-natural transformations between the latter. And for every \(\mathcal{F}\)-category \(\mathcal{S}\) and \(S\in\mathcal{S}\) we can build a copresheaf \(\mathcal{S}\left(S,-\right)\colon\mathcal{S}\to\mathcal{F}\), that sends \(S^{\prime}\) to the full embedding \(\mathcal{S}_{\tau}\left(S,S^{\prime}\right)\to\mathcal{S}_{\lambda}\left(S,S^ {\prime}\right)\).
Given \(\mathcal{F}\)-categories \(\mathcal{S}\) and \(\mathcal{T}\), there is an \(\mathcal{F}\)-category \(\left[\mathcal{S},\mathcal{T}\right]^{\mathcal{F}}\) of \(\mathcal{F}\)-functors from \(\mathcal{S}\) to \(\mathcal{T}\), where the tight morphisms are the \(\mathcal{F}\)-natural transformations, the loose morphisms are the \(2\)-natural transformations between the loose parts and the \(2\)-cells are the modifications between the loose morphisms. But we will need also an (op)lax version \(\left[\mathcal{S},\mathcal{T}\right]^{\mathcal{F}}_{\operatorname{(op)lax}}\) of it, which is the \(\mathcal{F}\)-category defined as follows:
_an object_ is an \(\mathcal{F}\)-functor \(G\colon\mathcal{S}\to\mathcal{T}\);
_a loose morphism_\(G\underset{\text{loose}}{\Longrightarrow}H\) is an (op)lax natural transformation \(\alpha_{\lambda}\) between the loose parts such that the structure \(2\)-cells on tight morphisms are identities, that precisely means that \(\alpha_{\lambda}\ast J_{\mathcal{S}}\) is (strictly) \(2\)-natural; we call them _loose strict/(op)lax_; _a tight morphism_\(G\Rightarrow H\) is a loose one that restricts to a \(2\)-natural transformation between the tight parts, which is equivalent to a loose morphism with tight components; they are usually called _strict/(op)lax_;
_a \(2\)-cell_ is a modification between the loose morphisms.
We can then apply the definitions above to the case \(\mathcal{T}=\mathcal{F}\), obtaining two \(\mathcal{F}\)-categories of copresheaves on \(\mathcal{S}\). The strict one, \(\left[\mathcal{S},\mathcal{F}\right]^{\mathcal{F}}\), can be characterized as follows:
_an object_ is an \(\mathcal{F}\)-functor \(G\colon\mathcal{S}\to\mathcal{F}\), that we can identify with a pair of \(2\)-functors \(G_{\tau}\colon\mathcal{S}_{\tau}\to\operatorname{\mathcal{CAT}}\) and \(G_{\lambda}\colon\mathcal{S}_{\lambda}\to\operatorname{\mathcal{CAT}}\) together with a \(2\)-natural transformation
whose components are all full embeddings;
a loose morphism_ \(G\underset{\text{\rm loose}}{\Longrightarrow}H\) is a \(2\)-natural transformation \(\alpha_{\lambda}\colon G_{\lambda}\Rightarrow H_{\lambda}\colon\mathcal{S}_{ \lambda}\to\mathcal{C}\mathcal{A}\mathcal{T}\); _a tight morphism_ \(G\Rightarrow H\) is a loose one with tight components, that precisely means that it induces a \(2\)-natural transformation \(\alpha_{\tau}\colon G_{\tau}\Rightarrow H_{\tau}\) such that _a \(2\)-cell_ is a modification between the loose morphisms.
Whereas the oplax version \(\left[\mathcal{S},\mathcal{F}\right]^{\mathcal{F}}_{\text{\rm oplax}}\) can be characterized as follows: _an object_ is an object of \(\left[\mathcal{S},\mathcal{F}\right]^{\mathcal{F}}\), that we keep on viewing as a triangle above (in the description of \(\left[\mathcal{S},\mathcal{F}\right]^{\mathcal{F}}\)); _a loose morphism_\(G\underset{\text{\rm loose}}{\Longrightarrow}H\) is an oplax natural transformation \(\alpha_{\lambda}\colon G_{\lambda}\to H_{\lambda}\) that is (strictly) \(2\)-natural on tight morphisms, meaning that \(\alpha_{\lambda}\ast J_{\mathcal{S}}\) is \(2\)-natural; we call them _oplax normal_; _a tight morphism_\(G\Rightarrow H\) is a loose one with tight components, that precisely means that it induces a \(2\)-natural transformation \(\alpha_{\tau}\colon G_{\tau}\Rightarrow H_{\tau}\) such that _a \(2\)-cell_ is modification between the loose morphisms.
**Remark 3.4**.: The \(2\)-\(\mathcal{S}\mathcal{E}\mathcal{F}\)-enriched Grothendieck construction of a \(2\)-functor \(W\colon\mathcal{A}^{\text{\rm op}}\to\mathcal{C}\mathcal{A}\mathcal{T}\) has a canonical structure of \(\mathcal{F}\)-functor \(\mathcal{G}(W)\colon\int\!W\to\mathcal{A}\). Indeed any \(2\)-category \(\mathcal{A}\) can be seen as an \(\mathcal{F}\)-category by taking every morphism to be tight. And we can give \(\int\!W\) a natural \(\mathcal{F}\)-category structure taking the loose part to be itself (as a \(2\)-category) and as tight morphisms the the morphisms of the kind \((f,\text{id})\) (i.e. the morphisms of the cleavage). Then \(\left[\left(\int\!W\right)^{\text{\rm op}},\mathcal{C}\mathcal{A}\mathcal{T} \right]_{\text{\rm oplax}^{\text{\rm n}}}\) coincides with the loose part of \(\left[\left(\int\!W\right)^{\text{\rm op}},\mathcal{F}\right]^{\mathcal{F}}_{ \text{\rm oplax}}\), justifying even more the use of the oplax normal natural transformations to work with the \(2\)-\(\mathcal{S}\mathcal{E}\mathcal{F}\)-enriched (but also the usual) Grothendieck construction.
Since \(\mathcal{E}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F} \mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal{F}\mathcal
_That is, given a marking \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}\mathcal{T}\) with \(\mathcal{A}\) small, an \(\mathcal{F}\)-diagram \(H\colon\,\int\!W\to\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
In this section, we prove (Theorem 4.11) that having a _right semi-lax right \(\mathcal{F}\)-adjoint_ is enough to guarantee the preservation of all _tight strict/oplax \(F\)-colimits_ (see Definition 4.6). Furthermore, having only a _right semi-lax loose right \(\mathcal{F}\)-adjoint_ is enough to guarantee the preservation of a big class of \(2\)-colimits.
We then prove that \(\operatorname{dom}\) has such a (tight) \(\mathcal{F}\)-categorical right adjoint (Theorem 4.13), and thus preserves all _tight strict/oplax \(F\)-colimits_, as well as other (more loose) \(2\)-colimits.
In Section 5, we will prove that also the \(2\)-functor of change of base along a split Grothendieck opfibration between lax slices has a suitable \(\mathcal{F}\)-categorical right adjoint (so that also such \(2\)-functor preserves a big class of \(2\)-colimits).
We begin recalling the concept of lax adjunction and the universal mapping property that characterizes it, for which we take as references Gray's [6] and Bunge's [3].
**Definition 4.1**.: A lax adjunction is, for us, what Gray calls a strict weak quasi-adjunction in [6]. That is, a _lax adjunction_ from a \(2\)-functor \(F\colon\mathcal{A}\to\mathcal{B}\) to a \(2\)-functor \(U\colon\mathcal{B}\to\mathcal{A}\) is given by a lax natural unit \(\eta\colon\operatorname{Id}\Rightarrow U\circ F\), a lax natural counit \(\varepsilon\colon F\circ U\Rightarrow\operatorname{Id}\) and modifications
that express lax triangular laws, such that both the swallowtail modifications
are identities.
A _right semi-lax adjunction_ is a lax adjunction in which the counit \(\varepsilon\) is strictly \(2\)-natural and the modification \(s\) is the identity.
We call a lax adjunction _strict_ when \(s\) and \(t\) are both identities, making the triangular laws to hold strictly.
**Recall 4.2**.: Using the lax comma objects, firstly introduced in Gray's [6] and refined in our [11], we can reduce the study of lax adjunctions to ordinary adjunctions between homsets. Indeed, according to Gray, a lax adjunction is equivalently given by homomorphic \(2\)-adjoint functors
over \(\mathcal{A}\times\mathcal{B}\) with unit \(\chi\colon\operatorname{id}\Rightarrow T\circ S\) and counit \(\xi\colon S\circ T\Rightarrow\operatorname{id}\) (that are automatically \(2\)-natural if assumed natural) over \(\mathcal{A}\times\mathcal{B}\). Here \(S\) and \(T\) homomorphic means that they are given uniquely by lax natural \(\eta\) and \(\varepsilon\) (it can be defined precisely as in 1.5.10 of Gray's [6], asking for example \(T\) to transform precomposition with cells in \(\mathcal{A}\) into precomposition with the image through \(F\) of those cells; see also below how we produce \(T\) from \(\varepsilon\)).
Strictness corresponds to
\[\chi*i_{F}=\operatorname{id}\quad\text{ and }\quad\xi*i_{U}=\operatorname{id},\]
where \(i_{F}\colon\mathcal{A}\to F\operatorname{\not|_{\text{\rm{ax}}}}\mathcal{B}\) is the \(2\)-functor induced by the identity on \(F\) and analogously for \(i_{U}\).
Such a \(2\)-adjunction \(S\dashv T\) means, in particular, that we have ordinary adjunctions between homsets
for every \(A\in\mathcal{A}\) and \(B\in\mathcal{B}\). And we can rephrase such ordinary adjunctions in terms of having universal units. The global adjunction \(S\dashv T\) corresponds, then, to such units satisfying a broader universal property, that captures the possibility of \(h\colon F(A)\to B\) to vary in the whole lax comma object \(F\operatorname{\not|_{\text{\rm{ax}}}}\mathcal{B}\) rather than in just \(\mathcal{B}\left(F(A),B\right)\). This is the idea behind Proposition 4.3, that shows the universal mapping property that characterizes lax adjunctions, except that in general the lax right-adjoint produced is only oplax functorial. In our examples, however, such characterization will produce a strict (right semi-) lax adjunction between \(2\)-functors.
Before that, it is helpful to see explicitly how a lax adjunction \((F,U,\eta,\varepsilon,s,t)\) produces the adjunctions \((S,T,\chi,\xi)\) between the homsets on \(A\in\mathcal{A}\) and \(B\in\mathcal{B}\), as we will use this later. \(S\) and \(T\) are defined as usual as
\[S=(-\circ\eta_{A})\circ U\]
\[T=(\varepsilon_{B}\circ-)\circ F\]
And the lax naturality of \(\eta\) and \(\varepsilon\) gives \(\chi\) and \(\xi\); precisely, given \(h\colon F(A)\to B\) in \(\mathcal{B}\) and \(k\colon A\to U(B)\) in \(\mathcal{A}\)
In particular, we see that for a right semi-lax adjunction we obtain \(\chi=\operatorname{id}\) and then \(T\circ S=\operatorname{Id}\), whence the name comes from.
**Proposition 4.3** (dual of Gray's Proposition I,7.8.2 in [6] and of Theorem 4.1 in Bunge's [3]).: _Let \(F\colon\mathcal{A}\to\mathcal{B}\) be a \(2\)-functor. Suppose that for every \(B\in\mathcal{B}\) there is an object \(U(B)\in\mathcal{A}\)_
_and a morphism \(\varepsilon_{B}\colon F(U(B))\to B\) in \(\mathcal{B}\) that is universal in the following sense: for every \(h\colon F(A)\to B\) in \(\mathcal{B}\) there is an \(\overline{h}\colon A\to U(B)\) in \(\mathcal{A}\) and a \(2\)-cell_
_in \(\mathcal{B}\) such that, given any other \(g\colon A\to U(B)\) and \(\sigma\colon h\Rightarrow\varepsilon_{B}\circ F(g)\), there is a unique \(\delta\colon\overline{h}\Rightarrow g\) such that_
_Assume then that, for every \(h\colon F(A)\to B\) in \(\mathcal{B}\), we have \(\overline{h}=\overline{h\circ\varepsilon_{F(A)}}\circ\overline{\mathrm{id}_{F (A)}}\) and_
(4)
_and also that, for every \(B\in\mathcal{B}\), we have \(\overline{\varepsilon_{B}}=\mathrm{id}\) and \(\lambda_{\varepsilon_{B}}=\mathrm{id}\)._
_Then \(U\) extends to an oplax functor, \(\varepsilon\) extends to a lax natural transformation and there exist a lax natural transformation \(\eta\) and modifications \(s,t\) such that \(U\) is a lax right-adjoint to \(F\), except that in general \(U\) is only an oplax functor \((\)and the swallowtail identities need to be slightly modified accordingly\()\)._
_In particular, if \(\lambda_{h}=\mathrm{id}\) for every \(h\colon F(A)\to B\), we obtain a right semi-lax adjunction._
_Proof_ (_constructions_).: Given \(g\colon B\to B^{\prime}\) in \(\mathcal{B}\), define \(U(g)\coloneqq\overline{g\circ\varepsilon_{B}}\) and \(\varepsilon_{g}\coloneqq\lambda_{g\circ\varepsilon_{B}}\). Given a \(2\)-cell \(\mu\colon g\Rightarrow g^{\prime}\) in \(\mathcal{B}\), define \(U(\mu)\) as the unique \(2\)-cell induced by \(\varepsilon_{g^{\prime}}\circ\mu\varepsilon_{B}\). Given composable morphisms \(g\) and \(g^{\prime}\) in \(\mathcal{B}\), pasting \(\varepsilon_{g}\) and \(\varepsilon_{g^{\prime}}\) induces a unique coassociator for \(U\), while the identity \(2\)-cell induces a unique countor.
We then define \(\eta_{A}\coloneqq\overline{\mathrm{id}_{F(A)}}\) and \(s_{A}\coloneqq\lambda_{\mathrm{id}_{F(A)}}\).
And for every \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) we take \(\eta_{f}\) to be the unique \(2\)-cell that is induced from
considering \(s_{A^{\prime}}*F(f)\), thanks to the assumption in equation (4). Finally, for every \(B\in\mathcal{B}\), we induce \(t_{B}\) from the identity \(2\)-cell on \(\varepsilon_{B}\), using again the assumption in equation (4), with \(h=\varepsilon_{B}\).
**Remark 4.4**.: In our two examples, i.e. with \(F\) equal to \(\operatorname{dom}\colon\operatorname{\mathcal{F}}/_{\!\!\mathrm{lax}}M\to \operatorname{\mathcal{F}}\) (Theorem 4.13) and to the \(2\)-functor of pullback along a split Grothendieck opfibration between lax slices of \(\operatorname{\mathcal{C}\!\mathcal{A}\!\mathcal{T}}\) (Theorem 5.3), Proposition 4.3 will produce a strict right semi-lax adjunction between \(2\)-functors. But this would not be enough to guarantee preservation of colimits. It is enough if the right semi-lax adjunction is \(\operatorname{\mathcal{F}}\)-categorical, as we prove in Theorem 4.11. But we have to restrict the attention to the (_tight_) _strict/oplax \(\operatorname{\mathcal{F}}/_{\!\!\mathrm{l}ax}\)_-colimits_, defined in Definition 4.6 (that we believe are the suitable colimits to consider in this context). It might be helpful to look at Recall 3.3 (recall about \(\operatorname{\mathcal{F}}\)-category theory).
The concept of lax \(\operatorname{\mathcal{F}}/\!\!\mathrm{adjunction}\) appears in Walker's [15], but in a pseudo/lax version and with the stronger request that \(s\) and \(t\) are isomorphisms. Moreover, only what for us is the tight version is considered there, asking the unit \(\eta\) and the counit \(\varepsilon\) to be (tight) pseudo/oplax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-natural}\) rather than loose ones. The latter request means that \(\eta\) and \(\varepsilon\) are tight morphisms in some \([\mathcal{S},\mathcal{S}]^{\mathcal{F}}_{\!\!\mathrm{olax}}\) of Recall 3.3 rather than loose ones. Such request is not necessary to guarantee the preservation of the "loose part" of tight strict/oplax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-colimits}\). Moreover, it is not satisfied by our example of change of base along a split Grothendieck opfibration between lax slices.
**Definition 4.5**.: A _loose lax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-adjunction}\)_ is a lax adjunction \((F,U,\eta,\varepsilon,s,t)\) between the loose parts in which \(F\) and \(U\) are \(\operatorname{\mathcal{F}}/\!\!\mathrm{-functors}\) and \(\eta\) and \(\varepsilon\) are loose strict/lax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-natural}\) transformations (i.e. loose morphisms in \([\mathcal{S},\mathcal{S}]^{\mathcal{F}}_{\!\!\mathrm{lax}}\) of Recall 3.3 for suitable \(\mathcal{S}\)).
A (_tight_) _lax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-adjunction}\)_ is a loose one such that \(\eta\) and \(\varepsilon\) are (tight) strict/lax \(\mathcal{F}/\!\!\mathrm{-natural}\) transformations (that is, have tight components).
A _right semi-lax loose \(\operatorname{\mathcal{F}}/\!\!\mathrm{-adjunction}\)_ is a loose lax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-adjunction}\) such that \(\varepsilon\) is strictly \(2\)-natural (i.e. a loose morphism in \([\mathcal{S},\mathcal{S}]^{\mathcal{F}}\) of Recall 3.3) and \(s\) is the identity.
We call a loose lax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-adjunction}\)_strict_ if both \(s\) and \(t\) are identities.
**Definition 4.6**.: Let \(\mathcal{A}\) be a small \(\operatorname{\mathcal{F}}/\!\!\mathrm{-category}\) and consider \(\operatorname{\mathcal{F}}/\!\!\mathrm{-functors}\)\(W\colon\operatorname{\mathcal{A}}^{\mathrm{op}}\to\operatorname{\mathcal{F}}\) (the weight) and \(H\colon\operatorname{\mathcal{A}}\to\mathcal{S}\) (the \(\operatorname{\mathcal{F}}/\!\!\mathrm{-diagram}\)). The _strict/oplax \(\operatorname{\mathcal{F}}/\!\!\mathrm{-c}\)olimit of \(H\) weighted by \(W\)_, denoted as \(\operatorname{oplax}^{\operatorname{\mathcal{F}}}/\!\!\mathrm{-c}\mathrm{olim }^{W}H\), is (if it exists) an object \(C\in\mathcal{S}\) together with an isomorphism in \(\operatorname{\mathcal{F}}/\!\!\mathrm{-natural}\)
\[\mathcal{S}\left(C,Q\right)\cong[\operatorname{\mathcal{A}}^{\mathrm{op}}, \operatorname{\mathcal{F}}]^{\mathcal{F}}_{\!\!\mathrm{olax}}\left(W,\mathcal{S }\left(H(-),Q\right)\right)\]
\(\operatorname{\mathcal{F}}/\!\!\mathrm{-natural}\) in \(Q\in\mathcal{S}\), where \([\operatorname{\mathcal{A}}^{\mathrm{op}},\operatorname{\mathcal{F}}]^{ \mathcal{F}}_{\!\!\mathrm{olax}}\) is the \(\operatorname{\mathcal{F}}/\!\!\mathrm{-category}\) defined in Recall 3.3.
**Remark 4.7**.: The natural isomorphism of Definition 4.6 is equivalently a \(2\)-natural isomorphism between the loose parts, that is,
\[\mathcal{S}_{\lambda}\left(C,Q\right)\cong\left[\mathcal{A}_{\lambda}^{\mathrm{ op}},\mathcal{CAT}\right]_{\mathrm{oplax}^{\mathrm{n}}}\left(W_{\lambda},\mathcal{S}_{ \lambda}\left(H_{\lambda}(-),Q\right)\right) \tag{5}\]
which restricts to a \(2\)-natural isomorphism between the tight parts. Such tight parts are respectively \(\mathcal{S}_{\tau}\left(C,Q\right)\) and those oplax normal natural transformations \(\alpha_{\lambda}\) that restrict to \(2\)-natural ones \(\alpha_{\tau}\colon W_{\tau}\Rightarrow\mathcal{S}_{\tau}\left(H_{\tau}(-),Q\right)\), i.e. those forming a commutative square
for every \(A\in\mathcal{A}\) (where \(J_{\mathcal{S}}\colon\mathcal{S}_{\tau}\left(-,Q\right)\Rightarrow\mathcal{S}_ {\lambda}\left(-,Q\right)\circ J_{\mathcal{S}}\))
Remember that identities are always tight and tight morphisms are closed under composition. So the request that the \(2\)-natural isomorphism of equation (5) restricts to one between the tight parts equivalently means that the universal oplax normal \(2\)-cocylinder \(\mu^{\lambda}\) (corresponding to \(\mathrm{id}_{C}\)) satisfies the following two conditions. For every \(A\in\mathcal{A}\) and \(X\in W_{\tau}(A)\), the morphism
\[\mu^{\lambda}_{A}(X)\colon H(A)\to C\]
is tight, and, for every \(q\colon C\to Q\) in \(\mathcal{S}\), if \(q\circ\mu^{\lambda}_{A}(X)\) is tight for every \(A\in\mathcal{A}\) and \(X\in W_{\tau}(A)\) then \(q\) needs to be tight. We say that the \((\)_cocylinder\()\)\(\tau\)-components_\(\mu^{\lambda}_{A}(X)\)'s are tight and _jointly detect tightness_.
**Proposition 4.8**.: _Let \(\mathcal{A}\) be a small \(\mathcal{F}\)-category and consider \(\mathcal{F}\)-functors \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{F}\)\((\)the weight\()\) and \(H\colon\mathcal{A}\to\mathcal{S}\)\((\)the \(\mathcal{F}\)-diagram\()\). The strict/oplax \(\mathcal{F}\)-colimit of \(H\) weighted by \(W\) is, equivalently, an object \(C\in\mathcal{S}\) together with an oplax normal \(2\)-cocylinder_
\[\mu^{\lambda}\colon W_{\lambda}\xrightarrow[\mathrm{oplax}^{\mathrm{n}}]{ \mathcal{S}_{\lambda}\left(H_{\lambda}(-),C\right)}\]
_that is universal in the \(2\)-categorical sense, giving a \(2\)-natural isomorphism_
\[\mathcal{S}_{\lambda}\left(C,Q\right)\cong\left[\mathcal{A}_{\lambda}^{ \mathrm{op}},\mathcal{CAT}\right]_{\mathrm{oplax}^{\mathrm{n}}}\left(W_{ \lambda},\mathcal{S}_{\lambda}\left(H_{\lambda}(-),Q\right)\right),\]
_and has \(\tau\)-components that are tight and jointly detect tightness._
Proof.: The proof is clear after Remark 4.7. Since the loose limit is \(2\)-categorical, it can indeed be characterized as having a universal oplax normal \(2\)-cocylinder.
**Definition 4.9**.: We call a strict/oplax \(\mathcal{F}\)-colimit _tight_ if it is exhibited by an oplax normal \(2\)-cocylinder \(\mu^{\lambda}\) as in Proposition 4.8 such that all _cocylinder \(\lambda\)-components_\(\mu^{\lambda}_{A}(X)\), for \(A\in\mathcal{A}\) and \(X\in W_{\lambda}(A)\), are tight. Notice that this condition is automatic in the case of oplax normal \(2\)-cocones, that is the one we are mostly interested in (as every weighted \(2\)-cocylinder can be reduced to one of this form).
**Remark 4.10**.: We are now ready to prove that having a right semi-lax right \(\mathcal{F}\)-adjoint guarantees the preservation of all tight strict/oplax \(\mathcal{F}\)-colimits. We will actually see that the property of the universal oplax normal \(2\)-cocylinder to have \(\tau\)-components that jointly detect tightness is preserved when we have a right semi-lax (tight) left \(\mathcal{F}\)-adjoint, but not necessary to prove the preservation of the rest of the structure, for which a loose adjunction is enough.
The following theorem does not seem to appear in the literature.
**Theorem 4.11**.: _Right semi-lax loose left \(\mathcal{F}\)-adjoints preserve all the universal oplax normal \(2\)-cocylinders for an \(\mathcal{F}\)-diagram which have tight \(\lambda\)-components \((\)i.e. in some sense the "loose part" of all the tight strict/oplax \(\mathcal{F}\)-colimits, even if the \(\tau\)-components do not jointly detect tightness\()\)._
_Right semi-lax \((\)tight\()\) left \(\mathcal{F}\)-adjoints preserve all tight strict/oplax \(\mathcal{F}\)-colimits._
Proof.: Let \((F,U,\eta,\varepsilon,s,t)\) be a right semi-lax loose \(\mathcal{F}\)-adjunction between \(\mathcal{F}\)-categories \(\mathcal{S}\) and \(\mathds{Z}\). That is, a lax adjunction between the loose parts where \(F\) and \(U\) are \(\mathcal{F}\)-functors, \(\eta\) is a loose strict/lax \(\mathcal{F}\)-natural transformation, \(\varepsilon\) is strictly \(2\)-natural and \(s\) is the identity. Let then \(\mathcal{A}\) be a small \(\mathcal{F}\)-category and consider \(\mathcal{F}\)-functors \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{F}\) and \(H\colon\mathcal{A}\to\mathcal{S}\) such that the strict/oplax \(\mathcal{F}\)-colimit of \(H\) weighted by \(W\) exists in \(\mathcal{S}\) and is tight. Call \(C\) such colimit; we want to show that \(F\) preserves it. By Proposition 4.8, it suffices to consider the universal oplax normal \(2\)-cocylinder
\[\mu^{\lambda}\colon W_{\lambda}\xRightarrow{\overbrace{\mathrm{oplax}^{n}}} \mathcal{S}_{\lambda}\left(H_{\lambda}(-),C\right)\]
with tight \(\lambda\)-components that exhibits \(C=\mathrm{oplax}^{\mathcal{F}}\operatorname{-colim}^{W}H\) and the oplax normal \(2\)-cocylinder
\[W_{\lambda}\xRightarrow{\mu^{\lambda}\overbrace{\mathrm{oplax}^{n}}}^{ \mu^{\lambda}}\mathcal{S}_{\lambda}\left(H_{\lambda}(-),C\right)\xRightarrow{ F}\mathds{Z}_{\lambda}\left(\left(F_{\lambda}\circ H_{\lambda}\right)(-),F(C)\right)\]
obtained applying \(F\) to the former.
We prove that \(F\circ\mu^{\lambda}\) is universal in the \(2\)-categorical sense and such that the \(F(\mu^{\lambda}_{A}(X))\)'s, for \(A\in\mathcal{A}\) and \(X\in W_{\lambda}(A)\), are all tight, without using that the \(\tau\)-components jointly detect tightness. Moreover, we show that if \(\eta\) and \(\varepsilon\) have tight components, giving a right semi-lax (tight) \(\mathcal{F}\)-adjunction, then having \(\tau\)-components that jointly detect tightness is preserved as well.
Since a right semi-lax loose \(\mathcal{F}\)-adjunction is in particular a right semi-lax adjunction between the loose parts, we know by Recall 4.2 that \((F,U,\eta,\varepsilon,\mathrm{id},t)\) induces an adjunction between homsets
for every \(Y\in\mathcal{S}\) and \(Z\in\mathds{Z}\) with unit the identity, showing \(T\circ S=\mathrm{Id}\), and counit \(\xi\colon S\circ T\xRightarrow{\mathrm{id}}\). The strategy will be to make use of the equality \(T\circ S=\mathrm{Id}\) to move back and forth between \(\mathds{Z}\) and \(\mathcal{S}\), recovering, after \(T\circ S\), the original starting data of \(\mathds{Z}\) but with new information gathered in \(\mathcal{S}\).
The \(\lambda\)-components \(F(\mu^{\lambda}_{A}(X))\)'s are surely tight, since \(F\) is an \(\mathcal{F}\)-functor. Take then \(q\colon F(C)\to Z\) in \(\mathds{Z}\) such that \(q\circ F(\mu^{\lambda}_{A}(X))\) is tight for every \(A\in\mathcal{A}\) and every \(X\in W_{\tau}(A)\). We show that if \(\eta\) and \(\varepsilon\) have tight components, then \(q\) needs to be tight as well. Notice that \(S_{-,Z}\) is oplax normal natural in \(Y\in\mathcal{S}^{\mathrm{op}}_{\lambda}\), with structure \(2\)-cell on \(y\colon Y\longleftarrow Y^{\prime}\) in \(\mathcal{S}_{\lambda}\) given by \((-\ast\eta_{y})\ast U\), since \(\eta\) is loose strict/lax \(\mathcal{F}\)-natural. Moreover, if \(\eta\) has tight components then \(S_{-,Z}\) is tight as well, since \(U\) is an \(\mathcal{F}\)-functor. Since \(\mu^{\lambda}_{A}(X)\colon H(A)\to C\)
is tight, we obtain \(S_{\mu_{A}^{\lambda}(X),Z}=\operatorname{id}\) and hence
So if \(\eta\) is tight then the left hand side of the equality here above is tight, and since the \(\mu_{A}^{\lambda}(X)\)'s jointly detect tightness we obtain that \(S_{C,Z}(q)\) is tight. If we also assume that \(\varepsilon\) is tight, then \(T_{C,Z}\) is tight, whence \(q=T(S(q))\) is tight.
We now prove that \(F\circ\mu^{\lambda}\) is universal, assuming only a right semi-lax loose \(\mathcal{F}\)-adjunction and never using that the \(\tau\)-components \(\mu_{A}^{\lambda}(X)\) jointly detect tightness. Everything below will be loose, so we abuse the notation dropping the loose subscripts. The following figure condenses the strategy.
(6)
Given an oplax normal \(2\)-cocylinder
\[\sigma\colon W\underset{\text{\rm oplax}^{\mu}}{\xrightarrow{\text{\rm oplax} ^{\mu}}}\,\mathcal{I}\left((F\circ H)(-),Z\right),\]
we want to prove that there is a unique \(\delta\colon F(C)\to Z\) in \(\mathcal{I}\) such that
\[(\delta\circ-)\circ F\circ\mu=\sigma.\]
Postcomposing \(\sigma\) with \(S_{H(-),Z}\), we obtain an oplax normal \(2\)-cocylinder for \(H\). Indeed \(S_{H(-),Z}\) is oplax normal natural in \(A\in\mathcal{A}^{\text{\rm op}}\) with structure \(2\)-cell on \(a\colon A\gets A^{\prime}\) in \(\mathcal{A}\) given by \((-*\eta_{H(a)})*U\), since \(\eta\) is loose strict/lax \(\mathcal{F}\)-natural and \(H\) is an \(\mathcal{F}\)-functor. So, by universality of \(\mu\), \(S\circ\sigma\) induces a unique \(\gamma\colon C\to U(Z)\) in \(\mathcal{S}\) such that
\[(\gamma\circ-)\circ\mu=S\circ\sigma.\]
Notice then that the right square of the figure in equation (6) is commutative, since it is equivalent to
\[(\varepsilon_{Z}\circ F(\gamma))\circ F(-)=\varepsilon_{Z}\circ F(\gamma \circ-),\]
that holds since \(F\) is a \(2\)-functor. So \(\delta\coloneqq T(\gamma)\) in \(\mathcal{I}\) is such that
\[(\delta\circ-)\circ F\circ\mu=T\circ(\gamma\circ-)\circ\mu=T\circ S\circ \sigma=\sigma.\]
We now show the uniqueness of \(\delta\). So consider another \(\delta^{\prime}\colon F(C)\to Z\) in \(\mathcal{I}\) such that \((\delta^{\prime}\circ-)\circ F\circ\mu=\sigma\). Postcomposing with \(S\) we obtain
\[S\circ(\delta^{\prime}\circ-)\circ F\circ\mu=S\circ\sigma.\]
But we notice that
\[S\circ(\delta^{\prime}\circ-)\circ F\circ\mu=(S(\delta^{\prime})\circ-)\circ\mu\]
as oplax normal natural transformations. Indeed, given \(A\in\mathcal{A}\) and \(\alpha\colon X\to X^{\prime}\) in \(W(A)\),
\[(S\circ(\delta^{\prime}\circ-)\circ F\circ\mu)_{A}(X)=U\left(\delta^{\prime} \circ F\left(\mu_{A}(X)\right)\right)\circ\eta_{H(A)}=U(\delta^{\prime})\circ U \left(F\left(\mu_{A}(X)\right)\right)\circ\eta_{H(A)}.\]
Since \(\mu_{A}(X)\) is tight and \(\eta\) is loose strict/lax \(\mathcal{F}\)-natural, \(\eta_{\mu_{A}(X)}=\mathrm{id}\) and hence
\[U(\delta^{\prime})\circ U\left(F\left(\mu_{A}(X)\right)\right)\circ\eta_{H(A)}=U (\delta^{\prime})\circ\eta_{C}\circ\mu_{A}(X)=S(\delta^{\prime})\circ\mu_{A}(X).\]
And it works similarly for the images on \(\alpha\), using the \(2\)-dimensional property of \(\eta\) being oplax natural. Given \(a\colon A\gets A^{\prime}\) in \(\mathcal{A}\) and \(X\in W(A)\),
\[\left(S\circ(\delta^{\prime}\circ-)\circ F\circ\mu\right)_{a,X}=U\left(\delta ^{\prime}\circ F\left(\mu_{A}(X)\right)\right)*\eta_{H(a)}\circ U\left(\delta ^{\prime}*F\left(\mu_{a,X}\right)\right)*\eta_{H(A^{\prime})}.\]
Considering \(\mu_{a,X}\colon\mu_{A^{\prime}}(W(a)(X))\Rightarrow\mu_{A}(X)\circ H(a)\) in \(\mathcal{S}\), since \(\eta\) is loose strict/lax \(\mathcal{F}\)-natural and both \(\mu_{A}(X)\) and \(\mu_{A^{\prime}}(W(a)(X))\) are tight, we obtain
\[U\left(F\left(\mu_{A}(X)\right)\right)*\eta_{H(a)}\circ U\left(F\left(\mu_{a,X }\right)\right)*\eta_{H(A^{\prime})}=\eta_{C}*\mu_{a,X},\]
whence we conclude that
\[S\circ(\delta^{\prime}\circ-)\circ F\circ\mu=(S(\delta^{\prime})\circ-)\circ\mu\]
Therefore, we have
\[(S(\delta^{\prime})\circ-)\circ\mu=S\circ\sigma=(\gamma\circ-)\circ\mu,\]
and by universality of \(\mu\) we conclude that \(S(\delta^{\prime})=\gamma\), whence
\[\delta^{\prime}=T(S(\delta^{\prime}))=T(\gamma)=\delta.\]
It only remains to prove the \(2\)-dimensional universal property of \(F\circ\mu\). Given a modification
we want to prove that there is a unique \(\Delta\colon\delta\Rightarrow\delta^{\prime}\colon F(C)\to Z\) in \(\mathds{E}\) such that
\[(\Delta*-)*(F\circ\mu)=\Sigma.\]
By universality of \(\mu\), whiskering \(\Sigma\) with \(S\) on the right induces a unique \(\Gamma\colon\gamma\Rightarrow\gamma^{\prime}\) in \(\mathcal{S}\) such that
\[(\Gamma*-)*\mu=S*\Sigma.\]
Notice then that
\[T*(\Gamma*-)=(T(\Gamma)*-)*F\]
because \(F\) is a \(2\)-functor, and thus \(\Delta\coloneqq T(\Gamma)\) in \(\mathds{E}\) is such that
\[(\Delta*-)*(F\circ\mu)=T*(\Gamma*-)*\mu=T*S*\Sigma=\Sigma.\]
To show the uniqueness of \(\Delta\), take another \(\Delta^{\prime}\) such that \((\Delta^{\prime}*-)*(F\circ\mu)=\Sigma\). Whiskering with \(S\) on the right, we obtain
\[S*(\Delta^{\prime}*-)*(F\circ\mu)=S*\Sigma.\]
But notice that
\[S*(\Delta^{\prime}*-)*(F\circ\mu)=(S(\Delta^{\prime})*-)*\mu\]
Indeed it suffices to check it on components, where it holds since \(\mu_{A}(X)\) is tight and hence \(\eta_{\mu_{A}(X)}=\mathrm{id}\) (analogously to what we have shown for the \(1\)-dimensional universal property). So
\[(S(\Delta^{\prime})*-)*\mu=S*\Sigma=(\Gamma*-)*\mu,\]
whence \(S(\Delta^{\prime})=\Gamma\) by universality of \(\mu\) and thus
\[\Delta^{\prime}=T(S(\Delta^{\prime}))=T(\Gamma)=\Delta.\]
Therefore \(F\circ\mu\) is universal in the \(2\)-categorical sense.
**Remark 4.12**.: We can now conclude the generalization of the \(1\)-dimensional Theorem 1.1 to dimension \(2\), by showing that \(\operatorname{dom}\colon\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\right.\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\) has a strict right semi-lax (tight) right \(\operatorname{\mathcal{F}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\)-adjoint (Theorem 4.13) and hence preserves all tight strict/oplax \(\operatorname{\mathcal{F}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\)-colimits. More importantly in the context of this paper, Theorem 4.11 then also guarantees that \(\operatorname{dom}\) preserves all the universal oplax normal \(2\)-cocones for an \(\operatorname{\mathcal{F}}\!\left/{}_{\!\!\text{\rm diagram}}\right.\) which have tight components, where normal is with respect to the Grothendieck construction. Remember that any weighted \(2\)-colimit can be reduced to an oplax normal conical one.
So after Theorem 4.13 we will have proved that, considering a marking \(W\colon\operatorname{\mathcal{A}}\!\left/{}_{\!\!\text{\rm cap}}\to\operatorname {\mathcal{C}}\!\left/{}_{\!\!\text{\rm\rm 1}}\right.\) small and an \(\operatorname{\mathcal{F}}\!\left/{}_{\!\!\text{\rm diagram}}\right.\)\(H\colon\int\!W\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\) (that is, a \(2\)-functor that sends every morphism of the kind \((f,\operatorname{id})\) to a triangle filled with an identity), if
\[\zeta\colon\Delta 1\xrightarrow{\text{\rm oplax}^{\text{\rm n}}}\operatorname{ \mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}(H(-),C)\right.\]
is a universal oplax normal \(2\)-cocone for \(H\) on \(q\in\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\) exhibiting \(q=\operatorname{oplax}^{\text{\rm n}}\operatorname{-colim}^{\Delta 1}H\) such that \(\zeta_{(A,X)}\) is a tight morphism for every \((A,X)\in\int\!W\) (which means that it is a triangle filled with an identity), then \(\operatorname{dom}\circ\zeta\) is universal as well, exhibiting
\[\operatorname{dom}(q)=\operatorname{oplax}^{\text{\rm n}}\operatorname{-colim }^{\Delta 1}(\operatorname{dom}\circ H).\]
**Theorem 4.13**.: _Let \(\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\]_
_As a consequence, by Theorem 4.11, \(\operatorname{dom}\) preserves all tight strict/oplax \(\operatorname{\mathcal{F}}\!\left/{}_{\!\!\text{\rm 1}}\right.\)-colimits, but also all the universal oplax normal \(2\)-cocylinders for an \(\operatorname{\mathcal{F}}\!\left/{}_{\!\!\text{\rm 1}}\right.\)-diagram which have tight \(\lambda\)-components \((\)see Remark 4.12 for what this means explicitly in practice\()\)._
Proof.: First of all, notice that \(\operatorname{dom}\) is surely an \(\operatorname{\mathcal{F}}\!\left/{}_{\!\!\text{\rm 1}}\right.\)-functor, as every morphism of \(\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm 1}}\right.\) is tight (see Remark 3.4). We use the universal mapping property Proposition 4.3 that characterizes a lax adjunction to build a right semi-lax right adjoint \(U\) to \(\operatorname{dom}\colon\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\to\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm 1}}\right.\). For every \(E\in\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm 1}}\right.\), we define \(U(E)\coloneqq(M\times E\xrightarrow{\text{\rm pr}_{1}}E)\) and \(\varepsilon_{E}\colon M\times E\xrightarrow{\text{\rm pr}_{2}}E\), that is tight in \(\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm 1}}\right.\), remembering that in dimension \(1\) the domain functor from \(\operatorname{\mathcal{C}}\!\left/{}_{\!\!\text{\rm 1}}\right.\) is left adjoint to \(M\times-\).
We show that such counit is universal in the lax sense. Given \(h\colon\operatorname{dom}(K\xrightarrow{t}M)\to E\) in \(\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm, take }\overline{h}\coloneqq((t,h),\operatorname{id})\colon(K \xrightarrow{t}M)\to(M\times E\xrightarrow{\text{\rm pr}_{1}}M)\), which is tight in \(\operatorname{\mathcal{E}}\!\left/{}_{\!\!\text{\rm lax }M}\right.\) (see Remark 3.4), and \(\lambda_{h}\coloneqq\operatorname{id}\).
This guarantees that we will find a right semi-lax adjunction in the end (see Proposition 4.3). Given then another morphism
in \(\mbox{${\mathcal{E}}\,/\!\!\!\!\!\!\!/_{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{}}}}}}}{{\rm{{\rm{\rm{{\rm{{\rm{{\rm{\rm{}}}}} {{\rm{\rm{{\rm{\rm{{\rm{}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
in \(\mathcal{F}\!\!\!/_{\!\!\mathrm{ax}\ M}\), we find that
whence it is clear that \(\eta\) is (tight) strict/lax \(\mathcal{F}\!\!\!/\)-natural (since \(\eta_{(\widehat{\gamma},\mathrm{id})}=\mathrm{id}\)). Finally, \(t=\mathrm{id}\), giving a strict right semi-lax adjunction.
We have also already checked that \(\mathrm{dom}\colon\mathcal{F}\!\!\!/_{\!\!\mathrm{ax}\ M}\to\mathcal{F}\!\!\!/\) and \(M\times-\) are \(\mathcal{F}\!\!\!/\)-functors, \(\eta\) is (tight) strict/lax \(\mathcal{F}\!\!\!/\)-natural and \(\varepsilon\) has tight components, giving a strict right semi-lax (tight) \(\mathcal{F}\!\!\!/\)-adjunction.
**Remark 4.14**.: We can actually obtain a sharper result of preservation of \(2\)-colimits for \(\mathrm{dom}\colon\mathcal{F}\!\!\!/_{\!\!\mathrm{ax}\ M}\to\mathcal{F}\!\!\!/\), as we show in Proposition 4.15. Namely, we can omit the assumption that the universal oplax normal \(2\)-cocones for an \(\mathcal{F}\!\!\!/\)-diagram have tight \(\lambda\)-components. Indeed, in the proof of Theorem 4.11, the preservation of the universal oplax normal \(2\)-cocone uses the assumption that the \(\mu_{A}^{\lambda}(X)\)'s are tight only to guarantee the uniqueness part of the \(1\)- and \(2\)-universal property. But we can prove both uniqueness results in another way, taking advantage of the simple description of the strict right semi-lax right \(\mathcal{F}\!\!\!/\)-adjoint \(U=M\times-\) of \(\mathrm{dom}\).
**Proposition 4.15**.: _Let \(\mathcal{F}\!\!\!/\) be a \(2\)-category and let \(M\in\mathcal{F}\!\!\!/\). Then \(\mathrm{dom}\colon\mathcal{F}\!\!\!/_{\!\!\mathrm{ax}\ M}\to\mathcal{F}\!\!\!/\) preserves all the universal oplax normal \(2\)-cocones for an \(\mathcal{F}\!\!\!/\)-diagram \((\)without assuming them to have tight \(\lambda\)-components"\()\)._
Proof.: We only need to prove the uniqueness part of the \(1\)- and \(2\)-dimensional universal property, by Remark 4.14. Following the proof of Theorem 4.11 with \(F=\mathrm{dom}\) and considering \(\delta^{\prime}\colon\mathrm{dom}(K\xrightarrow{t}M)\to Z\) in \(\mathcal{F}\!\!\!/\) such that \((\delta^{\prime}\circ-)\circ\mathrm{dom}\circ\mu=\sigma\), rather than considering \(S(\delta^{\prime})=((t,\delta^{\prime}),\mathrm{id})\), we define
Then \(\gamma^{\prime}\) satisfies \((\gamma^{\prime}\circ-)\circ\mu=S\circ\sigma\), by the universal property of the product, since \(\gamma\) satisfies the analogous equation and \((\delta^{\prime}\circ-)\circ\mathrm{dom}\circ\mu=\sigma\). By the uniqueness of \(\gamma\), we obtain that \(\gamma^{\prime}=\gamma\) and hence \(\delta^{\prime}=\mathrm{pr}_{2}\circ\mathrm{dom}(\gamma)=T(\gamma)=\delta\).
Analogously, we can prove also the uniqueness of the \(2\)-dimensional universal property, producing from \(\Delta^{\prime}\) the \(2\)-cell \((\mathrm{pr}_{1}*\mathrm{dom}(\Gamma),\Delta^{\prime})\) between the two suitable triangles \(\gamma^{\prime}\) here above. We indeed obtain \(\Delta^{\prime}=\mathrm{pr}_{2}*\mathrm{dom}(\Gamma)=T(\Gamma)=\Delta\).
## 5. Change of base between lax slices
In dimension \(1\), the concept of change of base between slices is definitely helpful. And it is well known that the pullback perfectly realizes such a job. For \(\mathcal{C}\!\!\!/\), given a functor \(\tau\colon\mathcal{F}\!\!\!/\to\mathcal{B}\), it is still a good idea to consider the pullback \(2\)-functor \(\tau^{*}\colon\mathcal{C}\!\!\!/\mathcal{B}\to\mathcal{C}\!\!\!/\mathcal{A} \!\!\!/\mathcal{F}\!\!\!/\mathcal{F}\!\!\!/\mathcal{F}\!\!\!/\mathcal{F}\!\!\!/\) between strict slices. And it is well known that such change of base functor has a right adjoint \(\tau_{*}\) (and automatically a right \(2\)-adjoint) precisely when \(\tau\) is a Conduc
functor (as the latter functors are the exponentiable morphisms in \(\mathcal{CAT}\)). So that when \(\tau\) is Conduche then \(\tau^{*}\) is nicely behaved, preserving all the weighted \(2\)-colimits.
However, Section 2 showed that, in order to generalize the calculus of colimits in \(1\)-dimensional slices to dimension \(2\), one needs to consider lax slices. And it is then helpful to have a change of base \(2\)-functor between lax slices of a finitely complete \(2\)-category. We believe that the most natural way to achieve this is by calculating comma objects rather than pullbacks. This is also connected to the construction of the category of elements, as we have described in our [11], but also, in general, to the concept of \(2\)-dimensional elementary topos (see Weber's [16]). Equivalently to calculating comma objects, we can take pullbacks along split Grothendieck opfibrations (that serve as a kind of fibrant replacement), see Proposition 5.1. Such a point of view is preferable in the context of this section, since Grothendieck opfibrations in \(\mathcal{CAT}\) are always Conduche and we can generalize the ideas for finding a right adjoint to the pullback functor \(\tau^{*}\colon\mathcal{CAT}\left/\mathcal{B}\right.\to\mathcal{CAT}\left/ \mathcal{E}\right.\) (see Conduche's [4]) to lax slices.
We take Street's [13] and Weber's [16] as main references for Grothendieck opfibrations in a general \(2\)-category. We prove that if \(\tau\colon\mathcal{E}\to\mathcal{B}\) is a split Grothendieck opfibration in a \(2\)-category \(\mathcal{K}\), then pulling back along \(\tau\) extends to a \(2\)-functor \(\tau^{*}\colon\mathcal{K}_{/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!\!/\!\!\!/\!\!\!/ \!\!\!/\!\!\!/\!\!\!/\!\!\!/\!\!\!/\!\!\!/\!\!\!/\!\!\!/\!\!\!/\!\!\!/\!\! \!/\!\!\!/\!\!\!/\!\!\!/\!\!/\!\!\!/\!\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\!\! \!/\!\!/\!\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\! \!/\!\!/\!/\!\!/\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\! \!/\!/\!\!/\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\! \!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\! \!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\! \!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\! \!/\!\!/\!\!/\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!\!/\!\!/\!\!/\!\!/\
_Moreover, considering the canonical \(\mathcal{F}\)-category structure on the lax slice described in Remark 3.4_ (_that is, the loose part of the lax slice is itself and its tight part is given by the strict slice_), \(\tau^{*}\) is an \(\mathcal{F}\)-functor._
Proof.: Given a morphism \(F\colon\mathcal{A}\to\mathcal{B}\) in \(\mathcal{K}\), we define \(\tau^{*}F\) as the upper morphism of the chosen pullback square in \(\mathcal{K}\) on the left below. Given then a morphism in \(\mathcal{K}\)/\({}_{\text{\rm\!lax}}\)\(\mathcal{B}\) as in the middle below, we can lift the \(2\)-cell in \(\mathcal{K}\) on the right below
along the Grothendieck opfibration \(\tau\), producing the chosen cartesian \(2\)-cell \(\tau^{*}\alpha\colon\tau^{*}F\Rightarrow V\colon\mathcal{P}\to\mathcal{E}\) (in the cleavage) with \(\tau\circ V=F^{\prime}\circ\widehat{\alpha}\circ F^{*}\tau\) and \(\tau*\tau^{*}\alpha=\alpha*F^{*}\tau\). Using then the universal property of the pullback \(\mathcal{P}^{\prime}\) of \(\tau\) and \(F^{\prime}\) we can factorize \(V\) through \(\tau^{*}F^{\prime}\), obtaining a morphism \(\widehat{\tau^{*}\alpha}\colon\mathcal{P}\to\mathcal{P}^{\prime}\). We define \(\tau^{*}\alpha\) to be the upper triangle in the following commutative solid:
It is straightforward to check that \(\tau^{*}\) is functorial, since \(\tau\) is a split Grothendieck opfibration. For this, remember that a cleavage is the choice of a left adjoint to \(\eta_{\tau}\colon\mathcal{E}\to\tau\)/\(\mathcal{B}\), where the latter is the morphism induced by the identity \(2\)-cell on \(\tau\). Such a choice then determines the liftings of the Grothendieck opfibrations \((\tau\circ-)\colon\mathcal{K}(\mathcal{X},\mathcal{E})\to\mathcal{K}( \mathcal{X},\mathcal{B})\) in \(\mathcal{C}\mathcal{A}\mathcal{T}\) that we have for every \(\mathcal{X}\in\mathcal{K}\), by using the universal property of \(\tau\)/\(\mathcal{B}\) (to factorize the \(2\)-cells we want to lift). So notice that taking \((\widehat{\alpha},\alpha)\colon F\to F^{\prime}\) as above and \((\widehat{\beta},\beta)\colon F^{\prime}\to F^{\prime\prime}\) in \(\mathcal{K}\)/\({}_{\text{\rm\!lax}}\)\(\mathcal{B}\) we have that the chosen cartesian lifting of \(\beta*(\widehat{\alpha}\circ F^{*}\tau)=(\beta*(F^{\prime})^{*}\tau)*\widehat{ \tau^{*}\alpha}\) needs to coincide with \(\tau^{*}\beta*\widehat{\tau^{*}\alpha}\).
Given a \(2\)-cell \(\delta\colon(\widehat{\alpha},\alpha)\to(\widehat{\beta},\beta)\colon F\to F ^{\prime}\) in \(\mathcal{K}\)/\({}_{\text{\rm\!lax}}\)\(\mathcal{B}\), we define \(\tau^{*}\delta\) to be the chosen cartesian lifting of the \(2\)-cell \(\delta*F^{*}\tau\) along the Grothendieck opfibration \((F^{\prime})^{*}\tau\), where the latter has the cleavage induced by the cleavage of \(\tau\). Using then that \(\tau\) is split, together with the definition of \(2\)-cell in \(\mathcal{K}\)/\({}_{\text{\rm\!lax}}\)\(\mathcal{B}\) and the universal property of the pullback \(\mathcal{P}^{\prime}\), we obtain that the codomain of \(\tau^{*}\delta\) is indeed \(\widehat{\tau^{*}\beta}\) and that \(\tau^{*}\delta\) is a \(2\)-cell in \(\mathcal{K}\)/\({}_{\text{\rm\!lax}}\)\(\mathcal{E}\) from \(\tau^{*}\alpha\) to \(\tau^{*}\beta\). It is then straightforward to check that \(\tau^{*}\) is a \(2\)-functor, using that \((F^{\prime})^{*}\tau\) is split.
Finally, consider on both \(\mathcal{K}\)/\({}_{\text{\rm\!lax}}\)\(\mathcal{B}\) and \(\mathcal{K}\)/\({}_{\text{\rm\!lax}}\)\(\mathcal{E}\) the canonical \(\mathcal{F}\)-category structure described in Remark 3.4. Since the lifting of an identity \(2\)-cell through a split Grothendieck opfibration is always an identity, then \(\tau^{*}\) is an \(\mathcal{F}\)-functor.
**Theorem 5.3**.: _Let \(\tau\colon\mathcal{E}\to\mathcal{B}\) be a split Grothendieck opfibration in \(\mathcal{C}\mathcal{A}\mathcal{T}\). Then the \(\mathcal{F}\)-functor_
\[\tau^{*}\colon\mathcal{C}\mathcal{A}\mathcal{T}\mathbin{/_{\!\mathrm{lax}}} \mathcal{B}\to\mathcal{C}\mathcal{A}\mathcal{T}\mathbin{/_{\!\mathrm{lax}}} \mathcal{E}\]
_has a strict right semi-lax loose right \(\mathcal{F}\)-adjoint._
_As a consequence, by Theorem 4.11, \(\tau^{*}\) preserves all the universal oplax normal \(2\)-cocylinders for an \(\mathcal{F}\)-diagram which have tight \(\lambda\)-components._
Proof.: We use Proposition 4.3 (universal mapping property that characterizes a lax adjunction) to build a right semi-lax right adjoint \(\tau_{*}\colon\mathcal{C}\mathcal{A}\mathcal{T}\mathbin{/_{\!\mathrm{lax}}} \mathcal{E}\to\mathcal{C}\mathcal{A}\mathcal{T}\mathbin{/_{\!\mathrm{lax}}} \mathcal{B}\) to \(\tau^{*}\). We will generalize the ideas of the construction of a right adjoint to the pullback between strict slices (see Conduche's [4] and Palmgren's [12]), using that \(\tau\) is Conduche (being a Grothendieck opfibration). To suit the lax context, we will fill the relevant triangles with general \(2\)-cells.
So, given a morphism \(f\colon X\to X^{\prime}\) in \(\mathcal{B}\), we will need to consider the following pullbacks in \(\mathcal{C}\mathcal{A}\mathcal{T}\)
Notice that \(\tau^{-1}(X)\) is the fibre of \(\tau\) over \(X\). Whereas \(\tau^{-1}(f)\) has three kinds of morphisms, namely the morphisms in \(\mathcal{E}\) over \(\mathrm{id}_{X}\), those over \(\mathrm{id}_{X^{\prime}}\) and those over \(f\colon X\to X^{\prime}\).
Given a functor \(H\colon\mathcal{D}\to\mathcal{E}\), we define \(\tau_{*}H\) as the projection on the first component \(\mathrm{pr}_{1}\colon\mathcal{H}\to\mathcal{B}\), where the category \(\mathcal{H}\) is defined as follows:
_an object_ is a pair \((X,(\widehat{\alpha},\alpha))\) with \(X\in\mathcal{B}\) and \((\widehat{\alpha},\alpha)\) a morphism in \(\mathcal{C}\mathcal{A}\mathcal{T}\mathbin{/_{\!\mathrm{lax}}}\mathcal{E}\)
_a morphism_ \((X,(\widehat{\alpha},\alpha))\to(X^{\prime},(\widehat{\beta},\beta))\) is a pair \((f,(\widehat{\Phi},\Phi))\) with \(f\colon X\to X^{\prime}\) in \(\mathcal{B}\) and \((\widehat{\Phi},\Phi)\) a morphism in \(\mathcal{C}\mathcal{A}\mathcal{T}\mathbin{/_{\!\mathrm{lax}}}\mathcal{E}\) as on the left below such that \(\Phi*\widetilde{0}=\alpha\) and \(\Phi*\widetilde{1}=\beta\)
So the only data of \((\widehat{\Phi},\Phi)\) that are not already determined by its domain and its codomain are the assignments of \(\widehat{\Phi}\) on the morphisms of \(\tau^{-1}(f)\) that correspond to morphisms \(g\colon E\to E^{\prime}\) in \(\mathcal{E}\) over \(f\colon X\to X^{\prime}\), and we have that such assignments produce a morphism in \(\mathcal{H}\) precisely when they organize into a functor \(\widehat{\Phi}\) such that
for every \(g\colon E\to E^{\prime}\) in \(\mathds{Z}\) over \(f\colon X\to X^{\prime}\) the following square is commutative:
_the identity on \((X,(\widehat{\alpha},\alpha))\)_ is the pair \((\operatorname{id}_{X},(\widehat{\operatorname{id}_{\alpha}},\operatorname{id} _{\alpha}))\) determined by
\[\widehat{\operatorname{id}_{\alpha}}(0\xrightarrow{}1,g)=\widehat{\alpha}(g);\]
_the composition of \((f,(\widehat{\Phi},\Phi))\) and \((f^{\prime},(\widehat{\Phi}^{\prime},\Phi^{\prime}))\)_ has first component \(f^{\prime}\circ f\) and second component determined by sending \(g\colon E\to E^{\prime}\) over \(X\xrightarrow{f}X^{\prime}\xrightarrow{f^{\prime}}X^{\prime\prime}\) to
\[\widehat{\Phi}^{\prime}(1\xrightarrow{}2,g_{2})\circ\widehat{\Phi}(0 \xrightarrow{}1,g_{1})\]
where \(E\xrightarrow{g_{1}}Z\xrightarrow{g_{2}}E^{\prime}\) is a factorization of \(g\) over \(X\xrightarrow{f}X^{\prime}\xrightarrow{f^{\prime}}X^{\prime\prime}\) obtained by the fact that \(\tau\) is a Conduche functor. Notice that such assignment is independent from the choice of the factorization because \(\widehat{\Phi}\) and \(\widehat{\Phi}^{\prime}\) need to agree on any morphism \((1\mathrel{\hbox to 0.0pt{\vbox{\hrule height 0.4pt width 100 \hbox{\vrule width 0.4pt height 6.0pt depth 0.0pt\kern 6.0pt\vrule width 0.4pt} \hrule height 0.4pt width 100%}}}1,h)\), since the codomain of the former, that equals the domain of the latter, determines their images. Moreover it is immediate to check that this gives a morphism in \(\mathcal{H}\), pasting the two commutative squares for \(g_{1}\) and \(g_{2}\).
It is straightforward to check that \(\mathcal{H}\) is a category, and \(\tau_{*}H\) is then surely a functor.
We define the counit \(\varepsilon\) on \(H\) as the morphism in \(\mathcal{C}\mathcal{A}\mathcal{T}/_{\operatorname{lax}}\mathds{Z}\)
given by the evaluation, as follows. An object of \(\mathcal{N}\) is a pair \(((X,(\widehat{\alpha},\alpha)),E)\) with \((X,(\widehat{\alpha},\alpha))\in\mathcal{H}\) and \(E\in\tau^{-1}(X)\), whereas a morphism in \(\mathcal{N}\) is a pair \(((f,(\widehat{\Phi},\Phi)),g)\) with \((f,(\widehat{\Phi},\Phi))\) a morphism in \(\mathcal{H}\) and \(g\colon E\to E^{\prime}\) in \(\mathds{Z}\) over \(f\). We define
\[\widehat{\varepsilon_{H}}((f,(\widehat{\Phi},\Phi)),g)\coloneqq\widehat{ \Phi}(0\xrightarrow{}1,g)\]
\[(\varepsilon_{H})_{((X,(\widehat{\alpha},\alpha)),E)}\coloneqq\alpha_{E}\]
Then \(\widehat{\varepsilon_{H}}\) is readily seen to be a functor, and \(\varepsilon_{H}\) is a natural transformation thanks to the commutative square that a morphism in \(\mathcal{H}\) needs to satisfy. Notice, however, that \(\varepsilon_{H}\) is not tight, so that we can only hope to obtain a loose adjunction.
We prove that \(\varepsilon_{H}\) is universal in the lax sense. So take a functor \(F\colon\mathcal{A}\to\mathcal{B}\) and a morphism in \(\mathcal{C}\mathcal{A}\mathcal{T}/_{\operatorname{lax}}\mathds{Z}\)
Wishing to obtain a right semi-lax loose \(\mathcal{F}\)-adjunction, we search for a tight morphism in \(\mathcal{C}\!\mathcal{A}\!T\,\mbox{$\not\!\!/$}_{\mbox{\scriptsize{lax}}}\,\mathcal{B}\) as on the left below that satisfies the equality of diagrams on the right
(7)
so that we can take \(\lambda_{(\widehat{\gamma},\gamma)}=\mathrm{id}\). Given \(a\colon A\to A^{\prime}\) in \(\mathcal{A}\), we have \(\widehat{\gamma}(A)=(F(A),(\widehat{\alpha},\alpha))\) and \(\widehat{\gamma}(a)=(F(a),(\widehat{\Phi},\Phi))\) with
And given \(g\colon E\to E^{\prime}\) in \(\mathcal{E}\) over \(F(a)\colon F(A)\to F(A^{\prime})\), then \((a,g)\colon(A,E)\to(A^{\prime},E^{\prime})\) is a morphism in \(\mathcal{P}\). So we want to define
\[\widehat{\alpha}(E)=\widehat{\varepsilon}_{H}(\widehat{\gamma}(A),E)= \widehat{\varepsilon_{H}}(\widehat{r^{*}\widehat{\gamma}}(A,E))=\widehat{ \gamma}(A,E)\]
\[\widehat{\Phi}(0\xrightarrow{}1,g)=\widehat{\varepsilon_{H}}(\widehat{ \gamma}(a),g)=\widehat{\varepsilon_{H}}(\widehat{r^{*}\widehat{\gamma}}(a,g) )=\widehat{\gamma}(a,g)\]
\[\alpha_{E}=(\varepsilon_{H})_{(\widehat{\gamma}(A),E)}=(\varepsilon_{H})_{( \widehat{r^{*}\widehat{\gamma}}(A,E))}=\gamma_{(A,E)}\]
Taking a morphism \(g^{\prime}\colon E\to E^{\prime}\) in \(\tau^{-1}(F(A))\),
\[\widehat{\alpha}(g^{\prime})=\widehat{\mathrm{id}_{\alpha}}(0\xrightarrow{}1,g^{\prime})=\widehat{\varepsilon_{H}}(\widehat{\gamma}(\mathrm{id}_{A}),g^ {\prime})=\widehat{\varepsilon_{H}}(\widehat{r^{*}\widehat{\gamma}}(\mathrm{id }_{A},g^{\prime}))=\widehat{\gamma}(\mathrm{id}_{A},g^{\prime})\]
It is straightforward to check that this defines a functor \(\widehat{\gamma}\) as in the left part of equation (7); \(\widehat{\gamma}\) satisfies the equality in the right part of the same equation by construction. Take then \(\lambda_{(\widehat{\gamma},\gamma)}=\mathrm{id}\).
Given another morphism in \(\mathcal{C}\!\mathcal{A}\!T\,\mbox{$\not\!\!/$}_{\mbox{\scriptsize{lax}}}\, \mathcal{B}\)
and a 2-cell \(\Xi\colon(\widehat{\gamma},\gamma)\xrightarrow{}(\widehat{\varepsilon_{H}}, \varepsilon_{H})\circ(\widehat{r^{*}\xi},\tau^{*}\xi)\) in \(\mathcal{C}\!\mathcal{A}\!T\,\mbox{$\not\!\!/$}_{\mbox{\scriptsize{lax}}}\, \mathcal{E}\), we prove that there is a unique 2-cell \(\delta\colon(\widehat{\gamma},\mathrm{id})\xrightarrow{}(\widehat{\xi},\xi)\) in \(\mathcal{C}\!\mathcal{A}\!T\,\mbox{$\not\!\!/$}_{\mbox{\scriptsize{lax}}}\, \mathcal{B}\) such that
\[(\widehat{\varepsilon_{H}},\varepsilon_{H})*\tau^{*}\delta\circ\mathrm{id}=\Xi. \tag{8}\]
In order for \(\delta\) to be a 2-cell \((\widehat{\gamma},\mathrm{id})\xrightarrow{}(\widehat{\xi},\xi)\),
\[\tau_{*}H*\delta=\xi.\]
Whereas the request of equation (8) translates as
\[\widehat{\varepsilon_{H}}*\tau^{*}\delta=\Xi.\]
So, for every \(A\in\mathcal{A}\), the component \(\delta_{A}\colon\widehat{\mathfrak{T}}(A)\to\widehat{\xi}(A)\) needs to be the morphism in \(\mathcal{H}\) with first component \(\xi_{A}\) and second component
given as follows. For every \(g\colon E\to E^{\prime}\) over \(\xi_{A}\), factorizing \(g\) as the cartesian morphism \(\operatorname{\mathrm{Cart}}\left(\xi_{A},E\right)\) in the cleavage of the Grothendieck opfibration \(\tau\) over \(\xi_{A}\) to \(E\) followed by the unique induced vertical morphism \(g_{\mathrm{vert}}\),
\[\widehat{\delta_{A}}(0\to 1,g)=\widehat{\delta_{A}}(1 \Longrightarrow 1,g_{\mathrm{vert}})\circ\widehat{\delta_{A}}(0\to 1, \operatorname{\mathrm{Cart}}\left(\xi_{A},E\right))=\] \[=\widehat{\widehat{\xi}(A)}(g_{\mathrm{vert}})\circ\widehat{ \varepsilon_{H}}(\delta_{A},\operatorname{\mathrm{Cart}}\left(\xi_{A},E \right))=\widehat{\widehat{\xi}(A)}(g_{\mathrm{vert}})\circ\widehat{ \varepsilon_{H}}\left((\tau^{*}\delta)_{A,E}\right)=\] \[=\widehat{\widehat{\xi}(A)}(g_{\mathrm{vert}})\circ\Xi_{A,E}\]
It is straightforward to prove that \(\delta_{A}\) is a morphism in \(\mathcal{H}\) and that \(\delta\) is a natural transformation, using the uniqueness of the morphisms induced by cartesian liftings. \(\delta\) is then a \(2\)-cell in \(\mathcal{C}\mathcal{A}T\mathbin{/_{\mathrm{lax}}}\mathcal{B}\) such that
\[(\widehat{\varepsilon_{H}},\varepsilon_{H})*\tau^{*}\delta\circ\operatorname {id}=\Xi\]
by construction.
Considering \((\widehat{\gamma},\gamma)=(\widehat{\varepsilon_{H}},\varepsilon_{H})\), we immediately see that we obtain \(\widehat{\widehat{\varepsilon_{H}}}=\operatorname{id}\), because \((\widehat{\varepsilon_{H}},\varepsilon_{H})\) is the evaluation.
Moreover, for every functor \(F\colon\mathcal{A}\to\mathcal{B}\) and morphism in \(\mathcal{C}\mathcal{A}T\mathbin{/_{\mathrm{lax}}}\mathcal{E}\)
we prove that
\[\overline{((\widehat{\gamma},\gamma)\circ(\widehat{\varepsilon_{H}}, \varepsilon_{H}))}\circ\overline{\operatorname{id}_{\tau^{*}F}}=(\widehat{ \gamma},\gamma). \tag{9}\]
\(\overline{\operatorname{id}_{\tau^{*}F}}=(\widehat{\eta_{F}},\operatorname{id})\), that will be the unit \(\eta_{F}\), is such that, for every \(a\colon A\to A^{\prime}\) in \(\mathcal{A}\), morphism \(g^{\prime}\) in \(\tau^{-1}(F(A))\) and \(g\colon E\to E^{\prime}\) in \(\mathcal{E}\) over \(F(a)\colon F(A)\to F(A^{\prime})\),
\[\widehat{\widehat{\eta_{F}}(A)}(E)=(A,E)\qquad\qquad\widehat{\widehat{\eta_ {F}}(A)}(g^{\prime})=(\operatorname{id}_{A},g^{\prime})\]
\[\widehat{\widehat{\eta_{F}}(a)}(0\to 1,g)=(a,g)\qquad\qquad\widehat{\eta_{F}}(A) _{E}=\operatorname{id}\]
Whereas for a general \((\widehat{\psi},\psi)\colon G\to H\) in \(\mathcal{C}\mathcal{A}T\mathbin{/_{\mathrm{lax}}}\mathcal{E}\), the morphism
\[\overline{\left((\widehat{\psi},\psi)\circ(\widehat{\varepsilon_{H}}, \varepsilon_{H})\right)}=(\widehat{\tau_{*}\psi},\operatorname{id})\]
will be the action of \(\tau_{*}\) on the morphism \((\widehat{\psi},\psi)\), and is such that \(\widehat{\tau_{*}\psi}\) acts by postcomposing the triangles with \((\widehat{\psi},\psi)\). Thus equation (9) holds.
By Proposition 4.3, as \(\lambda\) is always the identity, then \(\tau_{*}\) extends to an oplax functor, that can be easily checked to be a \(2\)-functor (it acts by postcomposition), \(\varepsilon\) extends to a \(2\)-natural transformation, \(\eta\) extends to a lax natural transformation and there exists a
modification \(t\) such that \(\tau_{*}\) is a right semi-lax right adjoint to \(\tau^{*}\). It is easy to check that \(t\) is the identity.
Since \(\widehat{(\widehat{\gamma},\gamma)}\) is always tight, then \(\tau_{*}\) is an \(\mathcal{F}\)-functor and \(\eta\) has tight components. It remains to show that \(\eta\) is loose strict/lax \(\mathcal{F}\)-natural. Given a morphism in \(\mathcal{C}\mathcal{A}\mathcal{T}\,/_{\!\!\mathrm{lax}}\mathcal{B}\)
the component on \(A\in\mathcal{A}\) of the structure \(2\)-cell \(\eta_{(\widehat{\sigma},\sigma)}\) is the morphism in the domain of \(\tau_{*}\tau^{*}F^{\prime}\) with first component \(\sigma_{A}\colon F(A)\to F^{\prime}(\widehat{\sigma}(A))\) in \(\mathcal{B}\) and second component given by
\[\widehat{\eta_{(\widehat{\sigma},\sigma),A}}(0\xrightarrow{}1,g)=\widehat{ \eta_{F^{\prime}}(\widehat{\sigma}(A))}(g_{\mathrm{vert}})=\left(\mathrm{id}_ {\widehat{\sigma}(A)},g_{\mathrm{vert}}\right).\]
When \((\widehat{\sigma},\sigma)\) is tight, so when the \(2\)-cell \(\sigma\) is the identity, then \(g=g_{\mathrm{vert}}\) because \(\tau\) has a normal cleavage, and we find \(\eta_{(\widehat{\sigma},\sigma)}=\mathrm{id}\). Thus \(\eta\) is strict/lax \(\mathcal{F}\)-natural. We conclude that \(\tau_{*}\) is a strict right semi-lax loose right \(\mathcal{F}\)-adjoint to \(\tau^{*}\).
**Remark 5.4**.: We have actually proved in Theorem 5.3 that \(\tau_{*}\) sends every morphism in \(\mathcal{C}\mathcal{A}\mathcal{T}\,/_{\!\!\mathrm{lax}}\mathcal{B}\) to a tight one in \(\mathcal{C}\mathcal{A}\mathcal{T}\,/_{\!\!\mathrm{lax}}\mathcal{B}\). So \(\tau_{*}\colon\mathcal{C}\mathcal{A}\mathcal{T}\,/_{\!\!\mathrm{lax}}\mathcal{ F}\to\mathcal{C}\mathcal{A}\mathcal{T}\,/_{\!\!\mathrm{lax}}\mathcal{B}\) is still an \(\mathcal{F}\)-functor if we take the trivial \(\mathcal{F}\)-category structure on the join, i.e. taking everything to be tight, and the canonical one in the codomain. Of course, with such a choice of \(\mathcal{F}\)-category structures, \(\tau^{*}\) remains an \(\mathcal{F}\)-functor and \(\eta\) remains with tight components. But \(\varepsilon\) becomes tight, having now tight components trivially.
So we find a strict right semi-lax (tight) \(\mathcal{F}\)-adjunction between \(\tau^{*}\) and \(\tau_{*}\). But Theorem 4.11 does not add anything to the preservation of \(2\)-colimits we have already proved in Theorem 5.3, since it would consider strict/olax \(\mathcal{F}\)-colimits in an \(\mathcal{F}\)-category with trivial \(\mathcal{F}\)-category structure.
**Remark 5.5**.: We now extend the result of preservation of \(2\)-colimits that we have proved for the \(2\)-functor
\[\tau^{*}\colon\mathcal{K}\,/_{\!\!\mathrm{lax}}\mathcal{B}\to\mathcal{K}\,/_{ \!\!\mathrm{lax}}\mathcal{E}\]
when \(\mathcal{K}=\mathcal{C}\mathcal{A}\mathcal{T}\) (Theorem 5.3) to \(\mathcal{K}=[\,\mathcal{L}^{\mathrm{op}},\mathcal{C}\mathcal{A}\mathcal{T}]\) a \(2\)-category of \(2\)-dimensional presheaves. Remember that any weighted \(2\)-colimit can be reduced to an oplax normal conical one.
**Proposition 5.6**.: _Let \(\mathcal{L}\) be a small \(2\)-category and let \(\tau\colon\mathcal{E}\to\mathcal{B}\) be a split Grothendieck opfibration in \([\,\mathcal{L}^{\mathrm{op}},\mathcal{C}\mathcal{A}\mathcal{T}]\). Then the \(\mathcal{F}\)-functor_
\[\tau^{*}\colon[\,\mathcal{L}^{\mathrm{op}},\mathcal{C}\mathcal{A}\mathcal{T}] \,/_{\!\!\mathrm{lax}}\mathcal{B}\to[\,\mathcal{L}^{\mathrm{op}},\mathcal{C} \mathcal{A}\mathcal{T}]\,/_{\!\!\mathrm{lax}}\mathcal{E}\]
(_which is such by Proposition 5.2) preserves all the universal oplax normal \(2\)-cocones for an \(\mathcal{F}\)-diagram which have tight components, where normal is with respect to the Grothendieck construction._
Proof.: Consider a marking \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}\mathcal{T}\) with \(\mathcal{A}\) a small \(2\)-category and an \(\mathcal{F}\)-diagram \(H\colon\int\!W\to[\,\mathcal{L}^{\mathrm{op}},\mathcal{C}\mathcal{A}\mathcal{T} \,/_{\!\!\mathrm{lax}}\mathcal{B}\) (that is, a \(2\)-functor that sends every morphism of the kind \((f,\mathrm{id})\) to a triangle filled with an identity). Let then
\[\zeta\colon\Delta 1\xrightarrow[\,\mathrm{op},\,\mathcal{C}\mathcal{A}\mathcal{T} \,]\,/_{\!\!\mathrm{lax}}\mathcal{B}\,(H(-),C)\]
be a universal oplax normal \(2\)-cocone that exhibits \(C=\mathrm{oplax}^{\mathrm{n}}\,\)-\(\mathrm{colim}^{\Delta 1}H\) such that \(\zeta_{(A,X)}\) is tight for every \((A,X)\in\int\!W\) (which means that it is a triangle filled with
an identity). We want to prove that \(\tau^{*}\circ\zeta\) is universal as well, exhibiting \(\tau^{*}(C)=\operatorname{oplax}^{\operatorname{n}}\operatorname{-colim}^{ \Delta 1}(\tau^{*}\circ H)\).
Since the \(\zeta_{(A,X)}\)'s are all cartesian, as they are tight, and \(\tau^{*}\) is an \(\mathcal{F}\)-functor, by Theorem 2.20 (the domain \(2\)-functor from a lax slice is a \(2\)-colim-fibration), we know that \(\operatorname{dom}\colon[\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]/_{ \operatorname{\!lax}}\not\!\!E\to[\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]\) reflects the universality of \(\tau^{*}\circ\zeta\). But the \(2\)-functors \((-)(L)\colon\,[\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]\to\mathcal{CAT}\) of evaluation on \(L\in\mathcal{L}\) are jointly reflective (\(2\)-colimits in \(2\)-presheaves are calculated pointwise). Therefore, in order to prove that \(\tau^{*}\circ\zeta\) is universal, it suffices to show that, for every \(L\in\mathcal{L}\), the oplax normal \(2\)-cocone \((-)(L)\circ\operatorname{dom}\circ\tau^{*}\circ\zeta\) is universal. Notice now that the diagram of \(2\)-functions
is commutative, where \((-)_{L}\) is the \(\mathcal{F}\)-functor that takes components on \(L\), because pullbacks in \([\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]\) are calculated pointwise and the components of the liftings along \(\tau\) are the liftings of the components of \(\tau\). Indeed every component \(\tau_{L}\) of \(\tau\) is a split Grothendieck opfibration in \(\mathcal{CAT}\) because \(\tau\circ-\colon[\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]\,(\operatorname {y}(L),\not\!\!E)\to[\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]\,( \operatorname{y}(L),\not\!\!B)\) is so, taking on the former the cleavage induced by the latter. And since a cleavage for \(\tau\) is the choice of a left adjoint to \(\eta_{\tau}\colon\not\!\!E\to\tau/\not\!\!B\) (where the latter is the morphism induced by the identity \(2\)-cell on \(\tau\)), the cleavages determined on the Grothendieck opfibrations \((\tau\circ-)\colon[\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]\,(\not\!\!E )\to[\mathcal{L}^{\operatorname{op}},\mathcal{CAT}]\,(\not\!\!E,\not\!B)\) in \(\mathcal{CAT}\) (by applying the universal property of \(\tau/\not\!\!B\)) are compatible.
We prove that \(\operatorname{dom}\circ(\tau_{L})^{*}\circ(-)_{L}\circ\zeta\) is universal. We have that \((-)_{L}\circ\zeta\) is universal because it suffices to check that \(\operatorname{dom}\circ(-)_{L}\circ\zeta=(-)(L)\circ\operatorname{dom}\circ\zeta\) is so, by Theorem 2.20, as \(\zeta\) has tight components and \((-)_{L}\) is an \(\mathcal{F}\)-functor. And \(\operatorname{dom}\) preserves the universality of \(\zeta\) by Theorem 4.13 (thanks to the hypothesis), while \((-)(L)\) preserves every \(2\)-colimit. Then \(\operatorname{dom}\circ(\tau_{L})^{*}\circ(-)_{L}\circ\zeta\) is universal applying Theorem 5.3 and Theorem 4.13, thanks to the hypothesis and to the fact that both \((-)_{L}\) and \(\left(\tau_{L}\right)^{*}\) are \(\mathcal{F}\)-functors.
**Remark 5.7**.: We conclude extending again the result of preservation of \(2\)-colimits for \(\tau^{*}\colon\not\!\!K\!/_{\operatorname{\!lax}}\not\!\!B\to\not\!\!K\!/_{ \operatorname{\!lax}}\not\!\!E\), to \(\not\!\!K\) any finitely complete \(2\)-category with a dense generator. For this, we will need to restrict to relatively absolute \(2\)-colimits. But remember that any object of \(\not\!\!K\) can be expressed as a relatively absolute \(2\)-colimit of dense generators, so that our assumption is not much restrictive in practice. We take Kelly's [8] as the main reference for dense functors.
**Theorem 5.8**.: _Let \(\not\!\!K\!\) be a finitely complete \(2\)-category and let \(J\colon\mathcal{L}\to\not\!\!K\!\) be a fully faithful dense \(2\)-functor. Consider \(\tau\colon\not\!\!E\to\not\!\!B\) a split Grothendieck opfibration in \(\not\!\!K\!\). Then the \(\mathcal{F}\)-functor_
\[\tau^{*}\colon\not\!\!K\!/_{\operatorname{\!lax}}\not\!\!B\to\not\!K\!/_{ \operatorname{\!lax}}\not\!\!E\]
(_which is such by Proposition 5.2_) preserves all the universal oplax normal \(2\)-cocones for an \(\mathcal{F}\)-diagram which have tight components and whose domain is \(J\)-absolute, where normal is with respect to the Grothendieck construction._
Proof.: Let \(\not\!\!A\) be a small \(2\)-category and consider a marking \(W\colon\not\!\!A^{\operatorname{op}}\to\mathcal{CAT}\) and an \(\mathcal{F}\)-diagram \(H\colon\,\int\!\!W\to\not\!\!K\!/_{\operatorname{\!lax}}\not\!\!B\). Let then
\[\zeta\colon\Delta 1\xrightarrow[\operatorname{oplax}^{\operatorname{n}}]{\operatorname{ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
be a universal oplax normal \(2\)-cocone that exhibits \(C=\operatorname{oplax}^{\operatorname{n}}\operatorname{-colim}^{\Delta 1}H\) such that \(\zeta_{(A,X)}\) is tight for every \((A,X)\in\int\!W\). Assume also that \(\operatorname{dom}\circ\zeta\) is \(J\)-absolute, i.e. preserved by \(\widetilde{J}\colon\,\mathcal{K}\to[L^{\operatorname{op}},\,\mathcal{C} \mathcal{A}\!T]\). Notice that \(\operatorname{dom}\circ\zeta\) is indeed universal by Theorem 4.13. We want to prove that \(\tau^{*}\circ\zeta\) is universal as well, exhibiting \(\tau^{*}(C)=\operatorname{oplax}^{\operatorname{n}}\operatorname{-colim}^{ \Delta 1}(\tau^{*}\circ H)\).
Since the \(\zeta_{(A,X)}\)'s are all cartesian (as they are tight) and \(\tau^{*}\) is an \(\mathcal{F}\)-functor, by Theorem 2.20, we know that \(\operatorname{dom}\colon\,\mathcal{K}/_{\!\!\operatorname{lax}}\,\,\mathcal{ L}\to\mathcal{K}\) reflects the universality of \(\tau^{*}\circ\zeta\). Moreover, by definition of dense functor, \(\widetilde{J}\) is fully faithful and hence reflects any \(2\)-colimit; and the \(2\)-functors \((-)(L)\colon\,[L^{\operatorname{op}},\mathcal{C}\mathcal{A}\!T]\to\mathcal{C} \mathcal{A}\!T\) of evaluation on \(L\in\mathcal{L}\) are jointly reflective. Therefore, in order to prove that \(\tau^{*}\circ\zeta\) is universal, it suffices to show that, for every \(L\in\mathcal{L}\), the oplax normal \(2\)-cocone \((-)(L)\circ\widetilde{J}\circ\operatorname{dom}\circ\tau^{*}\circ\zeta\) is universal.
Notice now that the diagram of \(2\)-functors
is commutative, where \(\widetilde{J}/_{\!\!\operatorname{lax}}\,\,\) is the \(\mathcal{F}\)-functor that applies \(\widetilde{J}\) on morphisms and triangles. Indeed \(\,\mathcal{K}\!\left(J(L),-\right)\) preserves pullbacks, and since a cleavage for \(\tau\) is the choice of a left adjoint to \(\eta_{\tau}\colon\,\mathcal{E}\to\tau\,/\!\!\mathcal{B}\), the cleavages determined on the Grothendieck opfibrations \((\tau\circ-)\colon[L^{\operatorname{op}},\mathcal{C}\mathcal{A}\!T]\left(X, \mathcal{E}\right)\to[L^{\operatorname{op}},\mathcal{C}\mathcal{A}\!T]\left(X, \mathcal{B}\right)\) in \(\mathcal{C}\mathcal{A}\!T\) (by applying the universal property of \(\tau/\!\!\mathcal{B}\)) are compatible.
We prove that \(\operatorname{dom}\circ(\tau\circ-)^{*}\circ(-)_{L}\circ\widetilde{J}/_{\! \!\operatorname{lax}}\,\circ\zeta\) is universal. We have that \(\widetilde{J}/_{\!\!\operatorname{lax}}\,\circ\zeta\) is universal, since it suffices to check that \(\operatorname{dom}\circ\widetilde{J}/_{\!\!\operatorname{lax}}\,\circ\zeta\) is so, by Theorem 2.20, as \(\zeta\) has tight components and \(\widetilde{J}/_{\!\!\operatorname{lax}}\,\,\) is an \(\mathcal{F}\)-functor. And \(\operatorname{dom}\circ\widetilde{J}/_{\!\!\operatorname{lax}}\,\circ\zeta= \widetilde{J}\circ\operatorname{dom}\circ\zeta\) is universal because \(\operatorname{dom}\circ\zeta\) is \(J\)-absolute by hypothesis. Then \((-)_{L}\) preserves the universality of \(\widetilde{J}/_{\!\!\operatorname{lax}}\,\circ\zeta\) because \(\operatorname{dom}\circ(-)_{L}=(-)(L)\circ\operatorname{dom}\) does so. Finally, we obtain that \(\operatorname{dom}\circ(\tau\circ-)^{*}\circ(-)_{L}\circ\widetilde{J}/_{\! \!\operatorname{lax}}\,\circ\zeta\) is universal applying Theorem 5.3 and Theorem 4.13, thanks to the hypothesis and to the fact that \(\widetilde{J}/_{\!\!\operatorname{lax}}\,\,,\,(-)_{L}\) and \((\tau\circ-)^{*}\) are all \(\mathcal{F}\)-functors.
### Acknowledgements
I would like to thank Charles Walker for suggesting the possible use of \(\mathcal{F}\)-categorical techniques to justify my work. Part of this research has been conducted while visiting the University of Manchester.
|
2308.07019 | Positive maps and Entanglement Witnesses in different dimensions | We present a continuous, multiparameter family of positive maps between
spaces of differing dimensions. This framework facilitates the construction of
Entanglement Witnesses (EWs) specifically designed for systems in $d_1\times
d_2$ dimensions. We derive a simple, closed-form criterion for detecting
entanglement in general density matrices based on these witnesses. To
demonstrate the effectiveness of this criterion, we apply it to a range of
Positive Partial Transpose (PPT) entangled states, revealing that the parameter
regions where these states exhibit entanglement are larger than previously
reported. Furthermore, we prove that non-unital EWs, corresponding to
non-unital maps, are not more powerful than unital EWs, thus supporting the
focus on unital positive maps in recent studies. Our method complements
existing approaches to separability criteria for density matrices in different
dimensions. | Vahid Jannesary, Vahid Karimipour | 2023-08-14T09:07:21Z | http://arxiv.org/abs/2308.07019v5 | # A simple construction of Entanglement Witnesses for arbitrary and different dimensions
###### Abstract
We present a simple approach for generation of a diverse set of positive maps between spaces of different dimensions. The proposed method enables the construction of Entanglement Witnesses tailored for systems in \(d_{1}\times d_{2}\) dimensions. With this method, it is possible to construct Entanglement Witnesses that consist solely of a chosen set of desired measurements. We demonstrate the effectiveness and generality of our approach using concrete examples. It is also demonstrated in two examples, how an appropriate entanglement witness can be identified for witnessing the entanglement of a given state, including a case when the given state is a Positive Partial Transpose (PPT) entangled state.
## Introduction
Entanglement stands out as one of the most remarkable distinctions between the quantum and classical realms, playing a pivotal role in numerous quantum protocols, for review papers see [1, 2, 3]. As a result, the creation, manipulation, application, and identification of entanglement have profound
significance. The identification of entangled states, in particular, presents a formidable challenge, underscored by its classification as an NP-Hard problem in mathematical terms [4]. Hence, the pursuit of viable techniques for detecting entangled states takes on paramount importance.
Entangled states are referred to as states that are not separable, and separable states refer to states that can be written as follows [5]:
\[\rho=\sum p_{i}\rho_{i}^{(1)}\otimes\rho_{i}^{(2)}, \tag{1}\]
where \(0\leq p_{i}\leq 1\) and \(\sum p_{i}=1\). One strategy for identifying entanglement involves investigating whether a state can be expressed in the aforementioned form. This, however, poses a formidable challenge. As a result, there is a pressing need for more effective methodologies for the detection of entanglement. Over time, diverse techniques have emerged for this purpose. The Peres criterion is considered one of the primary methods in this regard [6]: It's relatively straightforward to see that the partial transpose of all separable states is positive. This is seen simply by noting that for any state of the form (1), one has
\[(I\otimes T)(\rho)\geq 0, \tag{2}\]
where \(T\) is the transpose, hence the name partial transpose. Therefore, if the partial transpose of a quantum state is not positive, it must be entangled. However, this criterion alone cannot identify all types of quantum entangled states because there are entangled states known as PPT, whose partial transpose are positive. In fact separable states are a subset of PPT states and only when \(d_{1}d_{2}\leq 6\), these two sets are equal. Here \(d_{1}\) and \(d_{2}\) are the dimensions of the two quantum systems. The effectiveness of the Peres criteria (2) hinges on a basic property of the transpose map \(T\) which is a positive and yet not completely positive map. Therefore it was natural to search for more general types of positive maps lacking the property of complete positivity and thus constructing more general _Entanglement Witnesses_ than the simple partial transpose. In fact, it has been known that any entanglement witness \(W\) can be written in the form
\[W(M)=(I\otimes M)|\phi^{+}\rangle\langle\phi^{+}|, \tag{3}\]
where \(M\) is a positive map and \(|\phi^{+}\rangle=\frac{1}{\sqrt{d}}\sum_{\mu=0}^{d-1}|\mu\mu\rangle\), is a maximally entangled state. In other words, \(W(M)\) is the Choi-matrix of the map \(M\), which
due to its lack of complete positivity can have negative eigenvalue. Such a witness has the property
\[\mathrm{Tr}(W\sigma)\geq 0,\quad\forall\sigma\in S, \tag{4}\]
where \(S\) is the set of all separable states. Alternatively, one can think of \(W\) as a block-positive operator, i.e.
\[\langle\psi,\phi|W|\psi,\phi\rangle,\ \forall\ \psi\in H_{1}\,\forall\ \phi\in H_{2},\]
which has negative eigenvalue when acting on the whole Hilbert space \(H_{1}\otimes H_{2}\). Thus the problem of constructing entanglement witnesses becomes equivalent to constructing positive but not completely positive maps. Based on this equivalence, entanglement witnesses were introduced in many works, see [7, 8, 9, 10] and references therein. For more recent works, see [11, 12, 13]. In practical situations, where the two particles are far apart in remote laboratories, application of any entanglement witnesses found so far, requires a common reference frame between the two parties holding the particles. It was recently shown in [14] that by a process called incomplete teleportation, all these entangled states are still able to detect entanglement of remote parties, even in the absence of a common frame of reference.
In contrast with completely positive maps which due to the Kraus theorem can be easily constructed, construction of positive maps is not a straightforward task, and appropriate methods need to be proposed for this purpose. In addition to the finding new classes of witnesses, the relation of EW's and positive maps, have led to many new insights on both [8]. For example, a basic distinction refers to decomposable witnesses which are of the form
\[W=P+Q^{\Gamma},\]
where \(P,Q\) are positive operators and \(\Gamma\) is the operator of partial transposition. Any witness that cannot be written in the above form is called indecomposable. It can be proved that decomposable witnesses cannot identify PPT states. In this sense these are somewhat weaker than indecomposable witnesses. One can also characterize witnesses by their optimality and extremality [8], a subject which we will not deal with in this work. The line that we follow is inspired by a number of recent works [15, 16, 17, 18, 19, 20] in which specific types of positive maps have been defined and the corresponding witnesses
have been constructed.
It is worth noting that in all mentioned articles, the entanglement witnesses constructed have been designed for two systems of equal dimensions. In this paper, we propose a simple method for identification of entanglement in systems with dimensions of \(d_{1}\times d_{2}\). This dimension-specific applicability enhances the versatility of the method, making it a valuable tool in different practical scenarios. Often, due to practical constraints in laboratory setups, certain measurements may be more accessible or accurate than others. Our method elegantly accommodates this by allowing for the construction of entanglement witnesses based solely on those restricted measurements which are feasible in the given experimental context(See examples).
The structure of this paper is as follows: In Section 1, we examine the main idea of this article, which is to construct a new class of positive maps. In Section 2, an alternative method is utilized to obtain the same results that presented in Section 1. In section 3, we employ singular value decomposition to simplify the presentation of the obtained results. In Section 4, we present several examples each of which elaborate one particular advantage of this construction. Finally, in section 5, we demonstrate how one can construct entanglement witnesses for an arbitrary state using the method described in this paper. We end up the paper with a conclusion.
## 1 Construction of positive maps for arbitrary different dimensions
Consider the Hilbert space \(H_{d_{1}}\) of dimension \(d_{1}\) and let a pure state \(X\in D(H_{d_{1}})\), (the space of states on \(H_{d_{1}}\)) be written as
\[X=\frac{1}{d_{1}}\mathcal{I}_{d_{1}}+\sum_{i=1}^{d_{1}^{2}-1}x_{i}\Gamma_{i}, \tag{5}\]
where \(\{\Gamma_{i},\quad i=1\cdots d_{1}^{2}-1\}\) are a set of orthonormal traceless Hermitian operators \(\in L(H_{d_{1}})\), i.e.
\[\operatorname{Tr}(\Gamma_{i}\Gamma_{j})=\delta_{ij},\quad\operatorname{Tr} \Gamma_{i}=0. \tag{6}\]
Purity of the state \(X\) (i.e. \(\operatorname{Tr}X^{2}=1\)) and the above normalization conditions restricts the norm of the vector \(\mathbf{x}=(x_{1},\cdots x_{d_{1}^{2}-1})\) to
\[\mathbf{x}\cdot\mathbf{x}=\frac{d_{1}-1}{d_{1}}, \tag{7}\]
where \(\mathbf{x}\cdot\mathbf{x}:=\sum_{i=1}^{d_{1}^{2}-1}x_{i}^{2}\).
Let \(\Phi:D(H_{d_{1}})\longrightarrow D(H_{d_{2}})\) be unital and trace-preserving map which acts as follows:
\[\Phi(\frac{1}{d_{1}}\mathcal{I}_{d_{1}})=\frac{1}{d_{2}}\mathcal{I}_{d_{2}} \quad\quad,\quad\Phi(\Gamma_{i})=\frac{1}{d_{2}}\sum_{k=1}^{d_{2}^{2}-1}\Lambda _{ik}\Omega_{k}. \tag{8}\]
In fact, \(\Lambda_{ik}\) are real numbers representing the characterization of the map \(\Phi\). Here, our definition remains consistent with the trace-preserving property of the map, in contrast to the definition presented in [18], which is grounded in the property of such a map that takes an identity matrix in \(D(H_{d_{1}})\) to a identity matrix in \(D(H_{d_{2}})\). The factor \(\frac{1}{d_{2}}\) in the second equation is written for later convenience, and \(\{\Omega_{k},\quad k=1\cdots d_{2}^{2}-1\}\) are a set of orthonormal traceless Hermitian operators \(\in L(H_{d_{2}})\), i.e.
\[\operatorname{Tr}(\Omega_{k}\Omega_{l})=\delta_{kl},\quad\operatorname{Tr} \Omega_{k}=0. \tag{9}\]
Then \(\Phi\) maps \(X\) to
\[\Phi(X)=\frac{1}{d_{2}}\mathcal{I}_{d_{2}}+\frac{1}{d_{2}}\sum_{i=1}^{d_{1}^{ 2}-1}\sum_{k=1}^{d_{2}^{2}-1}x_{i}\Lambda_{ik}\Omega_{k}. \tag{10}\]
**Remark 1**: _Up to now we have assumed that \(X\) is a pure state. This can be extended to arbitrary operatores \(X\in L(H_{d})\) as follows:_
\[\Phi(X)=\frac{\operatorname{Tr}(X)}{d_{2}}\mathcal{I}_{d_{2}}+\frac{1}{d_{2}} \sum_{i=1}^{d_{1}^{2}-1}\sum_{k=1}^{d_{2}^{2}-1}x_{i}\Lambda_{ik}\Omega_{k}. \tag{11}\]
To ensure positivity of the map \(\Phi\), we now use Mehta's Lemma [21] which we state below:
**Mehta's Lemma (1989):** Let \(A\) be a Hermitian matrix of dimension \(D\), if
\[\mbox{Tr}(A^{2})\leq\frac{(\mbox{Tr}\,A)^{2}}{D-1}, \tag{12}\]
then \(A\) is positive.
In view of linearity of the map \(\Phi\), to prove positivity of the map \(\Phi\), it is sufficient to prove that \(\Phi(X)\) is positive for any pure state \(X\). That is the map \(\Phi\) is positive if the following condition holds
\[\mbox{Tr}(\Phi(X)^{2})\leq\frac{(\mbox{Tr}\,\Phi(X))^{2}}{d_{2}-1}=\frac{1}{d_ {2}-1}. \tag{13}\]
To this end, we calculate \(\mbox{Tr}(\Phi(X)^{2})\), which in view of orthonormality condition (8), turns out to be
\[\mbox{Tr}(\Phi(X)^{2})=\frac{1}{d_{2}}+\frac{1}{d_{2}^{2}}\mbox{\bf x}^{T} \Lambda\Lambda^{T}\mbox{\bf x}, \tag{14}\]
where the matrix \(\Lambda\) is composed of \(\Lambda_{ij}\) elements. In view of (12), we demand that
\[\frac{1}{d_{2}}+\frac{1}{d_{2}^{2}}\mbox{\bf x}^{T}\Lambda\Lambda^{T}\mbox{ \bf x}\leq\frac{1}{d_{2}-1},\hskip 28.452756pt\forall\mbox{\bf x}. \tag{15}\]
We can now replace this with an even stronger inequality, namely
\[\frac{1}{d_{2}}+\frac{1}{d_{2}^{2}}\lambda_{max}(\Lambda\Lambda^{T})\mbox{\bf x }\cdot\mbox{\bf x}\leq\frac{1}{d_{2}-1}, \tag{16}\]
where \(\lambda_{max}:=\lambda_{max}(\Lambda\Lambda^{T})\), is the largest eigenvalue of the matrix \(\Lambda\Lambda^{T}\), or equivalently \(\lambda_{max}\) is the square of the largest singular value of the matrix \(\Lambda^{T}\). Inserting (16) in (13), yields the following constraint on \(\lambda_{max}\)
\[\lambda_{max}\leq\frac{d_{1}d_{2}}{(d_{1}-1)(d_{2}-1)}. \tag{17}\]
The EW corresponding to the map (10) is given by [7]
\[W=({\cal I}_{d_{1}}\otimes\Phi)|\beta\rangle\langle\beta|, \tag{18}\]
where \(|\beta\rangle=\frac{1}{\sqrt{d}}\sum_{\mu=0}^{d_{1}-1}|\mu,\mu\rangle\) is a maximally entangled state. To do this calculation, we extend the orthonormal sets \(\{\Gamma_{i}\}\) and \(\{\Omega_{k}\}\) respectively to
orthonormal bases for the space of matrices \(M_{d_{1}}\) and \(M_{d_{2}}\), namely to \(\{\Gamma_{\mu}\}=\{\Gamma_{0}=\frac{\mathcal{I}}{\sqrt{d_{1}}},\Gamma_{i}\}\) and \(\{\Omega_{\mu}\}=\{\Omega_{0}=\frac{\mathcal{I}}{\sqrt{d_{2}}},\Omega_{i}\}\) and taking remark 1 into account, we write
\[W = \frac{1}{d_{1}}\sum_{\mu,\nu}|\mu\rangle\langle\nu|\otimes\Phi(| \mu\rangle\langle\nu|) \tag{19}\] \[= \frac{1}{d_{1}}\sum_{\mu,\nu}|\mu\rangle\langle\nu|\otimes\sum_{ \beta}\Lambda_{\beta\alpha}\operatorname{Tr}(|\mu\rangle\langle\nu|\Gamma_{ \beta})\Omega_{\alpha}\] \[= \frac{1}{d_{1}}\sum_{\alpha,\beta}\Lambda_{\beta\alpha}\Gamma_{ \beta}^{T}\otimes\Omega_{\alpha}.\]
After multiplication by an overall factor \(d_{1}d_{2}\), this leads to the following witness
\[W=\mathcal{I}_{d_{1}}\otimes\mathcal{I}_{d_{2}}+\sum_{i=1}^{d_{1}^{2}-1}\sum _{k=1}^{d_{2}^{2}-1}\Lambda_{ik}\Gamma_{i}^{T}\otimes\Omega_{k}. \tag{20}\]
Since the set \(\Gamma_{i}^{T}\) is also an orthonormal set, we can rename it to \(\Gamma_{i}\). Thus, the following matrix can be a entanglement witness too:
\[W=\mathcal{I}_{d_{1}}\otimes\mathcal{I}_{d_{2}}+\sum_{i=1}^{d_{1}^{2}-1}\sum _{k=1}^{d_{2}^{2}-1}\Lambda_{ik}\Gamma_{i}\otimes\Omega_{k}. \tag{21}\]
Therefore for any two dimensions, any operator of the above form can be an entanglement witness, provided that the condition (17) holds. That condition in fact proves Block positivity of this Witness. Whether or not a \(W\) constructed in this way, has also a negative eigenvalue depends on the matrix \(\Lambda\) that we choose. In all previous papers that were mentioned in the introduction, \(\Lambda\) should be orthogonal. These comprise a much smaller class compared to the \(\Lambda\) matrices in this paper. Moreover, finding orthogonal matrices in high dimensions is not a straightforward task. In this paper, one can use any matrix with any properties to construct entanglement witnesses, as long as the largest eigenvalue of \(\Lambda\Lambda^{T}\) satisfies eq. (17). Furthermore, to demonstrate the efficacy of the proposed method, we presented an algorithmic approach in section (5), which enables the straightforward construction of entanglement witnesses for desired states.
In the next section, we derive the same result of eq. (21) in an alternative way by focusing on the EW itself, rather than the positive map which leads to it via its Choi-matrix. In view of this alternative construction, one can
first choose the local operators (according to the experimental restrictions) and then tuning the coefficients for defining the final entanglement witness. This is a significant advantage compared to methods where a positive map may lead to a witness which requires non-feasible measurements.
## 2 An alternative construction of Entanglement Witnesses
In this section we directly apply Mehta's Lemma to a form of EW which we take as follows:
\[W=I_{d_{1}}\otimes I_{d_{2}}+\sum_{i=1}^{d_{1}^{2}-1}\sum_{j=1}^{d_{2}^{2}-1} \Lambda_{ij}\Gamma_{i}\otimes\Omega_{j}. \tag{22}\]
We demand that \(W\) be block-positive. This means that for any vector \(|v\rangle\in H_{d_{2}}\), the following matrix should be positive
\[B\equiv\langle v|W|v\rangle=I_{d_{1}}+\sum_{i=1}^{d_{1}^{2}-1}\sum_{j=1}^{d_{2 }^{2}-1}\Lambda_{ij}\Gamma_{i}\langle v|\Omega_{j}|v\rangle. \tag{23}\]
Now we use the following:
\[\mbox{Tr}(B)=d_{1}, \tag{24}\]
and
\[\mbox{Tr}(B^{2})=d_{1}+\sum_{i,j=1}^{d_{2}^{2}-1}(\Lambda^{T}\Lambda)_{ij} \langle v|\Omega_{i}|v\rangle\langle v|\Omega_{j}|v\rangle, \tag{25}\]
where we have used the relations \(\mbox{Tr}(\Gamma_{i})=0,\;\;\mbox{Tr}(\Gamma_{i}\Gamma_{j})=\delta_{ij}\). We now resort to the Mehta's Lemma and demand that the following condition hold for all the vectors \(|v\rangle\)
\[\mbox{Tr}(B^{2})\leq\frac{\mbox{Tr}(B)^{2}}{d_{1}-1}. \tag{26}\]
This leads to
\[d_{1}+\sum_{i,j=1}^{d_{2}^{2}-1}(\Lambda^{T}\Lambda)_{ij}\langle v|\Omega_{i}| v\rangle\langle v|\Omega_{j}|v\rangle\leq\frac{d_{1}^{2}}{d_{1}-1}\;\;\;\;\;\;\; \;\;\forall\;v. \tag{27}\]
Defining the vector \({\bf q}\) with components \(q_{i}:=\langle v|\Omega_{i}|v\rangle\), this can be written as
\[d_{1}+{\bf q}^{T}\Lambda\Lambda^{T}{\bf q}\leq\frac{d_{1}^{2}}{d_{1}-1}\ \ \ \ \ \ \ \forall\ v, \tag{28}\]
or in a stronger form
\[d_{1}+\lambda_{max}(\Lambda^{T}\Lambda){\bf q}.{\bf q}\leq\frac{d_{1}^{2}}{d_{ 1}-1}. \tag{29}\]
It remains to calculate \({\bf q}.{\bf q}\), and this is easily found by noting that
\[{\bf q}.{\bf q}=\sum_{i=1}^{d_{2}^{2}-1}\langle v|\Omega_{i}|v\rangle\langle v |\Omega_{i}|v\rangle=\sum_{i=1}^{d_{2}^{2}-1}\langle v,v|\Omega_{i}\otimes \Omega_{i}|v,v\rangle.\]
Using the identity
\[\frac{1}{d}{\cal I}+\sum_{i}\Omega_{i}\otimes\Omega_{i}=P, \tag{30}\]
where \(P\) is the permutation operator (look at appendix A for the proof of this equation). We find that
\[{\bf q}.{\bf q}=\langle v,v|P-\frac{1}{d_{2}}{\cal I}|v,v\rangle=1-\frac{1}{d _{2}}. \tag{31}\]
Inserting this in (29), leads to the final form for the upper bound of \(\lambda_{max}\), namely
\[\lambda_{max}(\Lambda^{T}\Lambda)\leq\frac{d_{1}d_{2}}{(d_{1}-1)(d_{2}-1)}, \tag{32}\]
which is identical to the bound obtained by the positive map in section 1. However we should also ensure that \(W\) is not trivially positive, that is we should remove the possibility that all its eigenvalues be positive. We again use Mehta's lemma. To this end we use
\[{\rm Tr}(W)=d_{1}d_{2}, \tag{33}\]
and
\[{\rm Tr}(W^{2})=d_{1}d_{2}+{\rm Tr}(\Lambda^{T}\Lambda), \tag{34}\]
and ensure that the following inequality be satisfied
\[{\rm Tr}(W^{2})>\frac{{\rm Tr}(W)^{2}}{d_{1}d_{2}-1}, \tag{35}\]
\[\frac{d_{1}d_{2}}{d_{1}d_{2}-1}<\mbox{Tr}(\Lambda^{T}\Lambda). \tag{36}\]
Combining (32) and (36), we arrive at the final condition for the parameters of the matrix \(\Lambda\), in order that \(W\) can be an entanglement witness:
\[\frac{d_{1}d_{2}}{d_{1}d_{2}-1}<\mbox{Tr}(\Lambda^{T}\Lambda),\quad\mbox{ and}\quad\lambda_{max}\leq\frac{d_{1}d_{2}}{(d_{1}-1)(d_{2}-1)}. \tag{37}\]
Note that the second condition is _sufficient_ for the block-positivity of \(W\), while the first condition is _necessary_ for \(W\), ensuring that it is not a positive matrix.
## 3 Singular value decomposition of \(\Lambda\)
The matrix \(\Lambda\) can be singular-value decomposed in the form \(\Lambda=RDS\), where \(R\) and \(S\) are orthogonal matrices, \(R\in\mbox{SO}(d_{1}^{2}-1)\) and \(S\in\mbox{SO}(d_{2}^{2}-1)\) and \(D\) is a diagonal matrix consisting of real values \(\{c_{1},c_{2},\cdots\}\) on its main diagonal. This decomposition allows us to write \(W\) in the form
\[W={\cal I}_{d_{1}}\otimes{\cal I}_{d_{2}}+\sum_{i=1}^{min(d_{1}^{2}-1,d_{2}^{ 2}-1)}c_{i}\Gamma_{i}^{\prime}\otimes\Omega_{i}^{\prime}, \tag{38}\]
where \(\Gamma_{i}^{\prime}=\sum_{j=1}^{d_{1}^{2}-1}R_{ij}\Gamma_{j}\) and \(\Omega_{i}^{\prime}=\sum_{m=1}^{d_{2}^{2}-1}S_{im}\Omega_{m}\) are new orthonormal bases. Hereafter we work with these new bases (or assume that off-diagonal elements of \(\Lambda\) are zero from the very beginning). Hence, for the sake of simplicity, as \(\Omega_{i}^{\prime}\) and \(\Gamma_{i}^{\prime}\) are new orthonormal bases, we can rename them as \(\Omega_{i}\) and \(\Gamma_{i}\), respectively, and write \(W\) as
\[W={\cal I}_{d_{1}}\otimes{\cal I}_{d_{2}}+\sum_{i=1}^{min(d_{1}^{2}-1,d_{2}^{ 2}-1)}c_{i}\Gamma_{i}\otimes\Omega_{i}, \tag{39}\]
where according to (37), the coefficients \(c_{i}\) are subject to the following condition
\[\frac{d_{1}d_{2}}{d_{1}d_{2}-1}<\sum_{i=1}c_{i}^{2}\quad\mbox{ and }\quad c_{max}^{2}\leq\frac{d_{1}d_{2}}{(d_{1}-1)(d_{2}-1)}. \tag{40}\]
**Remark 2**: _It should be noted that for each basis of orthonormal operators that we choose in (39), we obtain a different EW, even if the coefficients \(c_{i}\) are identical. Only in the special case of \(d_{1}=d_{2}=2\), the EW is entirely characterized by the parameters \(c_{i}\). This is due to the fact that in two dimensions, we can use the isomorphism \(SU(2)\sim SO(3)\) to implement the transformations_
\[U\Gamma_{i}U^{\dagger}=\Gamma^{\prime}_{i},\hskip 28.452756ptV\Omega_{i}V^{ \dagger}=\Omega^{\prime}_{i},\hskip 14.226378pt\forall\hskip 8.535827pti, \tag{41}\]
_and prove that an EW can always be put into the canonical form \(W_{c}\) by the transformation_
\[W=(U\otimes V)W_{c}(U^{\dagger}\otimes V^{\dagger}). \tag{42}\]
_In the special case where \(d_{1}=2\), but \(d_{2}\) is arbitrary, such a transformation works only for the first part, and we can always assume that our EW has the form_
\[W=\mathcal{I}_{2}\otimes\mathcal{I}_{d_{2}}+a_{x}\sigma_{x}\otimes\Omega_{x} +a_{y}\sigma_{y}\otimes\Omega_{y}+a_{z}\sigma_{z}\otimes\Omega_{z}, \tag{43}\]
_where \(\sigma_{x,y,z}\) are Pauli matrices and \(\{\Omega_{x},\Omega_{y},\Omega_{z}\}\) are orthonormal operators in the space \(H_{d_{2}}\). In this case, both the parameters \(c_{i}\) and the operators \(\Omega_{i}\) characterize the EW. Note that we are restricting ourselves to those EW's which arise from Unital positive maps._
## 4 Examples
In this section, we consider a few examples with different dimensions and determine the range of which \(W\) is actually a witness. Note that each example is meant to emphasize a particular aspect of our construction.
### Example 1:
Consider a scenario where measurement devices in a laboratory are limited. This assumption is highly reasonable, as in practical conditions, one cannot expect all measurement equipment to be available in a single laboratory. For instance, for two different dimensions, namely \(d_{1}=2,d_{2}=3\), imagine that one can only perform the following measurements in a laboratory:
\[M=\{\sigma_{1}\otimes J_{1},\sigma_{2}\otimes J_{2},\sigma_{3}\otimes J_{3}\}. \tag{44}\]
As a result, \(W\) will take the following form
\[W=\mathcal{I}_{2}\otimes\mathcal{I}_{3}+a\sigma_{1}\otimes J_{1}+b\sigma_{2} \otimes J_{2}+c\sigma_{3}\otimes J_{3}, \tag{45}\]
where for simplicity the parameters \(a,\ b\) and \(c\) are assumed to be positive, and \(\sigma_{i}\) are normalized Pauli operators, so that \(\mathrm{Tr}(\sigma_{i}\sigma_{j})=\delta_{ij}\), i.e. \(\sigma_{1}=\frac{1}{\sqrt{2}}\sigma_{x},\ \ etc\), and
\[J_{1}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&0\\ 0&0&-i\\ 0&i&0\end{pmatrix},\ \ \ \ J_{2}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&-i\\ 0&0&0\\ i&0&0\end{pmatrix},\ \ \ \ J_{3}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&-i&0\\ i&0&0\\ 0&0&0\end{pmatrix}, \tag{46}\]
are normalized generators of angular momentum. It turns out that \(W\) takes the form
\[W_{1}=\frac{1}{2}\begin{pmatrix}2&-ic&0&0&0&b\\ ic&2&0&0&0&-ia\\ 0&0&2&-b&ia&0\\ 0&0&-b&2&ic&0\\ 0&0&-ia&-ic&2&0\\ b&ia&0&0&0&2\end{pmatrix}. \tag{47}\]
Condition (37) restricts the parameters to
\[\frac{6}{5}<a^{2}+b^{2}+c^{2},\qquad a,\ b,\ c<\sqrt{3}. \tag{48}\]
For general a,b,c, the eigenvalues do not have simple form. But we can consider three special cases:
**Case 1, \(a=b=c\):** This is the isotropic case, and the smallest eigenvalue of \(W\) is given by
\[\lambda_{-}=2(1-a), \tag{49}\]
which shows that for \(1<a\leq\sqrt{3}\), \(W\) is actually an entanglement witness.
**Case 2: \(c=0\):** In this case the eigenvalues have a simple analytical form. The smallest eigenvalue is given by
\[\lambda_{-}=1-\frac{\sqrt{a^{2}+b^{2}}}{2}, \tag{50}\]
which gives a witness for \(a^{2}+b^{2}>4\).
**Case 3: \(a=b\):** This is the case where we have rotational symmetry around the third axis. In this case the lowest eigenvalue is given by
\[\lambda_{-}=\frac{4-c-\sqrt{8a^{2}+c^{2}}}{4}. \tag{51}\]
Here the region where \(W\) is a witness, is shown in figure 1.
### Example 2:
This example is provided to emphasize the point stated in Remark 2. Again we take \(d_{1}=2,d_{2}=3\), but let \(W\) be of the form
\[W_{2}=\mathcal{I}_{2}\otimes\mathcal{I}_{3}+a\sigma_{1}\otimes K_{1}+b\sigma_ {2}\otimes K_{2}+c\sigma_{3}\otimes K_{3}, \tag{52}\]
where
\[K_{1}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix},\hskip 28.452756ptK_{2}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0& 1\\ 0&0&0\\ 1&0&0\end{pmatrix},\hskip 28.452756ptK_{3}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&1&0 \\ 1&0&0\\ 0&0&0\end{pmatrix}, \tag{53}\]
Figure 1: The colored regions indicate values of \(a=b,\ c\) for which the matrix (47) is a EW.
are the three different basis elements of \(L(H_{d_{2}})\). Now \(W\) takes the form
\[W_{2}=\frac{1}{2}\begin{pmatrix}2&c&0&0&0&-ib\\ c&2&0&0&0&a\\ 0&0&2&-ib&a&0\\ 0&0&ib&2&-c&0\\ 0&0&a&-c&2&0\\ ib&a&0&0&0&2\end{pmatrix}. \tag{54}\]
Obviously the range of parameters is the same as in the previous example, i.e. equation (48), but now the eigenvalues of the entanglement witness can be obtained in closed form, in fact we find
\[\mbox{Eigenvalues of }W_{2}=\{1,1,1\pm\frac{\sqrt{a^{2}+b^{2}+c^{2}}}{2}\}, \tag{55}\]
where the multiplicity of the last two eigenvalues are each equal to two. This example shows that the nature of EW is determined not by the parameters \(a,\ b,\ c\), but also by the operators in the two spaces. The region of parameters where \(W_{2}\) is actually a witness, i.e. it has a negative eigenvalues, is determined by the intersection of the region defined by (48) and the region defined by \(a^{2}+b^{2}+c^{2}>4\).
### Example 3:
This example illustrates how embedding an entanglement witness into higher dimensions does not undermine its block-positivity and only leads to modifications in its coefficient bounds. We now take the dimensions to be \(d_{1}=2,\ d_{2}=5\). Curiously enough we take the same form of operators for this higher dimensional case too, namely
\[K_{1}=\frac{1}{\sqrt{2}}(|2\rangle\langle 3|+|3\rangle\langle 2|),\ \ \ K_{2}=\frac{1}{\sqrt{2}}(|3 \rangle\langle 1|+|1\rangle\langle 3|),\ \ \ K_{3}=\frac{1}{\sqrt{2}}(|1\rangle \langle 2|+|2\rangle\langle 1|), \tag{56}\]
in a Hilbert space \(H_{5}\) which is spanned by the vectors \(\{|1\rangle,|2\rangle,|3\rangle,|4\rangle,|5\rangle\}\). This means that the EW for this case is nothing but an embedding of the previous example in a larger matrix. If we denote the EW for the case \(d_{1}=2,d_{2}=3\) by \(W^{(2,3)}\) and the present EW by \(W^{(2,5)}\), then obviously we have \(W^{(2,5)}=W^{(2,3)}\oplus I_{4}\), where \(I_{4}\) is a \(4\times 4\) identity matrix acting on the subspace spanned by \(\{|4\rangle,|5\rangle\}\). Therefore the negative eigenvalues of both
witnesses are the same. But the range of parameters \(a,\ b\) and \(c\) for this case is given by
\[\frac{10}{9}\leq a^{2}+b^{2}+c^{2}<10\ \ \ \ a,\ b,\ c\leq\sqrt{\frac{5}{2}}, \tag{57}\]
which is larger than the range of parameters in the previous case. In other words, the region of block-positivity of \(W^{(2,5)}\) is larger than that of \(W^{(2,3)}\). To see the reason for this, we note that \(W^{(2,5)}=W^{(2,3)}\oplus\mathcal{I}_{4}\), i.e.
\[W^{(2,5)}=\begin{pmatrix}W^{(2,3)}&\mathbf{0}^{T}\\ \mathbf{0}&I_{4}\end{pmatrix}. \tag{58}\]
We now show that for any range of parameter that \(W^{(2,3)}\) is block-positive, \(W^{(2,5)}\) is also block-positive. Let us write an arbitrary vector in \(H_{5}\) (where the subscript indicates the dimension of the Hilbert space) as \(|v^{(5)}\rangle=\begin{pmatrix}\alpha|v^{(3)}\rangle\\ \beta|v^{(2)}\rangle\end{pmatrix}\) where \(|\alpha|^{2}+|\beta|^{2}=1\) and \(|v^{(3)}\rangle\in H_{3}\) and \(|v^{(2)}\rangle\in H_{2}\). Then any product vector \(|u^{(2)},v^{(5)}\rangle\in H_{2}\otimes H_{5}\) will be of the form \(|u^{(2)},v^{(5)}\rangle=\begin{pmatrix}\alpha|u^{(2)},v^{(3)}\rangle\\ \beta|u^{(2)},v^{(2)}\rangle\end{pmatrix}\) and hence
\[\langle u^{(2)},v^{(5)}|W^{(2,5)}|u^{(2)},v^{(5)}\rangle=|\alpha|^{2}\langle u ^{(2)},v^{(3)}|W^{(2,3)}|u^{(2)},v^{(3)}\rangle+\beta^{2}\geq 0, \tag{59}\]
which proves the assertion.
### Example 4:
In this example, we show that as long as the party holding the qubit makes only one kind of measurement, they cannot witness the entanglement of any state using our approach. This is in contrast with previous examples, where multiple measurements of the qubit, with appropriate measurements of the other side can reveal entanglement. Let again \(d_{1}=2\) and \(d_{2}=3\). It may happen that one side can do only one type of measurement, say \(\sigma_{z}\), but the other side is free to measure any observable. The most general EW related to a unital map is given by
\[W_{4}=I_{2}\otimes I_{3}+\sigma_{3}\otimes\sum_{i=1}^{8}c_{i}\Gamma_{i}, \tag{60}\]
where the parameters \(\{c_{i}\}\) are subject to the block-positivity of \(W_{4}\) which will be specified later. Here \(\sigma_{3}=\frac{1}{\sqrt{2}}\sigma_{z}\) and \(\{\Gamma_{i},\ i=1\cdots 8\}\) are Gellmann matrices [22] with normalization \(\mbox{Tr}(\Gamma_{i}\Gamma_{j})=\delta_{ij}\).
This example is instructive in the sense that it shows that the first condition of (40), is only a necessary and not a sufficient condition for an operator to be an entanglement witness. The condition of Block-positivity is from (40), given by
\[\sum_{i=1}^{8}c_{i}^{2}\leq 3.\]
This is easily found by noting that \(\Lambda\Lambda^{T}=\begin{pmatrix}\sum_{i}c_{i}^{2}&0&0\\ 0&0&0\\ 0&0&0\\ \end{pmatrix}\). Moreover the necessary condition (40) for the operator \(W_{4}\) not to be positive is given by
\[\frac{6}{5}<\sum_{i=1}^{8}c_{i}^{2}.\]
We now show that, no matter how we choose the parameters \(\{c_{i}\}\), \(W_{4}\) cannot have any negative eigenvalue and hence cannot be a witness.
To prove that \(W_{4}\) is always positive, we denote \(\sum_{i=1}^{8}\Gamma_{i}\) by \(\Gamma\), denote the eigenvalue of the matrix \(\sigma_{3}\otimes\Gamma\) by \(\lambda(\sigma_{3}\otimes\Gamma)\), and show that \(|\lambda(\sigma_{3}\otimes\Gamma)|\leq 1\). In view of the fact that \(W_{4}=I_{2}\otimes I_{4}+\sigma_{3}\otimes\Gamma\) this will prove that \(W_{4}\) cannot have any negative eigenvalue. Note that
\[|\lambda(\sigma_{3}\otimes\Gamma)|=\frac{1}{\sqrt{2}}|\lambda(\Gamma)|=\frac{ 1}{\sqrt{2}}|\langle v|\Gamma|v\rangle|, \tag{61}\]
where \(|v\rangle\) is the eigenvector of \(\Gamma\) with eigenvalue \(\lambda\). we now note that
\[|\langle v|\Gamma|v\rangle|=\left|\sum_{i=1}^{8}c_{i}\langle v|\Gamma_{i}|v \rangle\right|\leq\sqrt{\sum c_{i}^{2}}\sqrt{\sum v_{i}^{2}}\leq\sqrt{3}\sqrt {\sum v_{i}^{2}},\]
where \(v_{i}=\langle v|\Gamma_{i}|v\rangle\). However, from (7), we have \(\sum v_{i}^{2}=\frac{d_{2}-1}{d_{2}}=\frac{2}{3}\). Putting everything together, we find
\[|\lambda(\sigma_{3}\otimes\Gamma)|\leq\frac{1}{\sqrt{2}}\sqrt{3}\sqrt{\frac{2 }{3}}=1, \tag{62}\]
which proves the assertion.
### Example 5:
We now take the two dimensions to be equal to \(d_{1}=d_{2}=3\). In principle, one can define an EW in the form \(W=I_{3}\otimes I_{3}+\sum_{i}c_{i}\Gamma_{i}\otimes\Gamma_{i}\) subject to the conditions in (40) or
\[\frac{9}{8}<\sum_{i=1}^{8}c_{i}^{2},\hskip 28.452756pt|c_{i}|\leq\frac{3}{2}, \tag{63}\]
and then investigate in which subregion of the above, \(W\) has a negative eigenvalue, which can be done by numerical calculations. However in order to confine ourselves to analytical treatement we restrict ourselves to the following class,
\[W_{5}=I_{3}\otimes I_{3}+aJ_{1}\otimes J_{1}+bJ_{2}\otimes J_{2}+bJ_{3}\otimes J _{3}, \tag{64}\]
where \(J_{i}\)'s are proportional to the angular momentum operators as in (46). The condition (40) now are given by
\[\frac{9}{8}<a^{2}+2b^{2},\hskip 28.452756pt|a|,|b|\leq\frac{3}{2}. \tag{65}\]
The smallest eigenvalue of \(W_{5}\) is given by \(\lambda_{-}=1-\frac{a+\sqrt{a^{2}+8b^{2}}}{4}\). In Figure 2, the regions in which this matrix has negative eigenvalues have been depicted.
## 5 Construction of EW for particular entangled states
In this section, our objective is to illustrate how the entanglement of an arbitrary state can be determined using the method outlined in this paper. We will elucidate the approach for creating the desired entanglement witness through two examples. The first example focuses on identifying Bell-Diagonal states, while the second pertains to a PPT entangled state.
Figure 2: The region of block-positivity (outside the ellipse) overlaps with the region of negative eigenvalues for \(W_{5}\), which is indicated by the colored regions in the right corners. Consequently, \(W_{5}\) exhibits negative eigenvalues within this overlapping region.
### Example 1:
The Bell-diagonal states have the following form:
\[\rho=\sum_{i,j=0,1}p_{ij}|\psi_{ij}\rangle\langle\psi_{ij}|,\]
where
\[|\psi_{00}\rangle =\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle),\quad|\psi_{10}\rangle =\frac{1}{\sqrt{2}}(|00\rangle-|11\rangle),\] \[|\psi_{01}\rangle =\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\quad|\psi_{11}\rangle =\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle).\]
This state in the normalized Pauli basis is written as:
\[\rho=\frac{1}{4}(I\otimes I+2\sum_{i=1}^{3}t_{i}\Gamma_{i}\otimes\Gamma_{i}), \tag{66}\]
where \(\Gamma_{i}=\frac{1}{\sqrt{2}}\Sigma_{i}\), and
\[t_{1} =p_{00}+p_{01}-p_{10}-p_{11},\] \[t_{2} =-p_{00}+p_{01}+p_{10}-p_{11},\] \[t_{3} =p_{00}-p_{01}+p_{10}-p_{11}.\]
We know that all separable Bell-Diagonal states lies within region \(\sum_{i=1}^{3}|t_{i}|\leq 1\)[23]. To identify the entanglement of this state, considering the non-zero coefficients in eq. (66), we take into account the following EW:
\[W=I\otimes I+\sum_{i=1}^{3}c_{i}\Gamma_{i}\otimes\Gamma_{i}. \tag{67}\]
To determine the coefficients \(c_{i}\) in such a way that the introduced entanglement witness can identify the Bell-Diagonal state, we need to minimize the value of \(\mathrm{Tr}(W\rho)\):
\[\mathrm{Tr}(W\rho)=1+\frac{1}{2}\sum_{i=1}^{2}c_{i}t_{i}.\]
Applying conditions (37) to the coefficients \(c_{i}\) leads to:
\[c_{i}^{2}\leq 4.\]
To obtain the minimum value of \(\mbox{Tr}(W\rho)\), each of the \(c_{i}t_{i}\) must take on the most possible negative value. For this purpose, we choose the coefficients \(c_{i}\) as follows:
\[c_{i}=-2\ \mbox{sgn}(t_{i}),\]
which leads to \(c_{i}t_{i}=-2|t_{i}|\). Thus we have
\[\mbox{Tr}(W\rho)=1-\sum_{i=1}^{3}|t_{i}|.\]
This equation will be negative if \(\sum_{i=1}^{3}|t_{i}|>1\), which completely indicates the entangled Bell-Diagonal states.
### Example 2:
In this example, we demonstrate that the method explained in this paper can identify PPT entangled states, and thus, it can generate indecomposable EWs. It has been shown in [24] that the following state is PPT entangled state when \(x\neq 1\) (for \(x=1\) this is a separable state):
\[\rho=\frac{1}{N}\left(\begin{array}{cccccccccc}1&0&0&0&1&0&0&0&1\\ 0&\frac{1}{x}&0&0&0&0&0&0&0\\ 0&0&x&0&0&0&0&0&0\\ 0&0&0&x&0&0&0&0\\ 1&0&0&0&1&0&0&1\\ 0&0&0&0&0&\frac{1}{x}&0&0\\ 0&0&0&0&0&0&\frac{1}{x}&0&0\\ 0&0&0&0&0&0&x&0\\ 1&0&0&0&1&0&0&0&1\end{array}\right).\]
Here \(N\) is the normalization factor and \(x>0\). As the coefficient \(N\) doesn't impact the identification of entanglement, we can disregard it. The above
state can be written in the basis of normalized Gell-Mann matrices as follows:
\[\begin{split}\rho_{x}=&\left(\frac{x}{3}+\frac{1}{3x}+ \frac{1}{3}\right)I\otimes I+\Gamma_{1}\otimes\Gamma_{1}+\Gamma_{2}\otimes \Gamma_{2}+\Gamma_{3}\otimes\Gamma_{3}-\Gamma_{4}\otimes\Gamma_{4}-\Gamma_{5} \otimes\Gamma_{5}-\Gamma_{6}\otimes\Gamma_{6}\\ &+\left(-\frac{x}{2}-\frac{1}{2x}+1\right)\Gamma_{7}\otimes \Gamma_{7}+\left(\frac{\sqrt{3}}{2x}-\frac{\sqrt{3}x}{2}\right)\Gamma_{7} \otimes\Gamma_{8}\\ &+\left(\frac{\sqrt{3}x}{2}-\frac{\sqrt{3}}{2x}\right)\Gamma_{8} \otimes\Gamma_{7}+\left(-\frac{x}{2}-\frac{1}{2x}+1\right)\Gamma_{8}\otimes \Gamma_{8}.\end{split} \tag{68}\]
Now, we want to examine for which values of x we can identify the entanglement of this state using the method explained in the previous example. We take the EW as follows:
\[W=I\otimes I+\sum_{i=1}^{6}c_{i}\Gamma_{i}\otimes\Gamma_{i}+c_{7}\Gamma_{7} \otimes\Gamma_{8}+c_{8}\Gamma_{8}\otimes\Gamma_{7},\]
in which, according to conditions 12, we have \(|c_{i}|\leq\frac{3}{2}\;\;i=1\cdots 8\;\). Note that if we consider our EW as \(W=I\otimes I+\sum_{i=1}^{8}c_{i}\Gamma_{i}\otimes\Gamma_{i}\), no entangled state can be detected. Similar to the reasoning provided in the previous example, the minimum value of \(\mathrm{Tr}(W\rho)\) is obtained when \(c_{i}r_{i}=-\frac{3}{2}|r_{i}|\), where \(r_{i}\) are the coefficients of the Gell-Mann matrices in eq. (68). Hence the minimum value of \(\mathrm{Tr}(W\rho)\) would be
\[\mathrm{Tr}(W\rho)=-6+3\left(x+\frac{1}{x}\right)-\frac{3\sqrt{3}}{2}\left|x- \frac{1}{x}\right|.\]
It is clear that there can be two cases here:
* \(x>1\) Considering \(\mathrm{Tr}(W\rho)<0\), the allowed values for this case is \(x>1\).
* \(x<1\) Considering \(\mathrm{Tr}(W\rho)<0\), the allowed values for this case is \(0.154701<x<1\).
Thus, we have demonstrated that the entanglement witness constructed using the method in this paper can effectively identify the entanglement of \(\rho_{x}\), a PPT entangled state, for almost all values of x except for the narrow range \(0<x<0.154701\). This indicates that this approach can also produce indecomposable EW.
Conclusion
In this paper, a general and practical method for constructing EWs for two particle systems with different dimensions was presented. We used two alternative methods for this construction both of which are based on the Mehta's lemma for positive matrices. Our construction offers a straightforward and flexible approach to construct EW's when one has limited experimental setups. In this approach, it is sufficient to select the desired measurements and then construct EW based on arbitrary coefficients for each measurement. We presented several examples each emphasizing a different aspect of our construction. In particular, it was demonstrated that the proposed method has the capability to generate indecomposable entanglement witnesses, making it a promising candidate for identifying PPT entangled states. We have only taken the first steps toward a construction of EW's in different dimensions. Explorations of many of their properties, like optimality, extremality, exposedness [8] and their classification remain for our future investigations.
|
2304.10868 | Gravitationally modulated quantum correlations: Discriminating classical
and quantum models of ultra-compact objects with Bell nonlocality | We investigate the relation between quantum nonlocality and gravity at the
astrophysical scale, both in the classical and quantum regimes. Considering
particle pairs orbiting in the strong gravitational field of ultra-compact
objects, we find that the violation of Bell inequality acquires an angular
modulation factor that strongly depends on the nature of the gravitational
source. We show how such gravitationally-induced modulation of quantum
nonlocality readily discriminates between black holes (both classical and
inclusive of quantum corrections) and string fuzzballs, i.e., the true quantum
description of ultra-compact objects according to string theory. These findings
promote Bell nonlocality as a potentially key tool in comparing different
models of classical and quantum gravity and putting them to the test. | Luciano Petruzziello, Fabrizio Illuminati | 2023-04-21T10:31:23Z | http://arxiv.org/abs/2304.10868v2 | Gravitationally modulated quantum correlations: Discriminating classical and quantum models of ultra-compact objects with Bell nonlocality
###### Abstract
We investigate the relation between quantum nonlocality and gravity at the astrophysical scale, both in the classical and quantum regimes. Considering particle pairs orbiting in the strong gravitational field of ultra-compact objects, we find that the violation of Bell inequality acquires an angular modulation factor that strongly depends on the nature of the gravitational source. We show how such gravitationally-induced modulation of quantum nonlocality readily discriminates between black holes (both classical and inclusive of quantum corrections) and string fuzzballs, i.e., the true quantum description of ultra-compact objects according to string theory. These findings promote Bell nonlocality as a potentially key tool in comparing different models of classical and quantum gravity and putting them to the test.
## I Introduction
The development of a consistent and predictive theory of quantum gravity is one of the main unresolved conundrums in contemporary physics [1]. Relentless efforts in the attempt to reconcile quantum mechanics and general relativity have produced a number of promising candidate models, including the class of non-perturbative quantum field theory approaches such as asymptotic safety [2] and causal dynamical triangulations [3], as well as more non-canonical frameworks such as non-commutative geometry [4], loop quantum gravity [5], doubly special relativity [6] and string theory [7]. All the aforementioned theoretical schemes have their own characteristics and predictions which make them profoundly different among each other. Despite that, it is still possible to recognize similar aspects which are thus likely to be part of a general treatment of quantum gravity. A prominent example of a feature foreseen by many of the above models is the existence of a minimal length at the Planck scale and the ensuing modifications of the canonical commutation relations of quantum mechanics and the associated Heisenberg uncertainty principle [8; 9].
The notion of a minimum spatial resolution can be deduced also from gedanken experiments involving large [10] and micro [11] black holes, in proximity of which quantum gravitational effects are expected to become dominant. As a matter of fact, the strong gravity regime near a black hole prevents the use of any known approximation in the study of quantum systems. Success in addressing these difficulties would yield major progress towards a viable theory of quantum gravity able to settle the information paradox and the singularity problem that arise in the context of classical and semi-classical approaches to gravitational phenomena.
An interesting resolution for both of the above issues in the framework of superstring theory is represented by the fuzzball proposal [12; 13], according to which the supposed black hole is in fact conceived as a massive object made of a very large number of microscopic strings which, by definition, feature a minimal length extension qualitatively of the order of the Planck scale. Even though the original arguments leading to the fuzzball solution were purely theoretical, it has been recently pointed out that concrete realizations of fuzzballs lead to a phenomenology that might be accessible, for instance via the observational investigation of gravitational waves [14].
In a parallel development, the community active in quantum information science, atomic physics and quantum optics has picked up in recent years on the original ideas by Bronstein and Feynman [15; 16; 17], suggesting to test the hypothetical quantum nature of gravity in the laboratory by measuring witnesses of the bipartite entanglement between two test masses induced by a quantised gravitational field, i.e. a quantum gravitational mediator [18; 19].
Motivated by the above considerations, in the present work we address the broader question of gravitationally-induced modifications of nonlocal quantum correlations. Given that a classical gravitational mediator cannot induce any form of quantum nonlocality, be it, in ascending hierarchical order, entanglement, steering, or Bell nonlocality, we investigate whether classical and quantum gravity can have different effects on _already_ existing quantum correlations, previously established by other physical interactions on pairs of test masses. To this end, we study the dynamics of the Bell nonlocality of particle pairs in the gravitational field generated by ultra-compact objects of diverse nature, such
as black holes and fuzzballs, in order to assess whether and how different gravitational sources affect the dynamical evolution of quantum nonlocality. Historically, establishing a relation between cosmological objects and quantum entanglement was the central result of a celebrated paper by Maldacena and Susskind [20], where it was conjectured that the entanglement shared by two particles can be interpreted as a non-traversable wormhole; such a correspondence may be viewed as a precondition for the unification of quantum and gravitational effects.
Here, instead, by relying on Einstein-Podolsky-Rosen (EPR) nonlocal correlations [21] shared by particle pairs orbiting around ultra-compact objects, we investigate what insights Bell nonlocality, rather than entanglement, can provide on the nature and properties of gravitational structures. In this respect, it is important to recall once more that nonlocality and entanglement are distinct concepts that stand in a hierarchical relation: whilst a violation of Bell inequality always implies entanglement, the opposite implication does not necessarily hold, a notorious counterexample being that of the Werner mixed two-qubit states [22], which can be entangled without violating Bell inequality.
Proceeding to evaluate explicitly the amount of Bell nonlocality in an extreme astrophysical scenario, we resort to the physically transparent Clauser-Horne-Shimony-Holt (CHSH) form of Bell inequality [23; 24; 25] for massive spin-1/2 particle pairs, and we find that gravity in the strong-field regime affects significantly the quantum nonlocality shared by the test particles. Indeed, the overall degree of violation of the CHSH inequality is modulated by an angular factor that strictly depends on the nature of the ultra-compact object under consideration.
This result, which is completely general and may be adapted also to different frameworks, is elucidated by focusing on three relevant cases: the classical Schwarzschild black hole, the Schwarzschild black hole within a quantum-corrected treatment at leading (perturbative) order and the string fuzzball solution. We find that the gravitational modulation of bipartite Bell nonlocality discriminates unambiguously between all of them. In order to proceed in our investigation, we make use of some recently introduced techniques that allow to evaluate EPR correlations in different gravitational scenarios [26; 27; 28]; as a side result of the main analysis, we generalize such techniques and extend their range of validity to include any static and spherically symmetric spacetime whose metric tensor is expressed in isotropic coordinates.
The paper is organized as follows: in Sec. II we introduce the necessary mathematical tools based on the concept of Wigner rotation in curved spacetime, as it is needed in the analysis of the EPR correlations shared by two spin-1/2 particles in the gravitational field of an ultra-compact object. Section III is devoted to the explicit computation of the Wigner rotation in various regimes; this result is then applied to the explicit evaluation of the EPR correlations in Sec. IV. In Sec. V we discuss and compare three relevant instances of static and spherically symmetric ultra-compact gravitational objects, i.e. the string fuzzball, the classical Schwarzschild black hole and the quantum-corrected Schwarzschild black hole, and we show how for each of them the orbiting particle pairs feature a different degree of Bell nonlocality. Finally, in Sec. VI we comment on our results and perspectives on future research.
Throughout the manuscript, we adopt Planck units (\(c=\hbar=G=1\)) and the mostly positive signature for the metric, i.e., \(\eta_{ab}=\text{diag}(-,+,+,+)\).
## II Wigner rotation in curved spacetime
For a consistent treatment of spin-1/2 particles in curved spacetime it is necessary to make use of the tetrad (or vierbein) formalism; for a comprehensive introduction on this subject, the interested reader can consult Ref. [29]. A tetrad field \(e^{\mu}_{a}\) evaluated at a spacetime point \(x\) is completely characterized by the relation
\[g_{\mu\nu}(x)e^{\mu}_{a}(x)e^{\nu}_{b}(x)=\eta_{ab}\,, \tag{1}\]
where the summation over repeated indexes is understood, \(g_{\mu\nu}(x)\) is the metric tensor defined on the Riemannian manifold and \(\eta_{ab}\) is the Minkowski metric acting on the flat plane tangent to the manifold in the point \(x\). Henceforth, to discriminate between the indexes of the manifold and of the tangent bundle, we employ Greek letters for the former and Latin letters for the latter.
The expression (1) is essential to analyze spin-1/2 particle states in curved backgrounds, since they are defined as the states that belong to the spin-1/2 representation of the local Lorentz transformation (LLT) group, while general relativity is based upon invariance under diffeomorphisms. Precisely in order to build a bridge between these two notions, one can introduce tetrads, as they allow to "project" diffeomorphism-covariant tensors of the differentiable manifold onto local Lorentz-covariant quantities defined on a flat tangent plane.
By virtue of this procedure, a generic spin state with four-momentum \(k^{\mu}=mu^{\mu}\) (where \(u^{\mu}u_{\mu}=-1\)) at the spacetime point \(x\) can be unambiguously labeled with \(|k^{a},\sigma;x\rangle\)[26], where \(k^{a}=e^{a}_{\mu}k^{\mu}\) and \(\sigma=\uparrow,\downarrow\) is the third component of the spin. Naturally, the field \(e^{a}_{\mu}\) is the inverse of the one appearing in Eq. (1); consequently, the following identities hold:
\[e^{\mu}_{a}e^{a}_{\nu}=\delta^{\mu}_{\nu}\,,\qquad e^{\mu}_{a}e^{b}_{\mu}= \delta^{b}_{a}\,. \tag{2}\]
If we now want to describe the dynamical evolution of a spin-1/2 particle moving in curved spacetime, we have to account for different flat tangent spaces, each of which is associated to a given point of the particle's trajectory. As
a first step, we consider what happens after an infinitesimal interval of proper time \(d\tau\), after which the particle is located at the new point \(x^{\prime\mu}=x^{\mu}+u^{\mu}d\tau\). Accordingly, the shift in momentum is given by
\[k^{a}(x^{\prime})=k^{a}(x)+\delta k^{a}(x)\,, \tag{3}\]
where the variation is made of two distinct contributions, namely1:
Footnote 1: When there is no need for disambiguation, the dependence on the spacetime position will be omitted.
\[\delta k^{a}=\delta k^{\mu}e^{a}_{\mu}+k^{\mu}\delta e^{a}_{\mu}\,. \tag{4}\]
By defining the four-acceleration \(a^{\mu}=u^{\nu}\nabla_{\nu}u^{\mu}\) originated by an external force [26] and recalling that \(k^{\mu}k_{\mu}=-m^{2}\) as well as \(k^{\mu}a_{\mu}=0\), it is straightforward to observe that the first variation of Eq. (4) becomes
\[\delta k^{\mu}=ma^{\mu}d\tau=-\frac{1}{m}\left(a^{\mu}k_{\nu}-k^{\mu}a_{\nu} \right)k^{\nu}d\tau\,. \tag{5}\]
On the other hand, the second factor of Eq. (4) can be rewritten by introducing the expression for the connection one-form [29], that is, \(\omega^{a}_{\mu b}=e^{a}_{\nu}\nabla_{\mu}e^{\nu}_{b}\), and hence
\[\delta e^{a}_{\mu}=-u^{\nu}\omega^{a}_{\nu b}e^{b}_{\mu}d\tau=\xi^{a}_{b}e^{b }_{\mu}d\tau\,. \tag{6}\]
In so doing, exploiting Eqs. (5) and (6) to rewrite Eq. (4), one can identify an infinitesimal local Lorentz transformation occurring for the quantity \(k^{a}\). As a matter of fact
\[\delta k^{a}=\lambda^{a}_{b}k^{b}d\tau\,, \tag{7}\]
where
\[\lambda^{a}_{b}=-\frac{1}{m}\left(a^{a}k_{b}-k^{a}a_{b}\right)+\xi^{a}_{b} \tag{8}\]
is an infinitesimal LLT. This means that the momentum of the particle as viewed by a local reference frame (i.e., the one belonging to the tangent space) undergoes the transformation [26; 27; 28]
\[k^{a}(x^{\prime})=\Lambda^{a}_{b}(x)k^{b}(x)\,,\qquad\Lambda^{a}_{b}=\delta^{a }_{b}+\lambda^{a}_{b}\,, \tag{9}\]
which is precisely a LLT. Consequently, the evolution of a spin-1/2 state must be described in terms of a representation of the spin-1/2 local Lorentz group. Bearing this in mind, we recall that, in the context of flat spacetime, under the action of a given Lorentz transformation \(\Lambda^{a}_{b}\), the spin-1/2 one-particle state \(|k^{a},\sigma\rangle\) transforms as follows [30; 31]:
\[U(\Lambda)|k^{a},\sigma\rangle=\sum_{\sigma^{\prime}}D^{(1/2)}_{\sigma^{ \prime}\sigma}\left(W\left(\Lambda,k\right)\right)|\Lambda k^{a},\sigma^{ \prime}\rangle\,, \tag{10}\]
with \(D^{(1/2)}_{\sigma^{\prime}\sigma}\left(W\left(\Lambda,k\right)\right)\) being a \(2\times 2\) unitary matrix that allows for the the Wigner rotation \(W^{a}_{b}(\Lambda,k)\) of the spin. The Wigner rotation [32] can be written as
\[W^{a}_{b}(\Lambda,k)=\left[L^{-1}(\Lambda k)\Lambda L(k)\right]^{a}_{b}\,, \tag{11}\]
where \(L^{a}_{b}\) is the Lorentz boost
\[L^{0}_{0}=\Xi\,,\quad L^{i}_{0}=L^{0}_{i}=\frac{k^{i}}{m}\,,\quad L^{i}_{j}= \delta_{ij}+(\Xi-1)\,\frac{k^{i}k^{j}}{|\vec{k}|^{2}}\,, \tag{12}\]
with \(\Xi=\sqrt{|\vec{k}|^{2}+m^{2}}\) and the indexes \(i,j=1,2,3\). A more detailed explanation for the same topic can be found in Refs. [28; 30; 31].
When generalizing to include the case of a curved spacetime, we have to resort to local Wigner rotations stemming from the LLTs described in Eq. (9). Accordingly, Eq. (11) becomes
\[U(\Lambda(x))|k^{a},\sigma;x\rangle=\sum_{\sigma^{\prime}}D^{(1/2)}_{\sigma^{ \prime}\sigma}\left(W(x)\right)|\Lambda k^{a},\sigma^{\prime};x\rangle\,. \tag{13}\]
Notice that a similar scenario holds true not only for spinors, but for Dirac bispinors as well; for recent applications of the latter, see Refs. [33] and references therein.
The form of the infinitesimal local Wigner rotation can be extracted from Eq. (9); indeed, one can verify that [26]
\[W^{a}_{b}=\delta^{a}_{b}+\vartheta^{a}_{b}d\tau\,,\qquad\vartheta^{i}_{\,j}= \lambda^{i}_{j}+\frac{\lambda^{i}_{0}k_{j}-\lambda_{j0}k^{i}}{k^{0}+m}\,, \tag{14}\]
where \(i,j=1,2,3\), as they are the only non-vanishing terms of \(\vartheta^{a}_{b}\).
## III Wigner rotation for a generic metric in isotropic coordinates
In the following, we compute the Wigner rotation angle for a general class of static and spherically symmetric spacetime solutions, thus going beyond the standard Schwarzschild case treated in Ref. [26] and the weak-field limit considered in Ref. [28]. To this aim, we make use of a generic line element that can be cast in isotropic spherical coordinates as follows:
\[ds^{2}=-f(r)dt^{2}+g(r)\left[dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d \varphi^{2}\right]\,. \tag{15}\]
We note in passing that, by setting \(f(r)=(1-M/2r)^{2}/(1+M/2r)^{2}\) and \(g(r)=(1+M/2r)^{4}\), one recovers the results of Ref. [26] within a different coordinate system, while if \(f(r)=1+2\phi(r)\) and \(g(r)=1-2\psi(r)\), with \(\phi(r)\) and \(\psi(r)\) being weak gravitational potentials arising in extended theories of gravity, one recovers the findings of Ref. [28].
As the metric tensor is diagonal, we can compute the tetrads rather straightforwardly
\[e^{t}_{0}=\frac{1}{\sqrt{f}}\,,\quad e^{r}_{1}=\frac{1}{\sqrt{g}}\,,\quad e^{ \theta}_{2}=\frac{1}{r\sqrt{g}}\,,\quad e^{\varphi}_{3}=\frac{1}{r\sin\theta \sqrt{g}}\,. \tag{16}\]
From the above equation, it is possible to deduce the non-vanishing components of the connection one-form
\[\omega^{0}_{t1}=\omega^{1}_{t0}=\frac{f^{\prime}}{2\sqrt{gf}}\,,\qquad\omega^ {2}_{\theta 1}=-\omega^{1}_{\theta 2}=1+\frac{rg^{\prime}}{2g}\,,\qquad\omega^{3} _{\theta 3}=\cot\theta\,,\]
\[\omega^{3}_{\varphi 1}=-\omega^{1}_{\varphi 3}=\left(1+\frac{rg^{\prime}}{2g} \right)\sin\theta\,,\qquad\omega^{3}_{\varphi 2}=-\omega^{2}_{\varphi 3}=\cos \theta\,, \tag{17}\]
where the prime denotes derivation with respect to the coordinate \(r\).
Without loss of generality, we can investigate the circular motion of the entangled particles around the ultra-compact object by assuming that the dynamics takes place on the equatorial plane \(\theta=\pi/2\). Additionally, we let the EPR source be located at \(\varphi=0\) and the two observers performing the local spin measurements at \(\pm\varphi\). A sketch of the physical setup is shown in Fig. 1.
Due to the above requirements, the expression of the four-velocity is simplified, as
\[u^{\mu}(x)=\left(\frac{\cosh\zeta}{\sqrt{f}},0,0,\frac{\sinh\zeta}{r\sqrt{g}} \right)\,, \tag{18}\]
where \(\zeta\) denotes the rapidity in the local reference frame.
We recall that the motion under investigation is not a geodesic one; therefore, there must be an external force acting on the system that perfectly compensates the presence of gravity. Such a force produces the non-vanishing acceleration
\[a^{\mu}=\left(0,\frac{1}{g}\left[\frac{f^{\prime}}{2f}\cosh^{2}\zeta-\left( \frac{1}{r}+\frac{g^{\prime}}{2g}\right)\sinh^{2}\zeta\right],0,0\right)\,. \tag{19}\]
We can now compute the Wigner angle introduced in Eq. (14). First, by means of simple algebraic manipulations one determines the quantities \(\xi^{a}_{b}\) in Eq. (6), finding the following non-vanishing components:
\[\xi^{1}_{0}=\xi^{0}_{1}=-\frac{f^{\prime}}{2f\sqrt{g}}\cosh\zeta\,,\qquad\xi^ {1}_{3}=-\xi^{3}_{1}=\left(\frac{2g+rg^{\prime}}{2rg\sqrt{g}}\right)\sinh\zeta\,. \tag{20}\]
By virtue of the above expressions, we can write the infinitesimal LLT (8) explicitly. Indeed, recalling that \(k^{a}=mu^{a}\) and \(u^{a}=e^{a}_{\mu}u^{\mu}\), then \(u^{a}=(\cosh\zeta,0,0,\sinh\zeta)\). In a similar fashion, \(a^{a}=e^{a}_{\mu}a^{\mu}\), and thus the only non-vanishing component of \(a^{a}\) is \(a^{1}=e^{1}_{r}a^{r}=\sqrt{g}a^{r}\). Therefore,
\[\lambda^{1}_{0}=\lambda^{0}_{1}=-\frac{1}{r\sqrt{g}}\left[1+\frac{rg^{\prime}} {2g}-\frac{rf^{\prime}}{2f}\right]\cosh\zeta\sinh^{2}\zeta\,,\qquad\lambda^{1} _{3}=-\lambda^{3}_{1}=\frac{1}{r\sqrt{g}}\left[1+\frac{rg^{\prime}}{2g}-\frac {rf^{\prime}}{2f}\right]\cosh^{2}\zeta\sinh\zeta\,. \tag{21}\]
After the evaluation of the infinitesimal LLT, the only quantity left to compute is the infinitesimal Wigner rotation (14). Because of the choice of the physical setup summarized in Fig. 1, the only terms of \(\vartheta^{a}_{b}\) different from zero are
\[\vartheta^{3}_{1}=-\vartheta^{1}_{3}=-\frac{1}{r\sqrt{g}}\left[1+\frac{rg^{ \prime}}{2g}-\frac{rf^{\prime}}{2f}\right]\cosh\zeta\sinh\zeta\,. \tag{22}\]
Next, we consider the finite transformation as a Dyson series of infinitesimal ones [26], whose formal sum reads
\[W^{3}_{1}=T\exp\left[\int_{\tau_{i}}^{\tau_{f}}\vartheta^{3}_{1}\,d\tau^{ \prime}\right]=\exp\left[\vartheta^{3}_{1}\left(\tau_{f}-\tau_{i}\right) \right]\,, \tag{23}\]
where \(T\) denotes the time ordering operator. The calculation of the above quantity is in general non-trivial; however, in our setting the \(r\) parameter is fixed and therefore the infinitesimal Wigner rotation \(\vartheta^{3}_{1}\) is constant.
## IV Gravitationally induced modulation of Bell nonlocality
The EPR source emits a pair of particles, \(A\) and \(B\), moving away from the source in opposite directions with constant four-momenta \(k^{a}_{\pm}=(m\cosh\zeta,0,0,\pm m\sinh\zeta)\) after having been prepared in the maximally entangled spin singlet
\[|\psi\rangle=\frac{1}{\sqrt{2}}\left(|k^{a}_{+},\uparrow;\varphi=0\rangle_{A} |k^{a}_{-},\downarrow;\varphi=0\rangle_{B}-|k^{a}_{+},\downarrow;\varphi=0 \rangle_{A}|k^{a}_{-},\uparrow;\varphi=0\rangle_{B}\right)\,. \tag{24}\]
The CHSH inequality [23] and the associated CHSH measurements are a powerful toolbox to access and test the degree of quantum nonlocality in the correlations between two dichotomous variables; for the problem at hand, such variables are the spins of the entangled particles.
As a key ingredient, we need two sets of measurements \(\{\hat{A}_{1},\hat{A}_{2}\}\) and \(\{\hat{B}_{1},\hat{B}_{2}\}\) performed on parties \(A\) and \(B\), respectively, with the aim of detecting the orientation of the third component of the spin. If correlations of the spins
Figure 1: A pair of spin-1/2 particles initially sharing a perfect EPR correlation is produced at \(\varphi=0\). Particles travel along a circular orbit around the ultra-compact object in opposite directions. At the end of each propagation, the spins are rotated due to the presence of a non-trivial background spacetime.
in a given shared state are local in the sense of Bell theorem [24; 25], then the inequality [23; 25]
\[\mathcal{S}[|\Psi\rangle]=|\langle\hat{A}_{1}\hat{B}_{1}\rangle+\langle\hat{A}_{ 1}\hat{B}_{2}\rangle+\langle\hat{A}_{2}\hat{B}_{1}\rangle-\langle\hat{A}_{2} \hat{B}_{2}\rangle|\leq 2\,, \tag{25}\]
holds, where \(\langle\hat{A}_{i}\hat{B}_{j}\rangle=\langle\Psi|\hat{A}_{i}\hat{B}_{j}|\Psi\rangle\). If Eq. (25) is violated, spin correlations are nonlocal and local hidden-variable theories are falsified.
Together with the state described in Eq. (24), the employment of the observables
\[\hat{A}_{1}=\hat{\Sigma}^{(A)}_{x}\,,\quad\hat{A}_{2}=\hat{\Sigma}^{(A)}_{y}\,,\quad\hat{B}_{1}=-\frac{\hat{\Sigma}^{(B)}_{x}+\hat{\Sigma}^{(B)}_{y}}{\sqrt {2}}\,,\quad\hat{B}_{2}=-\frac{\hat{\Sigma}^{(B)}_{x}-\hat{\Sigma}^{(B)}_{y}} {\sqrt{2}}\,, \tag{26}\]
allows to reach the maximum violation of the inequality allowed by quantum mechanics, namely \(\mathcal{S}[|\psi\rangle]=2\sqrt{2}\), also known as the Tsirelson bound [34].
The maximally entangled initial state evolves in a curved spacetime, and because of the Wigner rotation the spins of the entangled particles undergo a precession motion that prevents the perfect EPR correlation of the initial state from being preserved. Clearly, we expect that whether and how much the propagation of the particles along a closed path will change the orientation of the spins and, in turn, the violation of the CHSH inequality, should depend on the nature of the gravitational object around which the particles are orbiting.
Now, assume that, after a finite proper time \(\tau_{f}-\tau_{i}=r\sqrt{g}\,\varphi/\sinh\zeta\), particles \(A\) and \(B\) have reached their respective detection points; in this proper time interval, the Wigner transformation can be viewed as a rotation about the 2-axis [26; 27; 28]
\[\mathbb{W}\left(\pm\varphi\right)=\begin{pmatrix}1&0&0&0\\ 0&\cos\Theta&0&\pm\sin\Theta\\ 0&0&1&0\\ 0&\mp\sin\Theta&0&\cos\Theta\end{pmatrix}\,. \tag{27}\]
where \(\Theta\) can be derived from Eq. (23)
\[\Theta=\frac{r\sqrt{g}\,\varphi}{\sinh\zeta}\vartheta^{1}_{3}=\varphi\cosh \zeta\left[1+\frac{rg^{\prime}}{2g}-\frac{rf^{\prime}}{2f}\right]\,. \tag{28}\]
The physical meaning of the rotation angle \(\Theta\) and the spin precession can be readily visualized from Fig. 1.
Having the explicit expression for the Wigner rotation, the transformation acting on the spin states can be computed as shown in Eq. (13). Specifically, one can verify that [26; 27; 28]
\[D^{(1/2)}_{\sigma^{\prime}\sigma}=e^{\mp i\frac{\sigma_{x}}{2}\Theta}\,, \tag{29}\]
with \(\sigma_{y}\) being the Pauli matrix with imaginary entries.
Crucially, we see that, as the particles progress travelling along the orbit that circles the ultra-compact object, the initial spin-singlet state gets embroiled in a linear superposition with the spin-triplet states, which implies that measurements of the spin along the same direction are no longer perfectly correlated in the local reference frame for \(\pm\varphi\)[26; 27; 28].
In order to preserve perfect correlation in the local reference frame, it is sufficient to rotate the bases by \(\mp\varphi\) while keeping the 2-axis fixed in the point that is denoted by \(\pm\varphi\). In so doing, we obtain
\[|k^{a}_{\pm},\uparrow;\pm\varphi\rangle^{\prime}=\cos\frac{\varphi}{2}|k^{a}_{ \pm},\uparrow;\pm\varphi\rangle\pm\sin\frac{\varphi}{2}|k^{a}_{\pm},\downarrow ;\pm\varphi\rangle\,, \tag{30}\]
and
\[|k^{a}_{\pm},\downarrow;\pm\varphi\rangle^{\prime}=\mp\sin\frac{\varphi}{2}|k^ {a}_{\pm},\uparrow;\pm\varphi\rangle+\cos\frac{\varphi}{2}|k^{a}_{\pm}, \downarrow;\pm\varphi\rangle\,, \tag{31}\]
so that the evolved state reads
\[|\psi^{\prime}\rangle=\frac{1}{\sqrt{2}}\Big{[}\!\cos\Delta\Big{(}|k^{a}_{+}, \uparrow;\varphi\rangle^{\prime}|k^{a}_{-},\downarrow;-\varphi\rangle^{ \prime}-|k^{a}_{+},\downarrow;\varphi\rangle^{\prime}|k^{a}_{-},\uparrow;- \varphi\rangle^{\prime}\Big{)}\!+\!\sin\Delta\Big{(}|k^{a}_{+},\uparrow; \varphi\rangle^{\prime}|k^{a}_{-},\uparrow;-\varphi\rangle^{\prime}\!+\!|k^{ a}_{+},\downarrow;\varphi\rangle^{\prime}|k^{a}_{-},\downarrow;-\varphi\rangle^{ \prime}\Big{)}\Big{]}\,, \tag{32}\]
where
\[\Delta=\Theta-\varphi=\varphi\left[\cosh\zeta\left(1+\frac{rg^{\prime}}{2g}- \frac{rf^{\prime}}{2f}\right)-1\right]. \tag{33}\]
Before we can evaluate the CHSH inequality (25) for the observables introduced in Eq. (26), the measurement operators must be rewritten in the new reference frame obtained as a result of the rotation, that is
\[\begin{split}\hat{A}^{\prime}_{1}=\cos\Theta\hat{\Sigma}^{(A)}_{x} -\sin\Theta\hat{\Sigma}^{(A)}_{z}\,,\quad\hat{A}^{\prime}_{2}=\hat{\Sigma}^{( A)}_{y}\,,\\ \hat{B}^{\prime}_{1}=-\frac{\cos\Theta\left(\hat{\Sigma}^{(B)}_{x }+\hat{\Sigma}^{(B)}_{y}\right)+\sin\Theta\hat{\Sigma}^{(B)}_{z}}{\sqrt{2}}\,, \quad\hat{B}^{\prime}_{2}=\frac{\cos\Theta\left(\hat{\Sigma}^{(B)}_{x}-\hat{ \Sigma}^{(B)}_{y}\right)+\sin\Theta\hat{\Sigma}^{(B)}_{z}}{\sqrt{2}}\,.\end{split} \tag{34}\]
Collecting all the above results, we finally obtain
\[\mathcal{S}^{\prime}[|\psi^{\prime}\rangle]=2\sqrt{2}\cos^{2}\Delta\,. \tag{35}\]
We can interpret Eq. (35) as follows: when a CHSH-like experiment is carried out after the observables have been rotated, the maximal initial violation of the CHSH inequality \(2\sqrt{2}\) becomes modulated by a factor \(\cos^{2}\Delta\).
Inspecting Eq. (33), we see that, in the presence of the gravitational interaction, the phase shift parameter \(\Delta\) responsible for the overall violation of the CHSH inequality acquires contributions that depend on the details of the spacetime in which the entangled particles propagate. Therefore, depending on the actual nature of the ultra-compact gravitational source being considered, we expect to find distinct and possibly significantly different modulations in the violation of the CHSH inequality.
## V Comparing models of ultra-compact objects with Bell nonlocality
The gravitational modulation of the Bell nonlocality derived in the previous Section, i.e., Eqs. (33) and (35), can be exploited to compare relevant alternative models of ultra-compact structures, such as the string fuzzball and the black hole (classical or with perturbative quantum corrections).
### String fuzzballs
Within string-theory inspired cosmology, fuzzballs [12; 13; 35] are spheres of strings of definite, finite volume that simulate the behavior of black holes, but having the two main problems plaguing the latter (the singularity and the information paradox) removed by the finite length extension of their microscopic components.
A concrete fuzzball solution amenable to quantitative investigation is obtained from \(\mathcal{N}=2\) four-dimensional supergravity, with a non-minimal coupling between gravity, four \(U(1)\) gauge fields and three complex scalars. This particular case allows for some explicit phenomenological predictions that might soon be tested via gravitational waves detection by studying ringdown (gravitational wave peak in merging events), quasi-normal modes, and spectroscopy [14].
In isotropic spherical coordinates, a four-dimensional fuzzball geometry can be described by the line element [14]
\[ds^{2}=-\frac{dt^{2}}{\sqrt{H_{1}H_{2}H_{3}H_{4}}}+\sqrt{H_{1}H_{2}H_{3}H_{4}} \left[dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\varphi^{2}\right]\,, \tag{36}\]
where \(H_{A}=1+Q_{A}/r\), \(A=1,2,3,4\), with \(Q_{A}\) being electric and magnetic charges. The total mass of the fuzzball is given by \(M=\left(Q_{1}+Q_{2}+Q_{3}+Q_{4}\right)/4\), and one recovers the extremal Reissner-Nordstrom black hole solution when all the charges are equal. A straightforward comparison with Eq. (15) yields the identification
\[f(r)=\frac{1}{g(r)}=\frac{1}{\sqrt{H_{1}H_{2}H_{3}H_{4}}}\,. \tag{37}\]
### Classical and quantum Schwarzschild black holes
Black holes can be investigated both in a general-relativistic classical context as well as in a quantum-corrected one. The last instance occurs when one considers gravity as an effective field theory, so that quantum gravitational radiative corrections influence the energy-momentum tensor appearing in Einstein equations. In turn, such a modification gives rise to long-range corrections appearing in the expression of the metric tensor \(g_{\mu\nu}\). In general, the magnitude of such corrections is extremely small and can be neglected, but in the proximity of a black hole they actively affect the metric and the ensuing gravitational phenomenology. Therefore, we can investigate the implications of the CHSH experiment in the strong-gravity regime, both in the classical and in the quantum-corrected framework.
Specifically, we are interested in the extrapolation of the Schwarzschild-like solution inclusive of quantum corrections in the isotropic coordinate system [36]. The line element associated with the quantum-corrected spacetime reads
\[ds^{2}=-\left[\frac{1-\frac{M}{2r}+\frac{31}{30\pi}\frac{M}{r^{3}}}{1+\frac{M}{2r }-\frac{31}{30\pi}\frac{M}{r^{3}}}\right]^{2}dt^{2}+\left(1+\frac{M}{2r}-\frac {7}{30\pi}\frac{M}{r^{3}}\right)^{4}\left[dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2 }\theta d\varphi^{2}\right]\,, \tag{38}\]
where \(M\) is the mass of the black hole. The standard Schwarzschild solution is recovered when the additive corrections that depend on \(1/r^{3}\) in Eq. (38) are removed. By comparing the metric of Eq. (38) with the one of Eq. (15), the following identification holds:
\[f(r)=\left[\frac{1-\frac{M}{2r}+\frac{31}{30\pi}\frac{M}{r^{3}}}{1+\frac{M}{2r }-\frac{31}{30\pi}\frac{M}{r^{3}}}\right]^{2}\,,\qquad g(r)=\left(1+\frac{M}{2 r}-\frac{7}{30\pi}\frac{M}{r^{3}}\right)^{4}\,. \tag{39}\]
### Comparison
We can now compare the different ultra-compact objects and establish if and how the fuzzball and black hole solutions differ in their response to the CHSH quantum nonlocality test.
Firstly, we observe that the two classes of objects already differ at the classical level in the behavior of the gravitational potential in regions sufficiently close to the event horizon, as illustrated in Fig. 2. Next, in order to estimate quantitatively the distinct predictions in the quantum regime, we need to evaluate the gravitational modulation parameter of Bell nonlocality \(\Delta\), i.e., Eq. (33), in each case. Since we have specialized the general line element (15) to the instances (36) and (38), we are left with the task of taking advantage of the expressions of \(f(r)\) and \(g(r)\) appearing in Eqs. (37) and (39) and derive the explicit form of \(\Delta\).
As a preliminary step, we determine the form \(\Delta_{CS}\) of the parameter \(\Delta\) holding for the isotropic, classical standard Schwarzschild solution, that is
\[\Delta_{CS}=\varphi\left\{\frac{\cosh\zeta}{2}\frac{M^{2}+10Mr-8r^{2}}{\left( M-2r\right)\left(M+2r\right)}-1\right\}\,. \tag{40}\]
Figure 2: The gravitational potential of an ultra-compact object as a function of the distance (radius) from the object center in units of the Schwarzschild radius for different spacetimes. Sample values are fixed at \(M=2.5\), \(Q_{1}=1\), \(Q_{2}=2\), \(Q_{3}=3\), and \(Q_{4}=4\).
Accounting for the quantum perturbative corrections appearing in Eq. (38), one can derive the form \(\Delta_{QS}\) of the parameter \(\Delta\) holding for the quantum-corrected Schwarzschild solution at leading order
\[\Delta_{QS}=\varphi\left\{\frac{\cosh\zeta}{2}\frac{\beta(r)}{\left[30\pi r^{3}+ M\left(31-15\pi r^{2}\right)\right]\left[30\pi r^{3}-M\left(31-15\pi r^{2} \right)\right]\left[30\pi r^{3}-M\left(7-15\pi r^{2}\right)\right]}-1\right\}\,, \tag{41}\]
where
\[\beta(r)=54000\pi^{3}r^{9}-M^{3}\left(31-15\pi r^{2}\right)^{2}\left(79+15\pi r ^{2}\right)+900M\pi^{2}r^{6}\left(451-45\pi r^{2}\right)-60M^{2}\pi r^{3}\left( 2263-3930\pi r^{2}+675\pi^{2}r^{4}\right)\,. \tag{42}\]
Finally, making use of the potentials \(f(r)\) and \(g(r)\) in Eq. (37), we obtain the form \(\Delta_{SF}\) of the parameter \(\Delta\) holding for the string fuzzball solution
\[\Delta_{SF}=\varphi\left\{\frac{\cosh\zeta}{2}\frac{r^{3}\left(Q_{3}+Q_{4}+2r \right)+Q_{2}\left(r^{2}-Q_{3}Q_{4}\right)r-Q_{1}\left[Q_{3}Q_{4}r-r^{3}+Q_{2} \left(2Q_{3}Q_{4}+Q_{3}r+Q_{4}r\right)\right]}{(Q_{1}+r)(Q_{2}+r)(Q_{3}+r)(Q_{4 }+r)}-1\right\}\,. \tag{43}\]
The three expressions \(\Delta_{CS}\), \(\Delta_{QS}\) and \(\Delta_{SF}\) differ significantly when evaluated along orbits sufficiently close to the ultra-compact objects, as illustrated in Fig. 3. We see that, in the strong-gravity regime, the specific nature of the gravitational source affects dramatically the degree of violation of the CHSH inequality; hence, the gravitational modulation of Bell nonlocality allows to distinguish different models of ultra-compact objects and to discriminate between the string theory and quantum field theory approaches to quantum gravity.
In Fig. 4, we plot the oscillatory modulation of Bell nonlocality as measured by the degree of violation of the CHSH inequality for orbits close to the event horizon, that is, for an interval of the radial coordinate that does not exceedingly deviate from the Schwarzschild radius. For a wider range of values of the radial coordinate, the frequency of the oscillations \(\cos^{2}\Delta\) grows very rapidly, thereby blurring the interpolation patterns. Moreover, as the quantities \(\Delta_{S}\), \(\Delta_{QS}\) and \(\Delta_{F}\) share the same behavior in the limit \(r\gg 2M\), the phase shifts of the CHSH correlations can no longer be resolved in this regime.
Figure 3: The magnitude of the gravitational modulation parameter \(\Delta\) as a function of the radius of the circular orbit in units of the Schwarzschild radius for different spacetimes. Blue solid line: \(\Delta_{SF}\), Eq. (43), corresponding to the string fuzzball solution. Red solid line: \(\Delta_{QS}\), Eq. (41), corresponding to the quantum-corrected Schwarzschild solution. Green solid line: \(\Delta_{SF}\), Eq. (40), corresponding to the classical Schwarzschild solution. Sample values are fixed at \(\varphi=\pi/4\), \(M=2.5\), \(Q_{1}=1\), \(Q_{2}=2\), \(Q_{3}=3\), \(Q_{4}=4\) and \(\zeta=10\).
## VI Discussion
We have discussed how investigating quantum nonlocality in strong gravitational fields leads to predictions that discriminate between different models of quantum gravity phenomenology, both perturbative and non-perturbative ones. CHSH nonlocality tests with entangled particle pairs on circular orbits near ultra-compact objects show that the spin precession occurring in curved spacetime is responsible for a modulation of the degree of quantum nonlocality. In the presence of a non-trivial spacetime background, the standard maximally allowed violation \(2\sqrt{2}\) of the CHSH inequality becomes \(2\sqrt{2}\cos^{2}\Delta\), with the angular modulation factor \(\Delta\) strictly dependent on the metric tensor components and heavily influenced by the conjectured underlying nature of the ultra-compact object considered. A simple measurement of quantum nonlocality can thus be used to validate or falsify some phenomenological models of quantum gravitational effects in the strong-field regime.
The scenario that we have described provides further evidence supporting the use of quantum information concepts like entanglement and Bell nonlocality as a key tool in the investigation of yet hypothetical quantum gravitational phenomena. In deriving the main result of our work, we have also generalized the formalism of Refs. [26; 28] on the study of quantum correlations in gravitational fields, so as to make it applicable in general, well beyond the Schwarzschild solution and weak-field limit in the isotropic coordinate system.
Concerning possible experimental tests probing the quantum nature of the gravitational field, while high-energy scattering processes are still too far from the experimental scales needed to detect coexisting quantum and gravitational effects, it is a largely shared belief that tiny signatures of such a coexistence might still be revealed with currently available means through satellite experiments, gravitational wave analysis or tabletop laboratory tests centered around foundational aspects of quantum mechanics. In this respect, it is worth remarking that credited proposals which aim at collecting a direct measurement of quantum gravity phenomenology are essentially based either upon decoherence/gravity-based wave function collapse models [37; 38; 39; 40; 41; 42; 43; 44; 45; 46] or, as already mentioned, the detection of gravitationally-induced quantum correlations by quantum gravitational mediators [47; 48; 49; 18; 50; 51; 52]. The hard challenge facing the experimental implementation of these table-top laboratory tests is that of realizing correlated and delocalized superpositions of large enough masses. So far, spatial superpositions have been observed with masses at most of the order of \(10^{-23}\) Kg (large molecules) [53], while the faintest gravitational field that can be currently measured is the one generated by masses of the order of \(10^{-4}\) Kg [54].
While one may hope to significantly improve these numbers by considering some amplification mechanisms, tabletop probing of gravitational effects on Bell inequality might turn out to be significantly less challenging and might open the way to laboratory simulations of extreme cosmological conditions suitable for the verification of quantum
Figure 4: CHSH correlation as a function of the orbit radius in units of the Schwarzschild radius for different spacetimes. Blue solid line: string fuzzball solution. Red solid line: quantum-corrected Schwarzschild solution. Green solid line: classical Schwarzschild solution. Sample values are fixed at \(\varphi=\pi/4\), \(M=2.5\), \(Q_{1}=1\), \(Q_{2}=2\), \(Q_{3}=3\), \(Q_{4}=4\) and \(\zeta=10\).
gravity-induced modulations of quantum nonlocality as described in the present work. In this respect, a particularly promising avenue might involve designing experimental tests of the CHSH inequality near the horizon of sonic and optical analogues of black holes [55; 56; 57; 58].
On a final note, our findings strongly suggest that resorting to the entire spectrum of quantum resources, from nonlocality, entanglement and steering to discord, coherence and complementarity may provide very useful insights in the investigation on the actual nature of gravity.
## Acknowledgements
The authors acknowledge support by MUR (Ministero dell'Universita e della Ricerca) via the project PRIN 2017 "Taming complexity via QUantum Strategies: a Hybrid Integrated Photonic approach" (QUSHIP) Id. 2017SRNBRK. L. P. acknowledges networking support by the COST Action CA18108 and financial support from the "Angelo Della Riccia" Foundation.
|
2307.09869 | Spatial imaging of proton via leading-twist non-skewed GPDs with basis
light-front quantization | The internal image of the proton is unveiled by examining the generalized
parton distributions (GPDs) at zero skewness, within the basis light-front
quantized environment. Several distributions emerge when a quark is sampled
with different currents depending upon the helicity arrangements of the active
quark and the proton target. We investigate six of the eight leading-twist
proton GPDs of the valence quarks, the helicity conserving distributions $(H,
E, \tilde{H})$ and the helicity non-conserving $(H_T,E_T,\tilde{H}_T)$
distributions at skewness set to zero ($\zeta=0$). We consider purely
transverse momentum transfer and, hence, obtain results describe only the
proton's two-dimensional structure in the transverse plane. We present the
Mellin moments of these distribution functions, where the first moment produces
a form factor and the second Mellin moments help extract the information on
partonic contributions to the hadronic angular momentum. We compare our results
for the Mellin moments with those from lattice QCD and other approaches where
available. We also present the GPDs in transverse position space. | Satvir Kaur, Siqi Xu, Chandan Mondal, Xingbo Zhao, James P. Vary | 2023-07-19T09:58:03Z | http://arxiv.org/abs/2307.09869v2 | # Spatial imaging of proton via leading-twist GPDs with basis light-front quantization
###### Abstract
The internal image of the proton is unveiled by examining the three-dimensional distribution functions, the generalized parton distributions (GPDs), within the basis light-front quantized environment. Several distributions emerge when a quark is sampled with different currents depending upon the helicity arrangements of the active quark and the proton target. We investigate all the leading-twist proton GPDs of the valence quarks, the helicity conserving distributions \((H,E,\tilde{H},\tilde{E})\) as well as the helicity non-conserving \((H_{T},E_{T},\tilde{H}_{T},\tilde{E}_{T})\) distributions. We present the Mellin moments of these distribution functions, where the first moment produces a form factor and the second Mellin moments help extract the information on partonic contributions to the hadronic angular momentum. We compare our results for the Mellin moments with those from lattice QCD and other approaches where available. We also present the GPDs in transverse position space.
Introduction
Probing the hadron's complex internal structure provides knowledge of the non-perturbative aspects of Quantum Chromodynamics (QCD) and insights into fundamental questions such as the nature of confinement. Generalized parton distributions (GPDs) are three-dimensional functions that convey structural details of the hadron. For example, from these functions, one can obtain information about the distribution of partons in the plane transverse to the direction in which the hadron is moving. Alternatively, one can deduce the distribution of the longitudinal momentum carried by the partons. These distribution functions have the potential to address one of the major issues in hadron physics - the proton spin problem. This is simply because the GPDs provide information on the orbital motion of the partons in conjunction with their spatial flavor distributions.
Multi-variable GPDs are functions of \((x,\zeta,t)\) where \(x\) is the longitudinal momentum fraction held by the parton, \(\zeta\) and \(t(=\Delta^{2})\) define the longitudinal momentum transfer from the initial to the final state of a hadron and the square of the total momentum transferred respectively. While they are not probabilistic functions, their two-dimensional (2-d) Fourier transforms from transverse momentum transfer to the impact-parameter plane in the absence of the longitudinal momentum transfer provide a probabilistic interpretation of the GPDs [1; 2].
Further value can be derived from GPDs by implementing certain limits that provide notable 1-d distributions. For instance, the first Mellin moments of different GPDs reproduce different form factors depending upon the helicity configurations of both quark and proton. Also, one can retrieve the parton distribution functions (PDFs) at the forward limit of the GPDs, i.e. when there is no momentum transfer from the initial to the final state of the proton. Further, at the \(t\to 0\) limit, one can find the connection of GPDs via second Mellin moments to the quark's and gluon's angular momentum distribution inside the hadron. An indirect connection has been stated between the basic mechanical properties of the proton, like pressure, shear distributions etc. and the GPDs [3]. It is worth noticing that the different helicity configurations of the active parton and the hadron give rise to different GPDs and they, in turn, provide a bounty of information on the hadron structure and its spin.
The GPDs are generally classified into two categories: the chiral-even GPDs and the
chiral-odd GPDs based on whether the quark helicity is preserved or not. At leading-twist, the chiral-even GPDs are further divided into unpolarized (\(H(x,\zeta,t),E(x,\zeta,t)\)) and helicity-dependent GPDs (\(\tilde{H}(x,\zeta,t),\tilde{E}(x,\zeta,t)\)), where \(H\) and \(\tilde{H}\) appear when the helicity of the proton is conserved in the initial and final states, which is not the case for \(E\) and \(\tilde{E}\). The GPD \(\tilde{E}\) can be evaluated by considering the momentum transfer in the longitudinal direction. Since we focus on extracting the GPDs at \(\zeta=0\), \(\tilde{E}\) is beyond the scope of this work. These GPDs are convoluted with other quantities when forming representations of amplitudes for hard exclusive processes, such as deeply virtual Compton scattering (DVCS) [4; 5; 6] and deeply virtual meson production (DVMP) [7].
Extensive experimental efforts have been undertaken to investigate GPDs. One can cite, for example, H1 [8; 9], ZEUS [10; 11], HERMES at DESY [12; 13; 14], Hall A [15; 16], CLAS at Jefferson Lab (JLab) [17; 18; 19], COMPASS at CERN [20]. Recently, these distributions have been determined by analysing the world electron scattering data [21; 22].
There are a total of four chiral-odd GPDs, also known as transversity GPDs: \(H_{T}(x,\zeta,t)\), \(E_{T}(x,\zeta,t)\), \(\tilde{H}_{T}(x,\zeta,t)\) and \(\tilde{E}_{T}(x,\zeta,t)\). The distribution \(\tilde{E}_{T}\) vanishes for \(\zeta=0\), since it appears to be an odd function under the transformation \(\zeta\to-\zeta\). Such GPDs are quite challenging to measure through hard exclusive processes. Nevertheless, it has been proposed that these GPDs could be probed in diffractive double meson electroproduction [23; 24; 25]. Theoretical efforts have shown the possibility of describing the hard exclusive electroproduction of pseudoscalar mesons by a hard scattering mechanism involving the leading-twist chiral-odd GPDs of the nucleon [26; 27; 28; 29; 30; 31]. The first evidence of the existence of these GPDs was given by the COMPASS collaboration where the exclusive production of \(\rho^{0}\) mesons was studied by scattering muons off transversely polarized protons [32]. Results from GPDs-based model calculations were found to be in agreement with the data. Further, results of exclusive \(\pi^{0}\) and \(\eta\) electroproduction by CLAS collaboration confirm the direct experimental accessibility of transversity GPDs [33; 34]. It is noteworthy that experiments are planned to extract GPDs at upcoming facilities such as the Electron-Ion-Collider (EIC), the EIC in China (EIcC) [35] and the 12 GeV upgrade program at JLab [36; 37].
Theoretically, the proton GPDs have drawn immense attention. Several QCD inspired models have been developed to understand the proton structure (see, for example, Refs. [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]), but deriving the theoretical links with QCD remains a challenge. For instance, numerous studies on moments which are related to the GPDs in certain limits are presented in
the lattice QCD approach [51; 52; 53; 54; 55; 56; 57; 58] and Dyson-Schwinger equations (DSE) approach [59; 60; 61; 62]. Unlike the Euclidean methods, the QCD observables are directly obtainable in Minkowski space-time provided by the basis light-front quantized (BLFQ) environment developed for solving the many-body bound state problems in quantum field theories (QFTs) [63; 64; 65; 66; 67; 68; 69; 70].
We adopt the BLFQ approach which is a convenient framework defined on the light-front to get the hadron spectra and its structure while obtaining the light-front wave functions (LFWFs) through diagonalization of the Hamiltonian. One can access the distributions of sea quarks and gluons in this approach by including the higher Fock sector representations of a hadron. This approach has successfully described the QCD bound states of mesons [66; 67; 71; 72; 73; 74; 75; 76; 77] and baryons [68; 69; 78; 79; 80]. Recently, these states have been expanded in the Fock space including one dynamical gluon component for the pion \(|q\bar{q}g\rangle\)[77] and the nucleon \(|qqqg\rangle\)[81]. Within this approach, one now has access to the gluon distributions based on coupling defined by QCD.
In this work, we employ the LFWFs to study the proton GPDs by taking into account the valence Fock sector \(|qqq\rangle\), where the chiral-even GPDs along with the other applications have already been studied [68; 69; 78]. Note that the proton form factors (FFs), such as electromagnetic FFs and axial-vector FFs have been evaluated in this approach and have been found consistent with the available experimental data [68; 69]. Further, the 1-d proton PDFs, particularly the unpolarized, helicity-dependent and transversity PDFs have been examined by comparing them with the available global fits and measured data [68; 69]. Overall, the results have been found in agreement with the data. In Ref. [78], the angular momentum distributions have been explored, which have been evaluated using the unpolarized and helicity-dependent GPDs. The previous results are encouraging and motivate us to extend this approach to study the chiral-odd GPDs, when the quark helicity flips unlike for the case of chiral-even GPDs. The 3-d chiral-odd GPDs are the extended version of the transversity PDF and tensor form factors, which provide the spatial tomography of the proton when the valence quarks are transversely polarized. In this way, the chiral-odd GPDs provide important details on the correlation between the angular momentum and spin of quarks inside the proton.
Our aim is to investigate the structure of the proton through its GPDs and other observables in greater detail in order to provide a better understanding using the BLFQ approach. Our selected observables include different Mellin moments (so-called generalized form fac
tors) and impact-parameter dependent GPDs.
## II Basis light-front quantization approach
In the BLFQ approach, an eigenvalue problem of the Hamiltonian, \(H_{\rm eff}\left|\Psi\right\rangle=M_{H}^{2}\left|\Psi\right\rangle\), is solved on the light-front (LF). The eigensolutions provide LFWFs, and the eigenvalues are recognized as the hadronic mass spectra (\(M_{H}\)). The former play crucial roles in understanding the detailed structure of QCD bound state systems.
The baryonic state on which the Hamiltonian operator would act, is expanded at fixed LF time as
\[\left|\Psi\right\rangle=\psi_{(qqq)}\left|qqq\right\rangle+\psi_{(qqqq\bar{q} )}\left|qqqq\bar{q}\right\rangle+\psi_{(qqgg)}\left|qqqg\right\rangle+\ldots, \tag{1}\]
where \(q,\bar{q}\) and \(g\) represent quark, antiquark and gluon Fock particles respectively. The significance of LFWFs \(\psi_{(qqq)},\psi_{(qqqq\bar{q})},\psi_{(qqqg)},...\) is to provide the probability amplitudes for the Fock states defined by \(\left|qqq\right\rangle,\left|qqqq\bar{q}\right\rangle,\left|qqqg\right\rangle\) and so on. In this work, we consider only the valence Fock state, i.e., the first term in Eq. (1).
The effective Hamiltonian of the baryonic systems in our chosen Fock space is defined as [68; 69; 78; 79; 80]
\[H_{\rm eff} = \sum_{i}\frac{{\bf k}_{\perp i}^{2}+m_{i}^{2}}{x_{i}}+\frac{1}{2 }\sum_{i\neq j}\kappa^{4}\left(x_{i}x_{j}({\bf r}_{\perp i}-{\bf r}_{\perp j}) ^{2}-\frac{\partial_{x_{i}}\big{(}x_{i}x_{j}\partial_{x_{j}}\big{)}}{(m_{i}+m_ {j})^{2}}\right) \tag{2}\] \[+ \frac{1}{2}\sum_{i\neq j}\frac{4\pi C_{F}\alpha_{s}}{Q_{ij}^{2}} \bar{u}_{s_{i}^{\prime}}(k_{i}^{\prime})\gamma^{\mu}u_{s_{i}}(k_{i})\bar{u}_{ s_{j}^{\prime}}(k_{j}^{\prime})\gamma^{\nu}u_{s_{j}}(k_{j})g_{\mu\nu}\;.\]
The first term in Eq. (2) expresses the kinetic energy with \(m_{i}\) being mass of the valence quark; \(x_{i}\) and \({\bf k}_{\perp i}\) symbolize the longitudinal momentum fraction and transverse momentum carried by \(i\)th constituent of the system with \(\sum_{i}x_{i}=1\) and \(\sum_{i}{\bf k}_{\perp i}=0\). The second term in Eq. (2) expresses the confining potential, which is separately defined in transverse and longitudinal directions. The third term in Eq. (2) refers to the one gluon exchange interactions with coupling constant \(\alpha_{s}\), which underlies the dynamical spin structure in the LFWFs. Here, \(u_{s_{i}}(k_{i})\) represents the Dirac spinor with \(s_{i}\) and \(k_{i}\) being the spin and momentum carried by the \(i\)th valence quark. \(C_{F}\) and \(g_{\mu\nu}\) define the color factor and the metric tensor respectively. Further, the square of the average four-momentum transfer is expressed as \(Q_{ij}^{2}=-q^{2}=-\frac{1}{2}\left((k_{i}^{\prime}-k_{i})^{2}+(k_{j}^{\prime} -k_{j})^{2}\right)\).
Our goal is to follow BLFQ and evaluate the Hamiltonian, defined in Eq. (2), in a suitably-truncated basis and diagonalize it to produce the baryon mass spectra and corresponding
wave functions. For the BLFQ basis, we choose the 2-d harmonic oscillator (HO) basis and the discretized plane-wave basis in transverse and longitudinal directions respectively to expand \(|\Psi\rangle\)[63; 64]. The ortho-normalized 2-d HO basis function in the transverse direction is given by
\[\phi_{n,m}(\mathbf{k}_{\perp};b)=\frac{\sqrt{2}}{b(2\pi)^{3/2}}\sqrt{\frac{n!} {(n+|m|)!}}e^{-k_{\perp}^{2}/2b^{2}}\left(\frac{|k_{\perp}|}{b}\right)^{|m|}L_ {n}^{|m|}\left(\frac{k_{\perp}^{2}}{b^{2}}\right)e^{im\theta}\;, \tag{3}\]
where \(b\) is the HO scale parameter. The quantum numbers \(n\) and \(m\) represent the radial excitation and angular momentum projection respectively of a particle in a 2-d HO. \(L_{n}^{|m|}\) represents the associated Laguerre polynomial.
In the discretized plane-wave basis, the longitudinal momentum fraction of \(i\)th particle is represented by \(x_{i}=\frac{p_{i}^{*}}{P^{*}}=\frac{k_{i}}{K}\) with the dimensionless quantity being \(k=\frac{1}{2},\frac{3}{2},\frac{5}{2},\cdot\cdot\;.\) The values of \(k\) are chosen to signify the choice of anti-periodic boundary conditions. Note that \(K=\sum_{i}k_{i}\). In addition, the total angular momentum projection is defined for many-body basis states as \(M_{J}=\sum_{i}(m_{i}+\lambda_{i})\) with \(\lambda\) being the quark helicity. The effective Hamiltonian in Eq. (2) conserves \(M_{J}\) which leads to efficiencies in numerical calculations. We select \(M_{J}=1/2\) to solve for the proton spectroscopy.
Apart from restricting the Fock space, a further truncation is necessary to limit the basis size within each Fock sector. With our chosen basis, further truncation can be achieved by specifying two basis parameters (\(K\)) and (\(N_{\rm max}\)). The former is conserved by the effective Hamiltonian, which is held fixed and controls the basis in the longitudinal direction. Meanwhile, \(N_{\rm max}\) limits a total transverse quantum number \(N_{\alpha}=\sum_{l}(2n_{l}+|m_{l}|+1)\) for multi-particle basis state \(|\alpha\rangle\) such that \(N_{\alpha}\leq N_{\rm max}\). This parameter acts as an ultaviolet (UV) and an infrared (IR) regulator for the LFWFs with \(\Lambda_{\rm UV}\approx b\sqrt{N_{\rm max}}\) and \(\Lambda_{\rm IR}\approx b/\sqrt{N_{\rm max}}\) respectively [64].
The resulting LFWFs in momentum space are expressed as
\[\Psi_{\{x_{i},\mathbf{k}_{ii},\lambda_{i}\}}^{\Lambda}=\langle P,\Lambda|\left\{ x_{i},\mathbf{k}_{\perp i},\lambda_{i}\right\}\rangle=\sum_{\{n_{i},m_{i}\}} \left(\psi_{\{x_{i},n_{i},m_{i},\lambda_{i}\}}^{\Lambda}\prod_{i}\phi_{n_{i},m _{i}}(\mathbf{k}_{\perp i};b)\right)\,, \tag{4}\]
with \(\psi_{\{x_{i},n_{i},m_{i},\lambda_{i}\}}^{\Lambda}=\langle P,\Lambda|\left\{ x_{i},n_{i},m_{i},\lambda_{i}\right\}\rangle\) being the LFWF in BLFQ's chosen basis and with \(P\) and \(\Lambda\) being the momentum and helicity of the target spin-1/2 composite system respectively.
To produce our LFWFs, the basis truncation parameters in the transverse and longitudinal directions are taken as \(N_{\rm max}=10\) and \(K=16.5\) respectively [68; 69]. Besides this, other model parameters are fixed in a way that they provide the known nucleon mass
and the electromagnetic form factors [68; 69], leading us to adopt \(\{m_{q/{\rm K.E.}},m_{q/{\rm OGE}},\kappa,\alpha_{s}\}=\{0.3~{}{\rm GeV},0.2~{}{ \rm GeV},0.34~{}{\rm GeV},1.1\pm 0.1\}\) where \(m_{q/{\rm K.E.}}\) and \(m_{q/{\rm OGE}}\) represent the quark masses in kinetic energy and OGE interaction terms with the HO scale parameter being \(b=0.6~{}{\rm GeV}\). The calculated LFWFs for the valence Fock sector using these parameters, which imply a model scale \(\mu_{0}^{2}=0.195\pm 0.020~{}{\rm GeV}^{2}\)[68; 69], are employed to provide physical observables and distribution functions of the valence quarks inside the proton. In this work, we specifically study the proton GPDs using these LFWFs.
## III Generalized parton distributions (GPDs)
The 3-d spatial distributions are categorized as chiral-even and chiral-odd GPDs and are defined through the non-forward matrix elements of the bilocal operators between hadronic states. The connection between the correlator functions and various GPDs strictly depends upon the bilocal operator. Note that, for the present work, we restrict ourselves to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) region, \(\zeta<x<1\), where the number of partons in the initial and the final states remains conserved. Also, the momentum transfer in the longitudinal direction is taken to be zero. Accordingly, these distributions are classified and parameterized as we now show.
The distributions where the quark does not transfer helicity are parameterized as [82]
\[\int\frac{{\rm d}z^{-}}{8\pi}e^{\iota xP^{*}z^{-}/2}\left\langle P ^{\prime},\Lambda^{\prime}\right|\bar{\varphi}(0)\gamma^{+}\varphi(z^{-}) \left|P,\Lambda\right\rangle\left|{}_{z^{+}={\bf z}_{i}=0} = \frac{1}{2\bar{P}^{+}}\bar{u}(P^{\prime},\Lambda^{\prime})\! \left(H^{q}(x,\zeta,t)\gamma^{+}\right.\right. \tag{5}\] \[\left.\left.+E^{q}(x,\zeta,t)\frac{\iota\sigma^{+\alpha}\Delta_{ \alpha}}{2M_{P}}\right)u(P,\Lambda)\;,\right.\] \[\left.\int\frac{{\rm d}z^{-}}{8\pi}e^{\iota xP^{*}z^{-}/2}\left\langle P ^{\prime},\Lambda^{\prime}\right|\bar{\varphi}(0)\gamma^{+}\gamma_{5}\varphi( z^{-})\left|P,\Lambda\right\rangle\left|{}_{z^{+}={\bf z}_{i}=0} = \frac{1}{2\bar{P}^{+}}\bar{u}(P^{\prime},\Lambda^{\prime})\! \left(\tilde{H}^{q}(x,\zeta,t)\gamma^{+}\gamma_{5}\right.\right.\] (6) \[\left.\left.+\tilde{E}^{q}(x,\zeta,t)\frac{\gamma_{5}\Delta^{+} }{2M_{P}}\right)u(P,\Lambda)\;.\right.\]
On the other hand, the transversity distributions where the quark transfers helicity are parameterized as [82]
\[\int\frac{{\rm d}z^{-}}{8\pi}e^{\iota xP^{*}z^{-}/2}\left\langle P ^{\prime},\Lambda^{\prime}\right|\bar{\varphi}(0)\sigma^{+j}\gamma_{5}\varphi (z^{-})\left|P,\Lambda\right\rangle\left|{}_{z^{+}={\bf z}_{i}=0}=\frac{1}{2 \bar{P}^{+}}\bar{u}(P^{\prime},\Lambda^{\prime})\!\left(H^{q}_{T}(x,\zeta,t) \sigma^{+j}\gamma_{5}\right.\right.\] \[\left.\left.+\tilde{H}^{q}_{T}(x,\zeta,t)\frac{\epsilon^{+j \alpha\beta}\Delta_{\alpha}\bar{P}_{\beta}}{M_{P}^{2}}+E^{q}_{T}(x,\zeta,t) \frac{\epsilon^{+j\alpha\beta}\Delta_{\alpha}\gamma_{\beta}}{2M_{P}}+\tilde{E }^{q}_{T}(x,\zeta,t)\frac{\epsilon^{+j\alpha\beta}\bar{P}_{\alpha}\gamma_{ \beta}}{M_{P}}\right)u(P,\Lambda)\;,\right. \tag{7}\]
with \(M_{P}\) being the mass of the target spin-1/2 composite system which is the proton in our case. We choose a frame such that the momenta of the target proton at the initial and final state, at \(\zeta=0\), become
\[P = \left(P^{+},\frac{M_{P}^{2}}{P^{+}},\mathbf{0}_{\perp}\right)\,, \tag{8}\] \[P^{\prime} = \left(P^{+},\frac{M_{P}^{2}+\mathbf{\Delta}_{\perp}^{2}}{P^{+}},- \mathbf{\Delta}_{\perp}\right)\,. \tag{9}\]
The overlap representation of the GPDs in terms of the LFWFs for \(\zeta=0\) are expressed as
\[H^{q}(x,0,t) = \sum_{\{\lambda_{i}\}}\int\left[\mathrm{d}\mathcal{X}\mathrm{d} \mathcal{K}_{\perp}\right]\Psi^{\uparrow*}_{\{x^{\prime}_{i},\mathbf{k}^{ \prime}_{ii},\lambda_{i}\}}\Psi^{\uparrow}_{\{x_{i},\mathbf{k}_{ii},\lambda_{ i}\}}\delta(x-x_{1})\;, \tag{10}\] \[E^{q}(x,0,t) = -\frac{2M}{(\Delta^{1}-\iota\Delta^{2})}\sum_{\{\lambda_{i}\}} \int\left[\mathrm{d}\mathcal{X}\mathrm{d}\mathcal{K}_{\perp}\right]\Psi^{ \uparrow*}_{\{x^{\prime}_{i},\mathbf{k}^{\prime}_{ii},\lambda_{i}\}}\Psi^{ \downarrow}_{\{x_{i},\mathbf{k}_{ii},\lambda_{i}\}}\delta(x-x_{1})\;,\] (11) \[\tilde{H}^{q}(x,0,t) = \sum_{\{\lambda_{i}\}}\int\left[\mathrm{d}\mathcal{X}\mathrm{d} \mathcal{K}_{\perp}\right]\lambda_{1}\Psi^{\uparrow*}_{\{x^{\prime}_{i}, \mathbf{k}^{\prime}_{ii},\lambda_{i}\}}\Psi^{\uparrow}_{\{x_{i},\mathbf{k}_{ii },\lambda_{i}\}}\delta(x-x_{1})\;,\] (12) \[E^{q}_{T}(x,0,t) + 2\tilde{H}^{q}_{T}(x,0,t)=\sum_{\{\lambda^{\prime}_{i},\lambda_{ i}\}}\int\left[\mathrm{d}\mathcal{X}\mathrm{d}\mathcal{K}_{\perp}\right] \Psi^{\uparrow*}_{\{x^{\prime}_{i},\mathbf{k}^{\prime}_{ii},\lambda^{\prime}_{ i}\}}\Psi^{\uparrow}_{\{x_{i},\mathbf{k}_{ii},\lambda_{i}\}}\delta(x-x_{1})\;,\] (13) \[H^{q}_{T}(x,0,t) = \sum_{\{\lambda^{\prime}_{i},\lambda_{i}\}}\int\left[\mathrm{d} \mathcal{X}\mathrm{d}\mathcal{K}_{\perp}\right]\Psi^{\uparrow*}_{\{x^{\prime} _{i},\mathbf{k}^{\prime}_{ii},\lambda^{\prime}_{i}\}}\Psi^{\downarrow}_{\{x_{ i},\mathbf{k}_{ii},\lambda_{i}\}}\delta(x-x_{1})\;,\] (14) \[\tilde{H}^{q}_{T}(x,0,t) = -\sum_{\{\lambda^{\prime}_{i},\lambda_{i}\}}\int\left[\mathrm{d} \mathcal{X}\mathrm{d}\mathcal{K}_{\perp}\right]\Psi^{\downarrow*}_{\{x^{\prime} _{i},\mathbf{k}^{\prime}_{ii},\lambda^{\prime}_{i}\}}\Psi^{\uparrow}_{\{x_{i}, \mathbf{k}_{ii},\lambda_{i}\}}\delta_{\lambda_{i},-\lambda^{\prime}_{i}} \delta(x-x_{1})\;, \tag{15}\]
where
\[\left[\mathrm{d}\mathcal{X}\mathrm{d}\mathcal{K}_{\perp}\right] = \prod_{i=1}^{3}\frac{\mathrm{d}x_{i}\,\mathrm{d}^{2}\mathbf{k}_{ \perp i}}{16\pi^{3}}\,16\pi^{3}\,\delta\left(1-\sum_{i=1}^{3}x_{i}\right)\, \delta^{2}\left(\sum_{i=1}^{3}\mathbf{k}_{\perp i}\right)\,, \tag{16}\]
with the longitudinal momentum fraction and the transverse momentum for the active quark being \(x^{\prime}_{1}=x_{1}\) and \(\mathbf{k}^{\prime}_{\perp 1}=\mathbf{k}_{\perp 1}+(1-x_{1})\mathbf{\Delta}_{\perp}\). For the spectators these momenta become \(x^{\prime}_{i}=x_{i}\) and \(\mathbf{k}^{\prime}_{\perp i}=\mathbf{k}_{\perp i}-x_{i}\mathbf{\Delta}_{\perp}\). Here, \(t=-\mathbf{\Delta}_{\perp}^{2}\) when the skewness \(\zeta=0\).
We show the results of the chiral-even GPDs \((H,E,\tilde{H})\) for the valence quarks in Fig. 1, where the distribution functions are plotted with respect to the light-cone momentum \((x)\) and the square of the total momentum transferred to the final proton state \((-t)\). The GPDs for \(u\) and \(d\) quarks are presented in the left and the right panels of Fig. 1 respectively. We find that the distributions have their maxima when the proton does not transfer transverse momentum to its final state and the struck quark inside the proton carries less than 50% of the proton's longitudinal momentum. As expected, by increasing the momentum transfer in the transverse direction, the distribution peak shifts gradually towards the higher values
of \(x\) accompanied by a continuous drop in the magnitude. At the large \(x\)-region, all the \(x\)-dependence of \(x\) is observed in the \(x\)-dependence of \(x\). The \(x\)-dependence of \(x\) is shown in Fig. 1. The \(x\)-dependence of \(x\) is shown in Fig. 2. The \(x\)-dependence of \(x\) is shown in Fig. 3. The \(x\)-dependence of \(x\) is shown in Fig. 4. The \(x\)-dependence of \(x\) is shown in Fig. 5. The \(x\)-dependence of \(x\) is shown in Fig. 6. The \(x\)-dependence of \(x\) is shown in Fig. 7. The \(x\)-dependence of \(x\) is shown in Fig. 8. The \(x\)-dependence of \(x\) is shown in Fig. 9. The \(x\)-dependence of \(x\) is shown in Fig. 10. The \(x\)-dependence of \(x\) is shown in Fig. 11. The \(x\)-dependence of \(x\) is shown in Fig. 12. The \(x\)-dependence of \(x\) is shown in Fig. 13. The \(x\)-dependence of \(x\) is shown in Fig. 14. The \(x\)-dependence of \(x\) is shown in Fig. 15. The \(x\)-dependence of \(x\) is shown in Fig. 16. The \(x\)-dependence of \(x\) is shown in Fig. 17. The \(x\)-dependence of \(x\) is shown in Fig. 18. The \(x\)-dependence of \(x\) is shown in Fig. 19. The \(x\)-dependence of \(x\) is shown in Fig. 19. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The \(x\)-dependence of \(x\) is shown in Fig. 23. The \(x\)-dependence of \(x\) is shown in Fig. 24. The \(x\)-dependence of \(x\) is shown in Fig. 25. The \(x\)-dependence of \(x\) is shown in Fig. 26. The \(x\)-dependence of \(x\) is shown in Fig. 27. The \(x\)-dependence of \(x\) is shown in Fig. 28. The \(x\)-dependence of \(x\) is shown in Fig. 29. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 20. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 21. The \(x\)-dependence of \(x\) is shown in Fig. 22. The
distributions eventually decay and become independent of \(t\). However, this decay is observed to be faster for \(E^{q}\) than the other GPDs (\(H^{q}\) and \(\tilde{H}^{q}\)). As the anomalous magnetic moment and the axial charge are measured to be negative for the \(d\)-quark, the connected GPDs, \(E^{d}\) and \(\tilde{H}^{d}\), are correspondingly negative. All the mentioned features appear to be model-independent as they have also been observed in other QCD inspired models [38; 39; 40; 46; 47; 50].
When there is no momentum transfer (\(t=0\)), these distributions reproduce the valence quark distribution functions, particularly, the unpolarized and helicity-dependent functions, i.e., \(H^{q}(x,0,0)=f^{q}(x)\) and \(\tilde{H}^{q}(x,0,0)=g^{q}(x)\). These 1-d functions have been studied previously in the BLFQ approach by considering both the Fock sector containing valence quarks (\(|qqq\rangle\)) [68; 69] and one beyond this sector (\(|qqqg\rangle\)) [81]. Additionally, the detailed interpretation of the moments, which are functions of \(t\), is given below in Section III.1.
The 3-d graphical representations of the chiral-odd proton GPDs (\(H_{T},E_{T},\tilde{H}_{T}\)) for the \(u\)-quark and the \(d\)-quark are shown in the left and right panels of Fig. 2 respectively. Similar to the helicity conserving GPDs, these helicity non-conserving distribution functons are illustrated as functions of \(x\) and \(-t\). All the helicity flip distributions show similar behavior as for the case of helicity non-flip GPDs, except the behavior of \(E_{T}^{q}\) and \(\tilde{H}_{T}^{q}\) in the small \(x\) region. In that case, the peaks observed near \(x\to 0\) are model-dependent [41; 49]. All the flavor distribution peaks move along \(x\), when the momentum transfer from the initial proton is increased gradually. A noteworthy distribution is the combination of two chiral-odd GPDs, \(2\tilde{H}_{T}^{q}+E_{T}^{q}\) which provides the details on the angular momentum contribution at certain limits and is reducible to the tensor form factor. We observe zero crossing points in \(E_{T}^{d}\) along \(x\), which has also been observed in other models [41; 49]. Our GPDs' results, for the case of \(H_{T}^{u}\) and \(\tilde{H}_{T}^{u}\), are observed to be opposite to that of the \(d\)-quark distributions. Similar to \(H^{q}\) and \(\tilde{H}^{q}\), the helicity non-conserving GPD \(H_{T}^{q}\) is reducible to transversity PDF, \(H_{T}^{q}(x,0,0)=h^{q}(x)\) and has been previously studied in this approach [68; 69; 81].
We perform the QCD evolution to obtain the GPDs at higher scale \(\mu^{2}\) using the next-to-next-to-leading order (NNLO) Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equations of QCD [83; 84; 85; 86; 87; 88; 89; 90]. To solve these equations numerically, we use the Higher Order Perturbative Parton Evolution (HOPPET) toolkit [91]. We show the evolved chiral-even GPDs, unpolarized GPD \(H^{q}\) and helicity GPD \(\tilde{H}^{q}\), for both \(u\) and \(d\) quarks at different values of momentum transferred (\(t\)) in Fig. 3. The evolution is performed from the initial
Figure 2: The chiral-odd GPDs (a) \(H_{T}(x,0,t)\), (c) \(E_{T}(x,0,t)\) and (e) \(\tilde{H}_{T}(x,0,t)\) for the \(u\)-quark, where the respective GPDs for the \(d\)-quark are shown in (b), (d) and (f). The GPDs are presented with respect to \(x\) and \(-t\) (in GeV\({}^{2}\)).
scale \(\mu_{0}^{2}=0.195\) GeV\({}^{2}\) to \(\mu^{2}=5\) GeV\({}^{2}\). The method is not well-established for the evolution of transversity GPD, hence we refrain from evaluating this at the higher scale.
### Mellin moments of GPDs
For zero skewness (\(\zeta=0\)), the moments of the valence quark GPDs are defined as
\[[\mathrm{Mellin-moment}]_{n0}^{q}(t)=\int_{0}^{1}\mathrm{d}x\,x^{n-1}\,[ \mathrm{GPD}]^{q}(x,0,t)\;, \tag{17}\]
where \(n=1,2,3,...\) represent first, second, third moments and so on. The first moments of GPDs provide form factors depending upon the helicity configurations of the active quark and the proton. Specifically, the first moments of the unpolarized GPDs, \(H^{q}(x,0,t)\) and \(E^{q}(x,0,t)\), provide the Dirac and Pauli form factors, \(F_{1}^{q}(t)\) and \(F_{2}^{q}(t)\), respectively. The helicity-dependent GPDs, \(\tilde{H}^{q}(x,0,t)\) and \(\tilde{E}^{q}(x,0,t)\), give the axial-vector form factor \(G_{A}^{q}(t)\) and the pseudo-scalar form factor \(G_{P}^{q}(t)\) respectively. Lastly, the tensor form factors \(g_{T}^{q}(t)\)
and \(\kappa_{T}^{q}(t)\) are provided by the chiral-odd GPDs. Mathematically,
\[F_{1}^{q}(t)=A_{10}^{q}(t)=\int\mathrm{d}xH^{q}(x,0,t)\quad,\quad F _{2}^{q}(t)=B_{10}^{q}(t)=\int\mathrm{d}xE^{q}(x,0,t)\;, \tag{18}\] \[G_{A}^{q}(t)=\tilde{A}_{10}^{q}(t)=\int\mathrm{d}x\tilde{H}^{q}( x,0,t)\quad,\quad G_{P}^{q}(t)=\tilde{B}_{10}^{q}(t)=\int\mathrm{d}x\tilde{E}^{q}( x,0,t)\;,\] (19) \[g_{T}^{q}(t)=A_{T10}^{q}(t)=\int\mathrm{d}xH_{T}^{q}(x,0,t)\quad, \quad\kappa_{T}^{q}(t)=\bar{B}_{T10}^{q}(t)=\int\mathrm{d}x[E_{T}^{q}(x,0,t)+2 \tilde{H}_{T}^{q}(x,0,t)]\;, \tag{20}\]
and
\[A_{20}^{q}(t)=\int\mathrm{d}xxH^{q}(x,0,t)\quad,\quad B_{20}^{q }(t)=\int\mathrm{d}xxE^{q}(x,0,t)\;, \tag{21}\] \[\tilde{A}_{20}^{q}(t)=\int\mathrm{d}xx\tilde{H}^{q}(x,0,t)\quad, \quad\tilde{B}_{20}^{q}(t)=\int\mathrm{d}xx\tilde{E}^{q}(x,0,t)\;,\] (22) \[A_{T20}^{q}(t)=\int\mathrm{d}xxH_{T}^{q}(x,0,t)\quad,\quad\bar{ B}_{T20}^{q}(t)=\int\mathrm{d}xx[E_{T}^{q}(x,0,t)+2\tilde{H}_{T}^{q}(x,0,t)]\;. \tag{23}\]
Figure 4: (a) The first and (b) second Mellin moments of GPDs, also known as generalized form factors, for the \(u\)-quark (upper) and the \(d\)-quark (lower) w.r.t. the square of the momentum transfer \(-t\) (in GeV\({}^{2}\)).
Now, in the forward limit (\(t=0\)), the Dirac form factors exhibit the normalization as
\[F_{1}^{u}(0)=2\hskip 28.452756pt,\hskip 28.452756ptF_{1}^{d}(0)=1\;, \tag{24}\]
the Pauli form factors are regarded as the anomalous magnetic moments,
\[F_{2}^{q}(0)=\kappa^{q}\;, \tag{25}\]
the axial-vector form factors and pseudoscalar form factors are regarded as the axial-vector coupling constant (axial charge) and pseudo-scalar coupling constant,
\[G_{A}^{q}(0)=g_{A}^{q}\hskip 28.452756pt\text{and}\hskip 28.452756ptG_{P}^{q}(0)=g_ {P}^{q}\;, \tag{26}\]
respectively, and the tensor form factors identify with the tensor charge \(g_{T}^{q}\) and tensor magnetic moment \(\kappa_{T}^{q}\).
In Fig. 4a, we show the first Mellin moment of both chiral-even and chiral-odd GPDs, defined in Eq. (18)-(20). As discussed earlier, the first moment of unpolarized and helicity GPDs represent the Dirac, Pauli and axial form factors. The detailed study of these form factors can be found in Ref. [68; 69], where reasonable agreement with the experimental data has been observed. Along with these form factors, we evaluate tensor form factors which are connected with \(A_{T10}^{q}\) and \(\bar{B}_{T10}^{q}\).
The transversity GPDs are connected with the tensor FFs \(A_{T10}^{q}(=g_{T}^{q}(t))\) and \(\bar{B}_{T10}^{q}(=\kappa_{T}^{q}(t))\), and are illustrated in Fig. 4a. When comparing with other published studies, we find that our tensor FFs qualitatively agree with the lattice QCD predictions for both \(u\) and \(d\) quarks [56; 57] and with other model predictions [41; 92; 93; 49]. We illustrate the \(t\)-dependence of the first Mellin moments of chiral-odd GPDs, and compare our predictions with that of the lattice QCD approach [56; 57] and the chiral quark soliton model (\(\chi\)QSM) [92; 93] in Fig. 5. Since the lattice QCD method and the \(\chi\)QSM have predicted their results at 4 GeV\({}^{2}\) and 0.36 GeV\({}^{2}\) respectively which are considerably different from our model scale \(\mu_{0}^{2}=0.195\pm 0.020\) GeV\({}^{2}\)[68; 69], the comparison between them is only qualitative though some similarities are apparent. Our model scale could be significantly increased when we increase our BLFQ basis spaces to include Fock components beyond the valence sector. We anticipate that, with these planned model improvements, our results may then become more comparable with the lattice QCD and other model predictions.
In Table 1, we summarize our predictions for the first Mellin moments or the generalized form factors of the valence quark GPDs, when there is no momentum transfer from the initial
to the final state of proton (\(t=0\)). Similar to the other model results, we also observe that the tensor charge \(g_{T}^{q}(0)(=A_{T10}^{q})\) is larger than the axial charge \(g_{A}^{q}(0)(=\tilde{A}_{10}^{q}(0))\), regardless of the sign. However, the difference is observed to be small in our BLFQ computation.
In Fig. 4b, we present the second moment of the proton GPDs where, through a visual comparison to published lattice QCD results [56; 57], we again find similarities in the qualitative behavior. To elucidate the comparison with lattice QCD results, we show the \(t\)-dependence of the second Mellin moments of the chiral-odd GPDs in Fig. 6. Again, the predictions are made at different scales so that the comparison is only qualitative. The
Figure 5: The normalized first Mellin moments with respect to \(-t\) (in GeV\({}^{2}\)) compared with the predictions of the lattice QCD approach [56; 57] and the chiral quark soliton model (\(\chi\)QSM) [92; 93].
\begin{table}
\begin{tabular}{|l|c c c c c|} \hline Quantity & \(A_{10}(0)\) & \(B_{10}(0)\) & \(\tilde{A}_{10}(0)\) & \(A_{T10}(0)\) & \(\bar{B}_{T10}(0)\) \\ \hline \(u\)-quark & 2 & 1.367 & 1.162 & 1.251 & 3.208 \\ \(d\)-quark & 1 & -1.279 & -0.249 & -0.270 & 2.432 \\ \hline \end{tabular}
\end{table}
Table 1: The values of first moment of the GPDs at \(t=0\) for \(u\) and \(d\) quarks in BLFQ.
second moments give information about the gravitational form factors and our results for \(u\) and \(d\) quark contributions at \(t=0\) are shown in Table 2. Further, by simply adding the
Figure 6: The normalized second Mellin moments with respect to \(-t\) (in GeV\({}^{2}\)) compared with the predictions of the lattice QCD approach [56; 57].
Figure 7: The isoscalar generalized form factors w.r.t. the square of the momentum transfer \(-t\) (in GeV\({}^{2}\)).
second Mellin moments for \(u\) and \(d\) quarks, we obtain the isoscalar generalized FFs that we present in Fig. 7. Unlike other generalized FFs, \(B_{20}^{u+d}\) is observed to be independent of \(-t\) as required by the Ji sum rule [94].
According to Ji sum rule, the second moment of the chiral-even GPDs give the partonic contribution to the total angular momentum of the proton [94], we have
\[J_{q}^{z}=\frac{1}{2}\int\,dxx\left[H^{q}(x,0,0)+E^{q}(x,0,0)\right]=\frac{1}{2 }\left[A_{20}^{q}(0)+B_{20}^{q}(0)\right]\,. \tag{27}\]
From Table 2, we observe that Ji sum rule is satisfied in our model as we get \(J_{q}^{z}=1/2\) following Eq. (27). The detailed study on the total angular momentum contribution of the partons, when they do not flip their helicities, can be found in Ref. [78].
On the other hand, the second moments of chiral-odd GPDs are connected with the angular momentum carried by the partons with the transverse spin along the \(\widehat{x}\) direction in an unpolarized proton, \(J_{q}^{x}\). According to Burkardt [95], this quantity is one half of the expectation value of the transversity asymmetry
\[\left\{\delta^{x}J_{q}^{x}\right\}=\frac{1}{2}\int\,dxx\left[H_{T}^{q}(x,0,0) +2\tilde{H}_{T}^{q}(x,0,0)+E_{T}^{q}(x,0,0)\right]=\frac{1}{2}\left[A_{T20}^{ q}(0)+\bar{B}_{T20}^{q}(0)\right]\,. \tag{28}\]
The obtained values of transversity asymmetry for \(u\)-quark and \(d\)-quark in our model as well as a comparison with other predictions from the harmonic oscillator (HO) model [41], hypercentral constituent quark model (CQM) [41], and chiral quark soliton model (CQSM) [96] are shown in Table 3. We observe that our predictions are close to those of the HO model. Note that the methods chosen for comparison may have different initial scales, which could be a significant source of the differences in the results among the models.
\begin{table}
\begin{tabular}{|l|c c c c c|} \hline Quantity & \(A_{20}(0)\) & \(B_{20}(0)\) & \(\tilde{A}_{20}(0)\) & \(A_{T20}(0)\) & \(\bar{B}_{T20}(0)\) \\ \hline \(u\)-quark & 0.681 & 0.335 & 0.419 & 0.445 & 0.802 \\ \(d\)-quark & 0.319 & -0.335 & -0.084 & -0.088 & 0.604 \\ \hline \end{tabular}
\end{table}
Table 2: The values of second moment of the GPDs at \(t=0\) for \(u\) and \(d\) quarks in BLFQ.
### Impact-parameter dependent GPDs
Taking the 2-d Fourier transform of GPDs w.r.t. \(\mathbf{\Delta}_{\perp}\) leads to the GPDs in transverse impact-parameter \(\mathbf{(b_{\perp})}\) plane [1; 2]. We have
\[[\text{GPD}](x,0,\mathbf{b}_{\perp}) =\frac{1}{(2\pi)^{2}}\int\text{d}^{2}\mathbf{\Delta}_{\perp}\,e^{-t \mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,[\text{GPD}](x,0,t)\] \[=\frac{1}{2\pi}\int\Delta\text{d}\Delta J_{0}(\Delta b)[\text{GPD }](x,0,t)\:, \tag{29}\]
where \(\Delta=|\mathbf{\Delta}_{\perp}|\). The parameter \(b=|\mathbf{b}_{\perp}|\) describes the transverse distance between the active quark and the center of momentum of the proton and satisfies the condition that \(\sum_{i}x_{i}b_{i}=0\), where the sum runs over the partons.
We show the unpolarized and helicity GPDs for the valence quarks as a function of \(x\) and \(b\) in Fig. 8. Furthermore, we present the 3-d graphical representation of the transversity GPDs with respect to \(x\) and \(b\) in Fig. 9. We observe that all the GPDs, regardless of their signs, show a decrease in the width of the valence quark distributions in transverse impact-parameter plane, as \(x\) increases. For example, in Fig. 8a, we find that the width decreases from 1.00 fm to 0.58 fm as \(x\) increases from 0.5 to 0.7. This implies that when the quarks take a larger longitudinal momentum fraction, they locate near the center of transverse position (\(b=0\)). On the other hand, the peak of distributions shift towards the lower values of \(x\) accompanied by decreasing magnitude, and as we go away \(b=0\). Eventually, the distribution vanishes with increasing transverse distance. The rate of the dropoff varies for different GPDs, depending upon the helicities of both target proton and the active quark. All the GPDs have maximum distributions at \(b=0\). When the valence quarks carry more than 50% of the longitudinal momentum, the GPDs \(H^{q},\tilde{H}^{q}\) and \(H_{T}^{q}\) are observed to have this maxima, while the other GPDs have peaks at \(x<0.5\). Further, the flavor distributions
\begin{table}
\begin{tabular}{|c c c c c|} \hline Transversity asymmetry & BLFQ & HO & Hypercentral CQM & CQSM \\ \hline \(\langle\delta^{x}J_{u}^{x}\rangle\) & 0.62 & 0.68 & 0.39 & 0.49 \\ \(\langle\delta^{x}J_{d}^{x}\rangle\) & 0.26 & 0.28 & 0.10 & 0.22 \\ \hline \end{tabular}
\end{table}
Table 3: Our transversity asymmetry values for \(u\)-quark and \(d\)-quark in the proton compared with the predictions of harmonic oscillator (HO) model [41], hypercentral constituent quark model (CQM) [41], and chiral quark soliton model (CQSM) [96].
Figure 8: The chiral-even GPDs: (a) \(H(x,0,b)\), (c) \(E(x,0,b)\) and (e) \(\tilde{H}(x,0,b)\) for the \(u\)-quark, where the respective GPDs for the \(d\)-quark are shown in (b), (d) and (f). The GPDs are presented with respect to \(x\) and \(b\) (in fm).
Figure 9: The chiral-odd GPDs: (a) \(H_{T}(x,0,b)\), (c) \(E_{T}(x,0,b)\) and (e) \(\tilde{H}_{T}(x,0,b)\) for the \(u\)-quark, where the respective GPDs for the \(d\)-quark are shown in (b), (d) and (f). The GPDs are presented with respect to \(x\) and \(b\) (in fm).
\(E^{d},\tilde{H}^{d},H^{d}_{T},\tilde{H}^{u}_{T}\) are found negative. These signs in transverse \(b\)-space are directly traced to the GPDs in momentum space \((x,t)\). Our impact-parameter dependent GPDs show similarities with those of various studies in the literature [97; 98; 99; 2; 46; 47; 2; 49] which leads us to suggest that there is an emerging trend towards model-independent characteristics.
## IV Conclusions
In this work, we have presented the leading-twist generalized parton distributions (GPDs) for the proton using the basis light-front quantization (BLFQ) approach, where the effective light-front Hamiltonian includes the transverse and longitudinal confinement as well as the one gluon exchange interaction between the valence quarks. The proton light-front wave functions (LFWFs) have been obtained by treating it as a relativistic three-body system and by diagonalizing the effective Hamiltonian matrix numerically. The resulting LFWFs are utilized to study the various static and dynamic properties, relevant to the low-energy regime. The parameters used in this work were previously fixed to reproduce the proton mass and electromagnetic form factors (FFs) [68; 69].
The multi-dimensional GPDs, when taken to specific limits, represent unified versions of form factors (FFs) and parton distribution functions (PDFs). We have computed the chiral-even and chiral-odd GPDs of the proton to observe the \(u\) and \(d\) quarks in momentum space, where we take into account the different configurations of helicities of both the active quark and target proton. We have found qualitatively similar behavior for all the distributions to those in related studies [46; 47; 48; 21; 49; 50; 22; 23; 24; 41]. Further, these functions are capable of producing the different FFs based on the helicity dependence of both quark and the proton, such that the unpolarized, helicity and transversity GPDs provide Dirac, Pauli, axial, and tensor FFs and are also known as the first Mellin moments of the GPDs. We observed that these FFs qualitatively match available predictions from other approaches [49; 56; 41; 92; 93; 41]. We have also computed the second moments of the GPDs, which provide precise information on the gravitational form factors and are linked with the total angular momentum contributions of partons inside the proton. In addition, we have computed the GPDs in transverse position space. Again, we have found that our results show similar qualitative descriptions as obtained in other models.
Our approach can be systematically improved by incorporating Fock sectors beyond the
valence quark component (\(|qqq\rangle\)). Our future efforts will include higher Fock components of the proton, for example, \(|qqqq\bar{q}\rangle\), \(|qqqqg\rangle\) and so on. It will also be of great interest to see the distributions of sea quarks and gluons describing the proton structure with input from light-front QCD to the model at its initial scale.
## Acknowledgements
SK is supported by Research Fund for International Young Scientists, Grant No. 12250410251, from the National Natural Science Foundation of China (NSFC). CM thanks the Chinese Academy of Sciences President's International Fellowship Initiative for the support via Grants No. 2021PM0023. CM is also supported by new faculty start up funding by the Institute of Modern Physics, Chinese Academy of Sciences, Grant No. E129952YR0. XZ is supported by new faculty startup funding by the Institute of Modern Physics, Chinese Academy of Sciences, by Key Research Program of Frontier Sciences, Chinese Academy of Sciences, Grant No. ZDBS-LY-7020, by the Natural Science Foundation of Gansu Province, China, Grant No. 20JR10RA067, by the Foundation for Key Talents of Gansu Province, by the Central Funds Guiding the Local Science and Technology Development of Gansu Province, Grant No. 22ZY1QA006, by international partnership program of the Chinese Academy of Sciences, Grant No. 016GJHZ2022103FN and by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB34000000. JPV acknowledges partial support from the Department of Energy under Grant Nos. DE-FG02-87ER40371 and DE-SC0023692. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award NP-ERCAP0020944. A portion of the computational resources were also provided by Gansu Computing Center.
|
2306.06873 | Affine Deligne-Lusztig varieties via the double Bruhat graph II:
Iwahori-Hecke algebra | We introduce a new language to describe the geometry of affine
Deligne-Lusztig varieties in affine flag varieties. This second part of a two
paper series uses this new language, i.e. the double Bruhat graph, to describe
certain structure constants of the Iwahori-Hecke algebra. As an application, we
describe nonemptiness and dimension of affine Deligne-Lusztig varieties for
most elements of the affine Weyl group and arbitrary $\sigma$-conjugacy
classes. | Felix Schremmer | 2023-06-12T05:12:59Z | http://arxiv.org/abs/2306.06873v2 | # Affine Deligne-Lusztig varieties via the double Bruhat graph II:
###### Abstract
We introduce a new language to describe the geometry of affine Deligne-Lusztig varieties in affine flag varieties. This second part of a two paper series uses this new language, i.e. the double Bruhat graph, to describe certain structure constants of the Iwahori-Hecke algebra. As an application, we describe nonemptiness and dimension of affine Deligne-Lusztig varieties for most elements of the affine Weyl group and arbitrary \(\sigma\)-conjugacy classes.
## 1 Introduction
In a seminal paper, Deligne-Lusztig [1] introduced a class of varieties, which they use to describe many representations of finite groups of Lie type. An analogous construction yields the so-called affine Deligne-Lusztig varieties, which play an important role e.g. in the reduction of Shimura varieties [1, 2]. Continuing the treatment of [1], we study affine Deligne-Lusztig varieties in affine flag varieties.
Let \(G\) be a reductive group defined over a local field \(F\), whose completion of the maximal unramified extension we denote by \(\breve{F}\). Denote the Frobenius of \(\breve{F}/F\) by \(\sigma\) and pick a \(\sigma\)-stable Iwahori subgroup \(I\subseteq G(\breve{F})\). The affine Deligne-Lusztig variety \(X_{x}(b)\) associated to two elements \(x,b\in G(\breve{F})\) is the reduced ind-subscheme of the affine flag variety \(G(\breve{F})/I\) with geometric points
\[X_{x}(b)=\{g\in G(\breve{F})/I\mid g^{-1}b\sigma(g)\in IxI\}.\]
Observe that the isomorphism type of \(X_{x}(b)\) only depends on the \(\sigma\)-conjugacy class
\[[b]=\{g^{-1}b\sigma(g)\mid g\in G(\breve{F})\}\]
and the Iwahori double coset \(IxI\subseteq G(\breve{F})\). These Iwahori double cosets are naturally parametrized by the extended affine Weyl group \(\widetilde{W}\) of \(G\), and we get
\[G(\breve{F})=\bigsqcup_{x\in\widetilde{W}}I\dot{x}I.\]
Many geometric properties of the double cosets \(I\dot{x}I\) for various \(x\in\widehat{W}\) can be understood via the corresponding Iwahori-Hecke algebra \(\mathcal{H}=\mathcal{H}(\widehat{W})\). This algebra and its representation theory received tremendous interest since the discovery of the Satake isomorphism [10]. There are a few different and mostly equivalent constructions of this algebra in use. For now, we summarize that this is an algebra over a suitable base field or ring with a basis given by formal variables \(T_{x}\) for \(x\in\widehat{W}\). The element \(T_{x}\in\mathcal{H}\) can be thought of as the representation-theoretic analogue of the Iwahori double coset \(IxI\subseteq G(\breve{F})\). E.g. if \(x,y\in\widehat{W}\), we can write
\[IxI\cdot IyI=\bigcup_{z}IzI\]
where the union is taken over all \(z\in\widehat{W}\) such that the \(T_{z}\)-coefficient of \(T_{x}T_{y}\in\mathcal{H}\) is non-zero. For a general overview over the structure theory of Iwahori-Hecke algebras and its applications to the geometry of the affine flag variety, we refer to [1].
The set of \(\sigma\)-conjugacy classes \(B(G)=\{[b]\mid b\in G(\breve{F})\}\) is the second main object of interest in the definition of affine Deligne-Lusztig varieties. It is a celebrated result of Kottwitz [13, 14] that each \(\sigma\)-conjugacy class \([b]\) is uniquely determined by two invariants, known as its Newton point and its Kottwitz point. From He [1, Theorem 3.7], we get a parametrization of \(B(G)\) using the extended affine Weyl group \(\widehat{W}\). For each \(x\in\widehat{W}\), consider its \(\sigma\)-conjugacy class in \(\widehat{W}\), denoted
\[\mathcal{O}=\{y^{-1}x\sigma(y)\mid y\in\widehat{W}\}.\]
Two elements that are \(\sigma\)-conjugate in \(\widetilde{W}\) will also be \(\sigma\)-conjugate in \(G(\breve{F})\), but the converse does not hold true in general. We obtain a surjective but not injective map
\[\{\sigma\text{-conjugacy classes }\mathcal{O}\subseteq\widehat{W}\}\to B(G),\]
sending \(\mathcal{O}\) to \([\dot{x}]\in B(G)\) for any \(x\in\mathcal{O}\).
The analogous construction in the Iwahori-Hecke algebra is the formation of a \(\sigma\)-twisted cocenter, i.e. the quotient of \(\mathcal{H}\) by the submodule \([\mathcal{H},\mathcal{H}]_{\sigma}\) generated by
\[[h,h^{\prime}]_{\sigma}=hh^{\prime}-h^{\prime}\sigma(h),\qquad h,h^{\prime} \in\mathcal{H}.\]
An important result of He-Nie [12, Theorem C] gives a full description of this cocenter. For each \(\sigma\)-conjugacy class \(\mathcal{O}\subseteq\widehat{W}\) and any two elements of minimal length \(x_{1},x_{2}\in\mathcal{O}\), they prove that the images of \(T_{x_{1}}\) and \(T_{x_{2}}\) in the cocenter of \(\mathcal{H}\) agree. Denoting the common image by \(T_{\mathcal{O}}\), they prove moreover that these \(T_{\mathcal{O}}\) form a basis of the cocenter, parametrized by all \(\sigma\)-conjugacy classes \(\mathcal{O}\subseteq\widehat{W}\).
With these preferred bases \(\{T_{x}\}\) of \(\mathcal{H}\) and \(\{T_{\mathcal{O}}\}\) of the quotient, we obtain structure constants expressing the image of each \(T_{x}\) in the cocenter as linear combination of the \(T_{\mathcal{O}}\)'s. These are known as class polynomials, so we write
\[T_{x}\equiv\sum_{\begin{subarray}{c}\mathcal{O}\subseteq\widehat{W}\\ \sigma\text{-conj. class}\end{subarray}}f_{x,\mathcal{O}}T_{\mathcal{O}}\pmod{[ \mathcal{H},\mathcal{H}]_{\sigma}}.\]
These representation-theoretic structure constants are often hard to determine. However, they are very useful for studying affine Deligne-Lusztig varieties, especially the following main three questions:
1. When is \(X_{x}(b)\) empty? Equivalently, when is the Newton stratum empty?
2. If \(X_{x}(b)\neq\emptyset\), what is its dimension?
3. How many top dimensional irreducible components, up to the action of the \(\sigma\)-centralizer of \(b\), does \(X_{x}(b)\) have?
It is an important result of He that these main questions can be fully answered in terms of the class polynomials, cf. [10, Theorem 6.1] and [10, Theorem 2.19]. The class polynomials can moreover be used to count rational points on Newton strata, cf. [11, Proposition 3.7].
In the previous article [11], we showed that the same main questions can also be answered, in some cases, using the combinatorial notion of a double Bruhat graph. This is an explicitly described finite graph, introduced by Naito-Watanabe [12, Section 5.1] in order to describe periodic \(R\)-polynomials. Following a result of Gortz-Haines-Kottwitz-Reumann [1, Section 6] comparing affine Deligne-Lusztig varieties with certain intersections in the affine flag variety, we showed that the double Bruhat graph appears naturally as a way to encode certain subvarieties of the affine flag variety.
Write \(x=wt^{\mu}\in\widetilde{W},v\in W\), and assume that a regularity condition of the form
\[\forall\alpha\in\Phi^{+}:\ \langle v^{-1}\mu,\alpha\rangle\gg\langle\mu^{ \operatorname{dom}}-\nu(b),2\rho\rangle\]
is satisfied. Assume moreover that the group \(G\) is split over \(F\). Then [11, Corollary 6.9] shows that the questions of nonemptiness, dimension and top dimensional irreducible components are determined by the set of paths from \(v\) to \(wv\) in the double Bruhat graph that are increasing with respect to some fixed reflection order \(\prec\) and of weight \(\mu^{\operatorname{dom}}-\nu(b)\). Our first main result states that this set of paths determines the full class polynomial, and that the assumption of a split group can be removed.
**Theorem 1.1** (Cf. Theorem 5.10).: _Let \(x=w\varepsilon^{\mu}\in\widetilde{W},v\in W\) and \(\mathcal{O}\subseteq\widetilde{W}\) such that a regularity condition of the form_
\[\forall\alpha\in\Phi^{+}:\ \langle v^{-1}\mu,\alpha\rangle\gg\langle\mu^{ \operatorname{dom}}-\nu(\mathcal{O}),2\rho\rangle\]
_is satisfied. Then the class polynomial \(f_{x,\mathcal{O}}\) can be expressed in terms of paths in the double Bruhat graph from \(v\) to \(\sigma(wv)\) that are increasing with respect to some fixed reflection order. For a suitable parametrization of the Iwahori-Hecke algebra as an algebra over the polynomial ring \(\mathbb{Z}[Q]\) (Definition 5.1), the class polynomial is explicitly given by_
\[f_{x,\mathcal{O}}=\sum_{p}Q^{\ell(p)},\]
_where the sum is taken over all paths \(p\) in the double Bruhat graph from \(v\) to \(\sigma(wv)\) that are increasing with respect to some fixed reflection order and such that \(\nu(\mathcal{O})\) is the \(\sigma\)-average of \(v^{-1}\mu-\operatorname{wt}(p)\)._
We will prove Theorem 1.1 as a consequence of the following more fundamental result, computing the structure constants of the multiplication of our standard basis vectors in \(\mathcal{H}\).
**Theorem 1.2** (Cf. Theorem 5.2).: _Let \(x=w_{x}\varepsilon^{\mu_{x}},z=w_{z}\varepsilon^{\mu_{z}}\in\widehat{W}\) and \(v_{z}\in W\) satisfying a regularity condition of the form_
\[\forall\alpha\in\Phi^{+}:\ \langle v_{z}^{-1}\mu_{z},\alpha\rangle\gg\ell(x).\]
_Define polynomials \(\varphi_{x,z,y}\) via_
\[T_{x}T_{z}=\sum_{y\in\widehat{W}}\varphi_{x,z,y}T_{y}\in\mathcal{H}(\widehat{ W}).\]
_Pick an element \(y=w_{y}\varepsilon^{\mu_{y}}\in\widehat{W}\) and \(v_{x}\in W\) such that a regularity condition of the form_
\[\forall\alpha\in\Phi^{+}:\ \langle v_{x}^{-1}\mu_{x},\alpha\rangle\gg\ell(x)+ \ell(z)-\ell(y)\]
_is satisfied. Then we can describe the structure constant \(\varphi_{x,z,y}\) in terms of paths in the double Bruhat graph. Explicitly, we have \(\varphi_{x,z,y}=0\) unless \(w_{y}=(w_{x}v_{x})^{-1}v_{y}\). In this case, we have_
\[\varphi_{x,z,y}=\sum_{p}Q^{\ell(p)},\]
_where the sum is taken over all paths in the double Bruhat graph from \(v_{x}\) to \(w_{z}v_{z}\) that are increasing with respect to some reflection order and of weight_
\[\operatorname{wt}(p)=v_{x}^{-1}\mu_{x}+v_{z}^{-1}\mu_{z}-(w_{z}v_{z})^{-1}\mu _{y}.\]
Theorem 5.2 below actually proves a stronger statement, requiring only a weaker regularity condition of the form
\[\forall\alpha\in\Phi^{+}:\ \langle v^{-1}\mu_{x},\alpha\rangle\gg\ell(x)-\ell(y^{- 1}z).\]
The resulting description of \(\varphi_{x,z,y}\) is more involved however, replacing the single path \(p\) by pairs of bounded paths in the double Bruhat graph. Theorem 1.2 as stated here is sufficient to derive Theorem 1.1.
So under some very strong regularity conditions, the double Bruhat graph may also be used to understand multiplications of Iwahori double cosets \(IxI\cdot IzI\) in \(G(\breve{F})\). Theorems 1.1 and 1.2 give insight in the generic behaviour of class polynomials and products in the Iwahori-Hecke algebra, solving infinitely many previously intractable questions using a finite combinatorial object. From a practical point of view, this allows us to quickly derive many crucial properties of the weight multisets of the double Bruhat graph by referring to known properties of the Iwahori-Hecke algebra or affine Deligne-Lusztig varieties. Using some of the most powerful tools available to describe affine Deligne-Lusztig varieties and comparing them to the double Bruhat graph, we obtain the following result.
**Theorem 1.3** (Cf. Theorem 6.4).: _Let \(x=w\varepsilon^{\mu}\in\widetilde{W}\) and \(v\in W\) satisfying the regularity condition_
\[\forall\alpha\in\Phi^{+}:\ \langle v^{-1}\mu,\alpha\rangle\geqslant 2\operatorname{ rk}(G)+14,\]
_where \(\operatorname{rk}(G)\) is the rank of a maximal torus in the group \(G\)._
_Pick an arbitrary \(\sigma\)-conjugacy class \([b]\in B(G)\). Let \(P\) be the set of all paths \(p\) in the double Bruhat graph from \(v\) to \(\sigma(wv)\) that are increasing with respect to some fixed reflection order such that the \(\lambda\)-invariant of \([b]\) (cf. [1, Section 2]) satisfies_
\[\lambda(b)=v^{-1}\mu-\operatorname{wt}(p).\]
_Then \(P\neq\emptyset\) if and only if \(X_{x}(b)\neq\emptyset\). If \(p\) is a path of maximal length in \(P\), then_
\[\dim X_{x}(b)=\frac{1}{2}\left(\ell(x)+\ell(p)-\langle\nu(b),2\rho\rangle- \operatorname{def}(b)\right).\]
_We give a similar description in terms of the dominant Newton points of \([b]\) rather than the \(\lambda\)-invariant._
Theorem 1.3 gives full answers to the questions (Q1) and (Q2) for arbitrary \([b]\in B(G)\) as long as the element \(x\in\widetilde{W}\) satisfies a somewhat mild regularity condition (being linear in the rank of \(G\)).
The proofs given in this article are mostly combinatorial in nature, and largely independent of its predecessor article [10]. We will rely only on some basic facts on the double Bruhat graph established in [10, Section 5]. The best known ways to compute the structure constants of Theorem 1.2 and the class polynomials \(f_{x,\mathcal{O}}\) are given by certain recursive relations involving simple affine reflections in the extended affine Weyl group. Similarly, the Deligne-Lusztig reduction method of Gortz-He [14, Section 2.5] provides such a recursive method to describe many geometric properties of affine Deligne-Lusztig varieties, in particular the ones studied in this paper series. On the double Bruhat side, these are mirrored by the construction of certain bijections between paths due to Naito-Watanabe [24, Section 3.3]. We recall these bijections and derive the corresponding properties of the weight multisets in Section 4. We study the consequences for the Iwahori-Hecke algebra in Section 5, and the resulting properties of affine Deligne-Lusztig varieties in Section 6.
In Section 7, we finish this series of two papers by listing a number of further-reaching conjectures, predicting a relationship between the geometry of affine Deligne-Lusztig varieties and paths in the double Bruhat graph in various cases. These conjectures are natural generalizations of our results, and withstand an extensive computer search for counterexamples.
Recall that our main goal is to find and prove a description of the geometry of affine Deligne-Lusztig varieties in the affine flag variety that is as concise and precise as the known analogous statements for the affine Grassmannian (as summarized in [10, Theorem 1.1]). Our conjectures and partial results towards proving them suggest that the language of the double Bruhat graph is very useful for this task, and might even be the crucial missing piece towards a full description.
We would like to remark that once a conjecture is found that describes the geometry of \(X_{x}(b)\) for arbitrary \(x,b\) in terms of the double Bruhat graph, a proof of such a conjecture might simply consist of a straightforward comparison of the Deligne-Lusztig reduction method due to Gortz-He [1] with the analogous recursive relations of the double Bruhat graph that are discussed in this article.
## 2 Acknowledgements
The author was partially supported by the German Academic Scholarship Foundation, the Marianne-Plehn programme, the DFG Collaborative Research Centre 326 _GAUS_ and the Chinese University of Hong Kong. I thank Eva Viehmann, Xuhua He and Quingchao Yu for inspiring discussions, and Eva Viehmann again for her comments on a preliminary version of this article.
## 3 Notation
We fix a non-archimedian local field \(F\) whose completion of the maximal unramified extension will be denoted \(\breve{F}\). We write \(\mathcal{O}_{F}\) and \(\mathcal{O}_{\breve{F}}\) for the respective rings of integers. Let \(\varepsilon\in F\) be a uniformizer. The Galois group \(\Gamma=\operatorname{Gal}(\breve{F}/F)\) is generated by the Frobenius \(\sigma\).
In the context of Shimura varieties, one would choose \(F\) to be a finite extension of the \(p\)-adic numbers. When studying moduli spaces of shutkas, \(F\) would be the field of Laurent series over a finite field.
In any case, we fix a reductive group \(G\) over \(F\). Via [1, Section 2], we may reduce questions regarding affine Deligne-Lusztig varieties of \(G\) to the case of a quasi-split group. In order to minimize the notational burden, we assume that the group \(G\) is quasi-split throughout this paper.
We construct its associated affine root system and affine Weyl group following Haines-Rapoport [11] and Tits [12].
Fix a maximal \(\breve{F}\)-split torus \(T_{\breve{F}}\subseteq G_{\breve{F}}\) and write \(T\) for its centralizer in \(G_{\breve{F}}\), so \(T\) is a maximal torus of \(G_{\breve{F}}\). Write \(\mathcal{A}=\mathcal{A}(G_{\breve{F}},T_{\breve{F}})\) for the apartment of the Bruhat-Tits building of \(G_{\breve{F}}\) associated with \(S\). We pick a \(\sigma\)-invariant alcove \(\mathfrak{a}\) in \(\mathcal{A}\). Its stabilizer is a \(\sigma\)-invariant Iwahori subgroup \(I\subset G(\breve{F})\).
Denote the normalizer of \(T\) in \(G\) by \(N(T)\). Then the quotient
\[\widehat{W}=N_{G}(T)(\breve{F})/(T(\breve{F})\cap I)\]
is called _extended affine Weyl group_, and \(W=N_{G}(T)(\breve{F})/T(\breve{F})\) is the _(finite) Weyl group_. The Weyl group \(W\) is naturally a quotient of \(\widehat{W}\). We denote the Frobenius action on \(W\) and \(\widehat{W}\) by \(\sigma\) as well.
The affine roots as constructed in [12, Section 1.6] are denoted \(\Phi_{\mathrm{af}}\). Each of these roots \(a\in\Phi_{\mathrm{af}}\) defines an affine function \(a:\mathcal{A}\to\mathbb{R}\). The vector part of this function is denoted \(\operatorname{cl}(a)\in V^{*}\), where \(V=X_{*}(S)\otimes\mathbb{R}=X_{*}(T)_{\Gamma_{0}}\otimes\mathbb{R}\). Here, \(\Gamma_{0}=\operatorname{Gal}(\overline{F}/\breve{F})\) is the
absolute Galois group of \(\tilde{F}\), i.e. the inertia group of \(\Gamma=\operatorname{Gal}(\overline{F}/F)\). The set of _(finite) roots_ is1\(\Phi:=\operatorname{cl}(\Phi_{\operatorname{af}})\).
Footnote 1: This is different from the root system that [10] and [11] denote by \(\Phi\); it coincides with the root system called \(\Sigma\) in [11].
Each affine root in \(\Phi_{\operatorname{af}}\) divides the standard apartment into two half-spaces, one being the positive and one the negative side. Those affine roots are where our fixed alcove \(\mathfrak{a}\) is on the positive side are called _positive affine roots_. If moreover the alcove \(\mathfrak{a}\) is adjacent to the root hyperplane, it is called _simple affine root_. We denote the sets of simple resp. positive affine roots by \(\Delta_{\operatorname{af}}\subseteq\Phi_{\operatorname{af}}^{+}\subseteq \Phi_{\operatorname{af}}\).
Writing \(W_{\operatorname{af}}\) for the extended affine Weyl group of \(G\), we get a natural \(\sigma\)-equivariant short exact sequence (cf. [11, Lemma 14])
\[1\to W_{\operatorname{af}}\to\widehat{W}\to\pi_{1}(G)_{\Gamma_{0}}\to 1.\]
Here, \(\pi_{1}(G):=X_{*}(T)/\mathbb{Z}\Phi^{\vee}\) denotes the Borovoi fundamental group.
For each \(x\in\widehat{W}\), we denote by \(\ell(x)\in\mathbb{Z}_{\geq 0}\) the length of a shortest alcove path from \(\mathfrak{a}\) to \(x\mathfrak{a}\). The elements of length zero are denoted \(\Omega\). The above short exact sequence yields an isomorphism of \(\Omega\) with \(\pi_{1}(G)_{\Gamma_{0}}\), realizing \(\widehat{W}\) as semidirect product \(\widehat{W}=\Omega\ltimes W_{\operatorname{af}}\).
Each affine root \(a\in\Phi_{\operatorname{af}}\) defines an affine reflection \(r_{a}\) on \(\mathcal{A}\). The group generated by these reflections is naturally isomorphic to \(W_{\operatorname{af}}\) (cf. [11]), so by abuse of notation, we also write \(r_{a}\in W_{\operatorname{af}}\) for the corresponding element. We define \(S_{\operatorname{af}}:=\{r_{a}\mid a\in\Delta_{\operatorname{af}}\}\), called the set of _simple affine reflections_. The pair \((W_{\operatorname{af}},S_{\operatorname{af}})\) is a Coxeter group with length function \(\ell\) as defined above.
We pick a special vertex \(\mathfrak{x}\in\mathcal{A}\) that is adjacent to \(\mathfrak{a}\). Since we assumed \(G\) to be quasi-split, we may and do choose \(\mathfrak{x}\) to be \(\sigma\)-invariant. We identify \(\mathcal{A}\) with \(V\) via \(\mathfrak{x}\mapsto 0\). This allows us to decompose \(\Phi_{\operatorname{af}}=\Phi\times\mathbb{Z}\), where \(a=(\alpha,k)\) corresponds to the function
\[V\to\mathbb{R},v\mapsto\alpha(v)+k.\]
From [11, Proposition 13], we moreover get decompositions \(\widehat{W}=W\ltimes X_{*}(T)_{\Gamma_{0}}\) and \(W_{\operatorname{af}}=W\ltimes\mathbb{Z}\Phi^{\vee}\). Using this decomposition, we write elements \(x\in\widehat{W}\) as \(x=w\varepsilon^{\mu}\) with \(w\in W\) and \(\mu\in X_{*}(T)_{\Gamma_{0}}\). For \(a=(\alpha,k)\in\Phi_{\operatorname{af}}\), we have \(r_{a}=s_{\alpha}\varepsilon^{k\alpha^{\vee}}\in W_{\operatorname{af}}\), where \(s_{\alpha}\in W\) is the reflection associated with \(\alpha\). The natural action of \(\widehat{W}\) on \(\Phi_{\operatorname{af}}\) can be expressed as
\[(w\varepsilon^{\mu})(\alpha,k)=(w\alpha,k-\langle\mu,\alpha\rangle).\]
We define the _dominant chamber_\(C\subseteq V\) to be the Weyl chamber containing our fixed alcove \(\mathfrak{a}\). This gives a Borel subgroup \(B\subseteq G\), and corresponding sets of positive/negative/simple roots \(\Phi^{+},\Phi^{-},\Delta\subseteq\Phi\).
By abuse of notation, we denote by \(\Phi^{+}\) also the indicator function of the set of positive roots, i.e.
\[\forall\alpha\in\Phi:\ \Phi^{+}(\alpha)=\begin{cases}1,&\alpha\in\Phi^{+},\\ 0,&\alpha\in\Phi^{-}.\end{cases}\]
The sets of positive and negative affine roots can be expressed as
\[\Phi^{+}_{\mathrm{af}}= (\Phi^{+}\times\mathbb{Z}_{\geq 0})\sqcup(\Phi^{-}\times\mathbb{Z}_{ \geq 1})=\{(\alpha,k)\in\Phi_{\mathrm{af}}\mid k\geq\Phi^{+}(-\alpha)\},\] \[\Phi^{-}_{\mathrm{af}}= -\Phi^{+}_{\mathrm{af}}=\Phi_{\mathrm{af}}\backslash\Phi^{+}_{ \mathrm{af}}=\{(\alpha,k)\in\Phi_{\mathrm{af}}\mid k<\Phi^{+}(-\alpha)\}.\]
One checks that \(\Phi^{+}_{\mathrm{af}}\) are precisely the affine roots that are sums of simple affine roots.
Decompose \(\Phi\) as a direct sum of irreducible root systems, \(\Phi=\Phi_{1}\sqcup\cdots\sqcup\Phi_{c}\). Each irreducible factor contains a uniquely determined longest root \(\theta_{i}\in\Phi^{+}_{i}\). Now the set of simple affine roots is
\[\Delta_{\mathrm{af}}=\{(\alpha,0)\mid\alpha\in\Delta\}\cup\{(-\theta_{i},1) \mid i=1,\ldots,c\}\subset\Phi^{+}_{\mathrm{af}}.\]
We call an element \(\mu\in X_{*}(T)_{\Gamma_{0}}\otimes\mathbb{Q}\)_dominant_ if \(\langle\mu,\alpha\rangle\geq 0\) for all \(\alpha\in\Phi^{+}\). Similarly, we call it \(C\)-regular for a real number \(C\) if
\[|\langle\mu,\alpha\rangle|\geq C\]
for each \(\alpha\in\Phi^{+}\). If \(\mu\in X_{*}(T)_{\Gamma_{0}}\) is dominant, then the Newton point of \(\varepsilon^{\mu}\in\widehat{W}\) is given by the \(\sigma\)-average of \(\mu\), defined as
\[\mathrm{avg}_{\sigma}(\mu)=\frac{1}{N}\sum_{i=1}^{N}\sigma^{i}(\mu),\]
where \(N>0\) is any integer such that the action of \(\sigma^{N}\) on \(X_{*}(T)_{\Gamma_{0}}\) is trivial.
An element \(x=w\varepsilon^{\mu}\in\widehat{W}\) is called \(C\)-regular if \(\mu\) is. We write \(\mathrm{LP}(x)\subseteq W\) for the set of length positive elements as introduced in [14, Section 2.2]. If \(x\) is \(2\)-regular, then \(\mathrm{LP}(x)\) consists only of one element, namely the uniquely determined \(v\in W\) such that \(v^{-1}\mu\) is dominant.
For elements \(\mu,\mu^{\prime}\) in \(X_{*}(T)_{\Gamma_{0}}\otimes\mathbb{Q}\) (resp. \(X_{*}(T)_{\Gamma_{0}}\) or \(X_{*}(T)_{\Gamma}\)), we write \(\mu\leq\mu^{\prime}\) if the difference \(\mu^{\prime}-\mu\) is a \(\mathbb{Q}_{\geq 0}\)-linear combination of positive coroots.
## 4 Double Bruhat graph
We recall the definition of the double Bruhat graph following Naito-Watanabe [13, Section 5.1]. It turns out that the paths we studied in order to understand affine Deligne-Lusztig varieties are a certain subset of the paths studied by Naito-Watanabe in order to study Kazhdan-Lusztig theory, or more precisely periodic \(R\)-polynomials.
**Definition 4.1**.: Let \(<\) be a reflection order on \(\Phi^{+}\), and write \(\Phi^{+}=\{\beta_{1}<\cdots<\beta_{\#\Phi^{+}}\}\). Let moreover \(v,w\in W\).
1. The _double Bruhat graph_\(\mathrm{DBG}(W)\) is a finite directed graph. Its set of vertices is \(W\). For each \(w\in W\) and \(\alpha\in\Phi^{+}\), there is an edge \(w\xrightarrow{\alpha}ws_{\alpha}\).
2. A _non-labelled path_\(\overline{p}\) in \(\operatorname{DBG}(W)\) is a sequence of adjacent edges \[\overline{p}:v=u_{1}\xrightarrow{\alpha_{1}}u_{2}\xrightarrow{\alpha_{2}} \cdots\xrightarrow{\alpha_{\ell}}u_{\ell+1}=w.\] We call \(\overline{p}\) a non-labelled path from \(v\) to \(w\) of length \(\ell(\overline{p})=\ell\). We say \(\overline{p}\) is _increasing_ with respect to \(<\) if \(\alpha_{1}<\cdots<\alpha_{\ell}\). In this case, we moreover say that \(\overline{p}\)_is bounded by_\(n\in\mathbb{Z}\) if \(\alpha_{\ell}=\beta_{i}\) for some \(i\leq n\).
3. A _labelled path_ or _path_\(p\) in \(\operatorname{DBG}(W)\) consists of an unlabelled path \[\overline{p}:v=u_{1}\xrightarrow{\alpha_{1}}u_{2}\xrightarrow{\alpha_{2}} \cdots\xrightarrow{\alpha_{\ell}}u_{\ell+1}=w\] together with integers \(m_{1},\dots,m_{\ell}\in\mathbb{Z}\) subject to the condition \[m_{i}\geq\Phi^{+}(-u_{i}\alpha_{i})=\begin{cases}0,&\ell(u_{i+1})>\ell(u_{i}),\\ 1,&\ell(u_{i+1})<\ell(u_{i}).\end{cases}\] We write \(p\) as \[p:v=u_{1}\xrightarrow{(\alpha_{1},m_{1})}u_{2}\xrightarrow{(\alpha_{2},m_{2} )}\cdots\xrightarrow{(\alpha_{\ell},m_{\ell})}u_{\ell+1}=w.\] The _weight_ of \(p\) is \[\operatorname{wt}(p)=m_{1}\alpha_{1}^{\vee}+\cdots+m_{\ell}\alpha_{\ell}^{\vee }\in\mathbb{Z}\Phi^{\vee}.\] The _length_ of \(p\) is \(\ell(p)=\ell(\overline{p})=\ell\). We say that \(p\) is _increasing_ with respect to \(<\) if \(\overline{p}\) is. In this case, we say that \(p\) is bounded by \(n\in\mathbb{Z}\) if \(\overline{p}\) is.
4. The set of all paths from \(v\) to \(w\) that are increasing with respect to \(<\) and bounded by \(n\in\mathbb{Z}\) is denoted \(\operatorname{paths}_{\leq n}^{<}(v\Rightarrow w)\). We also write \[\operatorname{paths}^{<}(v\Rightarrow w)=\operatorname{paths}_{\leq\#\Phi^{+} }^{<}(v\Rightarrow w).\] We will frequently use the immediate properties of these paths as developed in [1, Section 5]. For this section, our main result describe how these paths behave with respect to certain simple affine reflections. Fix a reflection order \[\Phi^{+}=\{\beta_{1}<\cdots<\beta_{\#\Phi^{+}}\}\] and write \[\pi_{>n}=s_{\beta_{n+1}}\cdots s_{\beta_{\#\Phi^{+}}}\in W\] as in [1, Definition 5.9].
**Theorem 4.2**.: _Let \(u,v\in W\) and \(n\in\{0,\dots,\#\Phi^{+}\}\). Pick a simple affine root \(a=(\alpha,k)\in\Delta_{\mathrm{af}}\) such that \((v\pi_{>n})^{-1}\alpha\in\Phi^{-}\)._
1. _If_ \(u^{-1}\alpha\in\Phi^{-}\)_, then there exists an explicitly described bijection of paths_ \[\psi:\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow s_{\alpha}v)\to\mathrm{ paths}^{<}_{\leq n}(u\Rightarrow v)\] _satisfying for each_ \(p\in\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow s_{\alpha}v)\) _the conditions_ \[\ell(\psi(p))=\ell(p),\qquad\mathrm{wt}(\psi(p))=\mathrm{wt}(p)+k(v^{-1}\alpha^ {\vee}-u^{-1}\alpha^{\vee}).\]
2. _If_ \(u^{-1}\alpha\in\Phi^{+}\)_, then there exists an explicitly described bijection of paths_ \[\varphi:\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow s_{\alpha}v)\sqcup \mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow v)\to\mathrm{paths}^{<}_{ \leq n}(u\Rightarrow v)\] _satisfying for each_ \(p\in\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow s_{\alpha}v)\) _and_ \(p^{\prime}\in\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow v)\) _the conditions_ \[\ell(\varphi(p))= \ell(p),\qquad\mathrm{wt}(\varphi(p))=\mathrm{wt}(p)+k(v^{-1} \alpha^{\vee}-u^{-1}\alpha^{\vee}),\] \[\ell(\varphi(p^{\prime}))= \ell(p^{\prime})+1,\qquad\mathrm{wt}(\varphi(p^{\prime}))= \mathrm{wt}(p^{\prime})-ku^{-1}\alpha^{\vee}.\]
The proof of this theorem can essentially be found in Section 3.3 of [14], which is a rather involved and technical construction. One may obtain a weaker version of Theorem 4.2 by comparing the action of simple affine reflections on semi-infinite orbits with [11, Theorem 5.5]. While such a weaker result would be sufficient for our geometric applications, we do need the full strength of Theorem 4.2 for our conclusions on the Iwahori-Hecke algebra. Moreover, we would like to explain the connection between our paper and [14]. Let us hence recall some of the notation used by Naito-Watanabe:
**Definition 4.3**.:
1. By \(\leq_{\frac{\alpha}{2}}\), we denote the semi-infinite order on \(\widetilde{W}\) as introduced by Lusztig [15]. It is generated by inequalities of the form \[w\varepsilon^{\mu}<_{\frac{\alpha}{2}}r_{(\alpha,k)}w\varepsilon^{\mu}\] where \((\alpha,k)\in\Phi^{+}_{\mathrm{af}}\), \(w\in W\) and \(\mu\in X_{*}(T)_{\Gamma_{0}}\) satisfy \(w^{-1}\alpha\in\Phi^{+}\).
2. For \(w,y\in\widetilde{W}\), we denote by \(P^{<}_{r}(y,w)\) the set of paths in \(\widetilde{W}\) of the form \[\Delta:\ y=y_{1}\xrightarrow{(\beta_{1},m_{1})}y_{2}\xrightarrow{(\beta_{2},m _{2})}\cdots\xrightarrow{(\beta_{\ell},m_{\ell})}y_{\ell+1}=w\] such that the following two conditions are both satisfied: * For each \(i=1,\ldots,\ell\), we have \(y_{i+1}>_{\frac{\alpha}{2}}y_{i}\). Writing \(y_{i}=w_{i}\varepsilon^{\mu_{i}}\), we have \[y_{i+1}=w_{i}s_{\beta_{i}}\varepsilon^{\mu_{i}+m_{i}\beta^{\vee}_{i}}.\] * The roots \(\beta_{i}\) are all positive and satisfy \(\beta_{1}<\cdots<\beta_{\ell}\).
We denote the number of edges in \(\Delta\) by \(\ell(\Delta):=\ell\).
These paths \(P_{r}^{<}(\cdot,\cdot)\) occur with exactly the same name in the article of Naito-Watanabe, and are called translation-free paths. They also consider a larger set of paths, where so-called translation edges are allowed, which is however less relevant for our applications.
From the definition of the semi-infinite order, we easily obtain the following relation between the paths in \(\widetilde{W}\) and the paths in the double Bruhat graph. This can be seen as a variant of [11, Proposition 5.2.1].
**Lemma 4.4**.: _Let \(y=w_{1}\varepsilon^{\mu_{1}},w=w_{2}\varepsilon^{\mu_{2}}\in\widetilde{W}\). Then the map_
\[\Psi: P_{r}^{<}(y,w)\to\{p\in\mathrm{paths}^{<}(w_{1}\Rightarrow w_{2}) \mid\mathrm{wt}(p)=\mu_{2}-\mu_{1}\},\] \[\left(\Delta:y=y_{0}\xrightarrow{(\beta_{1},m_{1})}y_{1} \xrightarrow{(\beta_{2},m_{2})}\cdots\xrightarrow{(\beta_{\ell},m_{\ell})}y_{ \ell+1}=w\right)\] \[\mapsto\left(\Phi(\Delta):\ w_{1}=\mathrm{cl}(y_{0})\xrightarrow{ (\beta_{1},m_{1})}\mathrm{cl}(y_{1})\xrightarrow{(\beta_{2},m_{2})}\cdots \xrightarrow{(\beta_{\ell},m_{\ell})}\mathrm{cl}(y_{\ell+1})\right)\]
_is bijective and length-preserving (i.e. \(\ell(\Psi(\Delta))=\ell(\Delta)\)). _
The main results of [11, Section 3.3] can be summarized as follows.
**Theorem 4.5**.: _Let \(y,w\in\widetilde{W}\) and pick a simple affine reflection \(s\in S_{\mathrm{af}}\) such that \(y<_{\frac{\infty}{2}}sy\) and \(sw<_{\frac{\infty}{2}}w\)._
1. _[_1_, Proposition 3.3.2]_: _There is an explicitly described bijection_ \[\psi:P_{r}^{<}(y,sw)\to P_{r}^{<}(sy,w).\] _The map_ \(\psi\) _preserves the lengths of paths. Its inverse map_ \(\psi^{\prime}=\psi^{-1}\) _is also explicitly described._
2. _[_1_, Proposition 3.3.1]_: _There is an explicitly described bijection_ \[\varphi:P_{r}^{<}(sy,sw)\sqcup P_{r}^{<}(sy,w)\to P_{r}^{<}(y,w).\] _For_ \(\Delta\in P_{r}^{<}(sy,sw)\)_, we have_ \(\ell(\varphi(\Delta))=\ell(\Delta)\)_. For_ \(\Delta\in P_{r}^{<}(sy,w)\)_, we have_ \(\ell(\varphi(\Delta))=\ell(\Delta)+1\)_. Its inverse map_ \(\varphi^{\prime}=\varphi^{-1}\) _is also explicitly described._
In view of Lemma 4.4, we immediately get the special case of Theorem 4.2 for the sets \(\mathrm{paths}^{<}(u\Rightarrow v)\), i.e. if \(n=\#\Phi^{+}\). By inspecting the proof and the explicit constructions involved in the proof of Theorem 4.5, we will obtain the full statement of Theorem 4.2. In order to facilitate this task, we introduce a technique that we call "path padding".
**Definition 4.6**.: Let \(u,v\in W\) and \(0\leqslant n\leqslant\#\Phi^{+}\). Fix positive integers \(m_{i}\) for \(i=1,\ldots,\#\Phi^{+}\). Then we define the _padding map_
\[\mathrm{pad}_{(m_{i})}:\mathrm{paths}_{\leq n}^{<}(u\Rightarrow v)\to \mathrm{paths}^{<}(u\Rightarrow v\pi_{>n}),\]
sending a path \(p\in\mathrm{paths}_{\leq n}^{<}(u\Rightarrow v)\) to the composite path
\[\mathrm{pad}_{(m_{i})}(p):u\xRightarrow v \xrightarrow{(\beta_{n+1},m_{n+1})}vs_{\beta_{n+1}} \xrightarrow{(\beta_{n+2},m_{n+2})}\cdots\] \[\cdots\xrightarrow{(\beta_{\#\Phi^{+}},m_{\#\Phi^{+}})}vs_{ \beta_{n+1}}\cdots s_{\beta_{\#\Phi^{+}}}=v\pi_{\succ n}.\]
**Lemma 4.7**.: _Let \(u,v\in W\) and \(0\leq n\leq\#\Phi^{+}\). Pick a simple affine root \(a=(\alpha,k)\in\Delta_{\mathrm{af}}\) such that \((v\pi_{\succ n})^{-1}\alpha\in\Phi^{-}\)._
1. _Suppose that_ \(u^{-1}\alpha\in\Phi^{-}\)_. For each collection of integers_ \((m_{i}\geq 4)_{1\leq i\leq\#\Phi^{+}}\)_, there is a unique map_ \[\tilde{\psi}:\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow s_{\alpha}v) \rightarrow\mathrm{paths}^{<}_{\leq n}(u\Rightarrow v)\] _and a collection of integers_ \((m^{\prime}_{i}\geq m_{i}-3)_{1\leq i\leq\#\Phi^{+}}\) _such that the following diagram commutes:_ \[\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow s_{\alpha}v) \stackrel{{\mathrm{pad}_{(m_{i})}}}{{\leftrightarrow}} \mathrm{paths}^{<}(s_{\alpha}u\Rightarrow s_{\alpha}v\pi_{\succ n})\stackrel{{ \Psi^{-1}}}{{\rightarrow}}\bigsqcup_{\mu\in\mathbb{Z}\Phi^{ \vee}}P^{<}_{r}(r_{a}u,r_{a}v\pi_{\succ n}\varepsilon^{\mu})\] \[\mathrm{paths}^{<}_{\leq n}(u\Rightarrow v)\stackrel{{ \mathrm{pad}_{(m^{\prime}_{i})}}}{{\leftrightarrow}}\mathrm{paths}^{<}(u \Rightarrow v\pi_{\succ n})\stackrel{{\Psi^{-1}}}{{\rightsquigarrow}} \bigsqcup_{\mu\in\mathbb{Z}\Phi^{\vee}}P^{<}_{r}(u,v\pi_{\succ n} \varepsilon^{\mu}).\] _The map_ \(\psi\) _on the right comes from Theorem_ 4.5 _(a). The map_ \(\tilde{\psi}\) _has an explicit description independent of the integers_ \((m_{i})\)_. Moreover,_ \(\tilde{\psi}\) _satisfies the weight and length constraints as required in Theorem_ 4.2 _(a)._ _Similarly, there exist integers_ \((m^{\prime\prime}_{i}\geq m_{i}-3)_{i}\) _and a uniquely determined and explicitly described map_ \(\tilde{\psi}^{\prime}\) _making the following diagram commute:_ \[\mathrm{paths}^{<}_{\leq n}(u\Rightarrow v)\stackrel{{ \mathrm{pad}_{(m_{i})}}}{{\longleftrightarrow}} paths^{<}(u\Rightarrow v\pi_{\succ n})\stackrel{{\Psi^{-1}}}{{ \rightsquigarrow}}\bigsqcup_{\mu\in\mathbb{Z}\Phi^{\vee}}P^{<}_{r}(u,v\pi_{ \succ n}\varepsilon^{\mu})\] \[\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u\Rightarrow s_{\alpha}v) \stackrel{{\mathrm{pad}_{(m^{\prime\prime}_{i})}}}{{ \leftrightarrow}}\mathrm{paths}^{<}(s_{\alpha}u\Rightarrow s_{\alpha}v\pi_{ \succ n})\stackrel{{\Psi^{-1}}}{{\rightarrow}}\bigsqcup_{\mu\in \mathbb{Z}\Phi^{\vee}}P^{<}_{r}(r_{a}u,r_{a}v\pi_{\succ n}\varepsilon^{\mu}).\]
2. _Suppose that_ \(u^{-1}\alpha\in\Phi^{+}\)_. For each collection of integers_ \((m_{i}\geq 4)_{1\leq i\leq\#\Phi^{+}}\)_, the explicitly described maps_ \[\varphi_{1}:\bigsqcup_{\mu\in\mathbb{Z}\Phi^{\vee}}P^{<}_{r}(r_{a }u,r_{a}v\pi_{\succ n}\varepsilon^{\mu})\rightarrow\bigsqcup_{\mu\in\mathbb{ Z}\Phi^{\vee}}P^{<}_{r}(u,v\pi_{\succ n}\varepsilon^{\mu}),\] \[\varphi_{2}:\bigsqcup_{\mu\in\mathbb{Z}\Phi^{\vee}}P^{<}_{r}(r_{a }u,v\pi_{\succ n}\varepsilon^{\mu})\rightarrow\bigsqcup_{\mu\in\mathbb{Z} \Phi^{\vee}}P^{<}_{r}(u,v\pi_{\succ n}\varepsilon^{\mu}),\] \[\varphi^{\prime}:\bigsqcup_{\mu\in\mathbb{Z}\Phi^{\vee}}P^{<}_{r}(u,v\pi_{\succ n}\varepsilon^{\mu})\rightarrow\bigsqcup_{\mu\in\mathbb{Z}\Phi^{ \vee}}P^{<}_{r}(r_{a}u,r_{a}v\pi_{\succ n}\varepsilon^{\mu})\sqcup P^{<}_{r}( r_{a}u,v\pi_{\succ n}\varepsilon^{\mu})\] _from Theorem_ 4.5 _(b) can be lifted, up to padding and_ \(\Psi^{-1}\) _as in (a), to uniquely determined maps_ \[\tilde{\varphi}_{1}:\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u \Rightarrow s_{\alpha}v)\rightarrow\mathrm{paths}^{<}_{\leq n}(u \Rightarrow v),\] \[\tilde{\varphi}_{2}:\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u \Rightarrow v)\rightarrow\mathrm{paths}^{<}_{\leq n}(u\Rightarrow v),\] \[\tilde{\varphi}^{\prime}:\mathrm{paths}^{<}_{\leq n}(u \Rightarrow v)\rightarrow\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u \Rightarrow s_{\alpha}v)\sqcup\mathrm{paths}^{<}_{\leq n}(s_{\alpha}u \Rightarrow s_{\alpha}v).\]
_All three maps are explicitly described in a way that is independent of the integers_ \(m_{\bullet}\)_. The maps_ \(\varphi_{1}\) _and_ \(\varphi_{2}\) _moreover satisfy the desired length and weight compatibility relations from Theorem_ 4.2 _(b)._
Proof.: We only explain how to obtain the map \(\tilde{\psi}\) from the map \(\psi\), as the other cases are analogous. So pick any path \(p\in\operatorname{paths}_{\leq n}^{<}(s_{\alpha}u\Rightarrow s_{\alpha}v)\). Write it as
\[p:\ s_{\alpha}u=w_{1}\xrightarrow{(\gamma_{1},n_{1})}w_{2}\xrightarrow{( \gamma_{2},n_{2})}\cdots\xrightarrow{(\gamma_{\ell(p)},n_{\ell(p)})}w_{\ell(p) +1}=s_{\alpha}v.\]
Then
\[\operatorname{pad}_{(m_{i})}(p): s_{\alpha}u=w_{1}\xrightarrow{(\gamma_{1},n_{1})}\cdots \xrightarrow{(\gamma_{\ell(p)},n_{\ell(p)})}s_{\alpha}w_{\ell(p)+1}=s_{\alpha}v\] \[\xrightarrow{(\beta_{n+1},m_{n+1})}\cdots\xrightarrow{(\beta_{ \#\Phi^{+}},m_{\#\Phi^{+}})}s_{\alpha}v\pi_{>n}.\]
Define \(\gamma_{\ell(p)+i}=\beta_{n+i}\) and \(n_{\ell(p)+i}=m_{\ell(p)+i}\) for \(i=1,\ldots,\#\Phi^{+}-\ell(p)\). Then we can write
\[\operatorname{pad}_{(m_{i})}(p):s_{\alpha}u=w_{1}\xrightarrow{(\gamma_{1},n_ {1})}\cdots\xrightarrow{(\gamma_{\ell^{\prime}},n_{\ell^{\prime}})}w_{\ell^{ \prime}+1}=v\pi_{>n},\]
such that \(\ell^{\prime}=\ell(p)+(\#\Phi^{+}-n)\).
Writing \(\mu:=\operatorname{wt}(\operatorname{pad}_{(m_{i})}(p))+k\left((v\pi_{>n})^{- 1}\alpha^{\vee}-u^{-1}\alpha^{\vee}\right)\), we may express the path \(\Delta:=\Psi^{-1}(\operatorname{pad}_{(m_{i})}(p))\in P_{r}^{<}\left(r_{a}u,r_ {a}v\pi_{>\beta_{n}}\varepsilon^{\mu}\right)\) as
\[\Delta: r_{a}u=w_{1}\varepsilon^{-kw_{1}^{-1}\alpha^{\vee}}\xrightarrow{( \gamma_{1},n_{1})}w_{2}\varepsilon^{n_{1}\gamma_{1}^{\vee}-kw_{1}^{-1}\alpha^ {\vee}}\xrightarrow{(\gamma_{2},k_{2})}\] \[\ldots\xrightarrow{(\gamma_{\ell^{\prime}},n_{\ell^{\prime}})}w _{\ell^{\prime}+1}\varepsilon^{\operatorname{wt}(\operatorname{pad}_{(m_{i})} (p))-kw_{1}^{-1}\alpha^{\vee}}=r_{a}v\pi_{>n}\varepsilon^{\mu}.\]
We now apply the map \(\psi\) as defined in [17, Section 3.3]. For this, we need to determine the set
\[D_{r_{a}}(\Delta)=\{d\in\{1,\ldots,\ell^{\prime}\}\mid(\alpha,k)=(w_{d}^{-1} \gamma_{d},n_{d})\}.\]
Since \(m_{i}\geqslant 4\) for all \(i\), we get
\[D_{r_{a}}(\Delta)=\{d\mid d\in\{1,\ldots,\ell(p)\}\text{ and }(\alpha,k)=(w_{d}^{-1} \gamma_{d},n_{d})\}\subseteq[1,\ell(p)].\]
In particular, the set \(D_{r_{a}}(\Delta)\) depends only on \(p\) and not the integers \((m_{i})\).
Naito-Watanabe construct the path \(\psi(\Delta)\) as follows: Write \(D_{r_{a}}(\Delta)=\{d_{1}<\cdots<d_{m}\}\), which we allow to be the empty set.
For each index \(q\in\{1,\ldots,m\}\), we define \(r_{q}\in\{d_{q}+2,\ldots,d_{q+1}\}\) (where \(d_{m+1}=\ell^{\prime}+1\)) to be the smallest index such that
\[w_{r_{q}}^{-1}\alpha\in\Phi^{+}\text{ and }\gamma_{r_{q}-1}<w_{r_{q}}^{-1} \alpha<\gamma_{r_{q}}.\]
The existence of such an index \(r_{q}\) is proved in [17, Lemma 2.3.2]. For \(i=1,\ldots,\#\Phi^{+}-n\), note that there is no positive root \(\beta\) satisfying \(\gamma_{i}<\beta<\gamma_{i+1}\) (resp. \(\gamma_{\ell^{\prime}}<\beta\) if
\(i=\#\Phi^{+}-n\geqslant 1\)). Hence \(r_{1},\ldots,r_{m}\leqslant n\) and they only depend on the path \(p\), not the integers \(m_{\bullet}\).
We introduce the shorthand notation
\[x_{h}:=w_{h}\varepsilon^{n_{1}\gamma_{1}^{\vee}+\cdots+n_{h-1}\gamma_{h-1}^{ \vee}-kw_{1}^{-1}\alpha^{\vee}},\]
such that \(\Delta\) is of the form \(x_{1}\to\cdots\to x_{\ell^{\prime}+1}\). Then \(\psi(\Delta)\) is defined as the composition of \(\Delta_{0}^{\prime},\ldots,\Delta_{m}^{\prime}\), given by
\[\Delta_{0}^{\prime} :u=r_{a}x_{1}\xrightarrow{(\gamma_{1},n_{1}^{\prime})}r_{a}x_{2 }\xrightarrow{(\gamma_{2},n_{2}^{\prime})}\cdots\xrightarrow{(\gamma_{d_{1}-1 },n_{d_{1}-1}^{\prime})}r_{a}x_{d_{1}},\] \[\Delta_{q}^{\prime} :r_{a}x_{d_{q}}=x_{d_{q}+1}\xrightarrow{(\gamma_{d_{q}+1},n_{d_ {q}+1})}\cdots\xrightarrow{(\gamma_{r_{q}-1},n_{r_{q}-1})}x_{r_{q}} \xrightarrow{(w_{r_{q}}^{-1}\alpha,k)}r_{a}x_{r_{q}}\xrightarrow{(\gamma_{r_ {q}},n_{r_{q}}^{\prime})}\cdots\] \[\xrightarrow{(\gamma_{d_{q+1}-1},n_{d_{q+1}-1}^{\prime})}r_{a}x_ {d_{q+1}},\]
where we write
\[n_{i}^{\prime}:=n_{i}+k\langle\alpha^{\vee},w_{i}\gamma_{i}\rangle,\qquad i=1, \ldots,\ell^{\prime}.\]
Since \(r_{1},\ldots,r_{m}\leqslant n\), we may write \(\psi(\Delta)=\Psi^{-1}(\operatorname{pad}_{(m_{i}^{\prime})}(p^{\prime}))\) with
\[m_{i}^{\prime}=m_{i}-k\langle\alpha^{\vee},vs_{\beta_{n+1}}\cdots s_{\beta_{i-1 }}(\beta_{i})\rangle,\qquad i>n.\]
The path \(p^{\prime}\) is the composition of the paths \(p_{0}^{\prime},\ldots,p_{m}^{\prime}\) defined as
\[p_{0}^{\prime} :u=s_{\alpha}w_{1}\xrightarrow{(\gamma_{1},n_{1}^{\prime})} \cdots\xrightarrow{(\gamma_{d_{1}-1},n_{d_{1}-1}^{\prime})}s_{\alpha}w_{d_{1 }-1}\] \[p_{q}^{\prime} :s_{\alpha}w_{d_{q}}=w_{d_{q}+1}\xrightarrow{(\gamma_{d_{q}+1}, n_{d_{q}+1})}\cdots\xrightarrow{(\gamma_{r_{q}-1},n_{r_{q}-1})}w_{r_{q}} \xrightarrow{(w_{r_{q}}^{-1}\alpha,k)}s_{\alpha}w_{r_{q}}\xrightarrow{(\gamma _{r_{q}},n_{r_{q}}^{\prime})}\cdots\] \[\xrightarrow{(\gamma_{d_{q+1}-1},n_{d_{q+1}-1}^{\prime})}s_{ \alpha}w_{d_{q+1}}.\]
We see that \(p^{\prime}\) as defined above is explicitly described only in terms of \(p\) and independently of the \((m_{i})\).
To summarize: We chose integers \((m_{i}^{\prime})\) only depending on \((m_{i}),u,v,n,<,a\) with the following property: For each path \(p\in\operatorname{paths}_{\leq n}^{<}(s_{\alpha}u\Rightarrow s_{\alpha}v)\), we may write
\[\psi(\Psi^{-1}\operatorname{pad}_{(m_{i})}(p))=\Psi^{-1}(\operatorname{pad}_ {(m_{i}^{\prime})}(p^{\prime}))\text{ for some path }p^{\prime}\in \operatorname{paths}_{\leq n}^{<}(u\Rightarrow v).\]
It follows that the function \(\tilde{\psi}\) as claimed exists. It is uniquely determined since \(\Psi^{-1}\) and \(\operatorname{pad}_{(m_{i}^{\prime})}\) are injective. Moreover, we saw that \(p^{\prime}:=\tilde{\psi}(p)\) can be explicitly described depending only on \(p\) and not the integers \((m_{i})\).
The function \(\tilde{\psi}\) preserves lengths of paths by construction. Using the explicit description, it is possible to verify that it also satisfies the weight constraint stated in Theorem 4.2 (a). The interested reader is invited to verify that the constructions of \(\psi^{\prime},\varphi_{1},\varphi_{2},\varphi^{\prime}\) of Naito-Watanabe carry through in similar ways.
With the main lemma proved, we can conclude Theorem 4.2 immediately. Indeed, it remains to show that the functions \(\tilde{\psi}\) and \(\tilde{\varphi}:=(\tilde{\varphi}_{1},\tilde{\varphi}_{2})\) from Lemma 4.7 are bijective. Since \(\psi\) is bijective with \(\psi^{\prime}\) being its inverse, it follows from the categorical definition and a bit of diagram chasing that \(\tilde{\psi}\) is bijective with \(\tilde{\psi}^{\prime}\) its inverse. Similarly, one concludes that \(\tilde{\varphi}\) is bijective with \(\tilde{\varphi}^{\prime}\) its inverse. The main result of this section is proved.
_Remark 4.8_.:
1. Theorem 4.2 can be conveniently restated using the language of weight multisets from [23, Definition 5.9]. For \(u,v\in W\) and \(0\leqslant n\leqslant\#\Phi^{+}\), we write \(\operatorname{wts}(u\Rightarrow v\dasharrow v\pi_{>n})\) for the multiset \[\{(\operatorname{wt}(p),\ell(p))\mid p\in\operatorname{paths}^{<}_{\leq n}(u \Rightarrow v)\}_{m}.\] We proved that this yields a well-defined multiset \(\operatorname{wts}(u\Rightarrow v\dasharrow v^{\prime})\) for all \(u,v,v^{\prime}\in W\). If \(a=(\alpha,k)\in\Delta_{\operatorname{af}}\) is a simple affine root with \((v^{\prime})^{-1}\alpha\in\Phi^{-}\) and \(u^{-1}\alpha\in\Phi^{-}\), then \[\operatorname{wts}(u\Rightarrow v\dasharrow v^{\prime})=\{(\omega+k(v^{-1} \alpha^{\vee}-u^{-1}\alpha^{\vee}),e)\mid(\omega,e)\in\operatorname{wts}(s_{ \alpha}u\Rightarrow s_{\alpha}v\dasharrow s_{\alpha}v^{\prime})\}_{m}.\] If \((v^{\prime})^{-1}\alpha\in\Phi^{-}\) and \(u^{-1}\alpha\in\Phi^{+}\), then \(\operatorname{wts}(u\Rightarrow v\dasharrow v^{\prime})\) is the additive union of the two multisets \[\{(\omega+k(v^{-1}\alpha^{\vee}-u^{-1}\alpha^{\vee}),e)\mid( \omega,e)\in\operatorname{wts}(s_{\alpha}u\Rightarrow s_{\alpha}v\dasharrow s _{\alpha}v^{\prime})\}_{m}\] \[\cup \{(\omega-ku^{-1}\alpha^{\vee},e)\mid(\omega,e)\in\operatorname{ wts}(s_{\alpha}u\Rightarrow v\dasharrow v^{\prime})\}_{m}.\]
2. The double Bruhat graph can be seen as a generalization of the quantum Bruhat graph, cf. [23, Proposition 5.12]. It is very helpful to compare results about the double Bruhat graph with the much better developed theory of the quantum Bruhat graph. Under this point of view, one obtains a version of Theorem 4.2 for the quantum Bruhat graph. This is a well-known recursive description of weights in the quantum Bruhat graph, cf. [15, Lemma 7.7].
3. The remainder of this paper will mostly study consequences of recursive relations from Theorem 4.2. By studying the proof of Theorem 5.2 below, one may see that the weight multiset is already uniquely determined by these recursive relations together with a few additional facts to fix a recursive start. This can be seen as an alternative proof that the weight multiset is independent of the chosen reflection order, cf. [23, Corollary 5.8].
## 5 Iwahori-Hecke algebra
Let us briefly motivate the definition of the Iwahori-Hecke algebra associated with an affine Weyl group.
Under suitable assumptions on our group and our fields, the _Hecke algebra_\(\mathcal{H}(G,I)\) is classically defined to be the complex vector space of all compactly supported functions
\(f:G(\vec{F})\to\mathbb{C}\) satisfying \(f(i_{1}gi_{2})=f(g)\) for all \(g\in G(\vec{F}),i_{1},i_{2}\in I\). It becomes an algebra where multiplication is defined via convolution of functions. In this form, it occurs in the classical formulation of the Satake isomorphism [10].
It is proved by Iwahori-Matsomoto [14, Section 3] that \(\mathcal{H}(G,I)\) has a basis given by \(\{S_{x}\mid x\in\widetilde{W}\}\) over \(\mathbb{C}\) where the multiplication is uniquely determined by the conditions
\[S_{x}S_{y}=S_{xy}, x,y\in\widetilde{W}\text{ and }\ell(xy)=\ell(x)+\ell(y),\] \[S_{r_{a}}S_{x}=qS_{r_{a}x}+(q-1)S_{x}, x\in\widetilde{W},a\in\Delta_{\text{af}}\text{ and }\ell(r_{a}x)<\ell(x).\]
Here, \(q:=\#\left(\mathcal{O}_{F}/\mathfrak{m}_{\mathcal{O}_{F}}\right)\) is the cardinality of the residue field of \(F\). The basis element \(S_{x}\) corresponds to the indicator function of the coset \(IxI\subseteq G(\vec{F})\).
With the convenient change of variables \(T_{x}:=q^{-\ell(x)/2}S_{x}\in\mathcal{H}(G,I)\), the above relations get the equally popular form
\[T_{x}T_{y}=T_{xy}, x,y\in\widetilde{W}\text{ and }\ell(xy)=\ell(x)+\ell(y),\] \[T_{r_{a}}T_{x}=T_{r_{a}x}+(q^{1/2}-q^{-1/2})T_{x}, x\in\widetilde{W},a\in\Delta_{\text{af}}\text{ and }\ell(r_{a}x)<\ell(x).\]
Since the number \(q\) is independent of the choice of affine root system, we define the _Iwahori-Hecke algebra_ of \(\widetilde{W}\) as follows.
**Definition 5.1**.: The _Iwahori-Hecke algebra_\(\mathcal{H}(\widetilde{W})\) of \(\widetilde{W}\) is the algebra over \(\mathbb{Z}[Q]\) defined by the generators
\[T_{x},\qquad x\in\widetilde{W}\]
and the relations
\[T_{x}T_{y}=T_{xy}, x,y\in\widetilde{W}\text{ and }\ell(xy)=\ell(x)+\ell(y),\] \[T_{r_{a}}T_{x}=T_{r_{a}x}+QT_{x}, x\in\widetilde{W},a\in\Delta_{\text{af}}\text{ and }\ell(r_{a}x)<\ell(x).\]
One easily sees that \(\mathcal{H}(\widetilde{W})\) is a free \(\mathbb{Z}[Q]\)-module with basis \(\{T_{x}\mid x\in\widetilde{W}\}\), and that each \(T_{x}\) is invertible, because
\[T_{r_{a}}(T_{r_{a}}-Q)=1,\qquad a\in\Delta_{\text{af}}.\]
All results presented in this article can be immediately generalized to most other conventions for the Iwahori-Hecke algebra, e.g. by substituting \(Q=q^{1/2}-q^{-1/2}\).
### Products via the double Bruhat graph
We are interested in the question how to express arbitrary products of the form \(T_{x}T_{y}\) with \(x,y\in\widehat{W}\) in terms of this basis. This is related to understanding the structure of the subset \(IxI\cdot IyI\subseteq G(\vec{F})\). While it might be too much to ask for a general formula, we can understand these products (and thus the Iwahori-Hecke algebra) better by relating it to the double Bruhat graph. Our main result of this section is the following:
**Theorem 5.2**.: _Let \(C_{1}>0\) be a constant and define \(C_{2}:=(8\#\Phi^{+}+4)C_{1}\)._
_Let \(x=w_{x}\varepsilon^{\mu_{x}},z=w_{z}\varepsilon^{\mu_{z}}\in\widetilde{W}\) such that \(x\) is \(C_{2}\)-regular and \(z\) is \(2\ell(x)\)-regular. Define polynomials \(\varphi_{x,z,yz}\in\mathbb{Z}[Q]\) via_
\[T_{x}T_{z}=\sum_{y\in\widetilde{W}}\varphi_{x,z,yz}T_{yz}\in\mathcal{H}( \widehat{W}).\]
_Pick an element \(y=w_{y}\varepsilon^{\mu_{y}}\in\widehat{W}\) such that \(\ell(x)-\ell(y)<C_{1}\). Let_
\[\mathrm{LP}(x)=\{v_{x}\},\quad\mathrm{LP}(y)=\{v_{y}\},\quad\mathrm{LP}(z)=\{v _{z}\}\]
_and define the multiset_
\[M:=\left\{\begin{array}{ll}&(\omega_{1},\ell_{1})\in\mathrm{wts}(v_{x} \Rightarrow v_{y}\dasharrow w_{z}v_{z}),\\ \par&(\omega_{2},\ell_{2})\in\mathrm{wts}(w_{x}v_{x}w_{0}\Rightarrow w_{y}v_ {y}w_{0}\dasharrow w_{y}w_{z}v_{z})\\ &\mbox{s.th. }v_{y}^{-1}\mu_{y}=v_{x}^{-1}\mu_{x}-\omega_{1}+w_{0}\omega_{2} \end{array}\right\}_{m}.\]
_Then_
\[\varphi_{x,z,yz}=\sum_{e\in M}Q^{e}.\]
_Remark 5.3_.: (a) In principle, we have the following recursive relations to calculate \(T_{x}T_{z}\) as long as all occuring elements are in shrunken Weyl chambers, e.g. 2-regular: Pick a simple affine root \(a=(\alpha,k)\in\Delta_{\mathrm{af}}\). If \(xr_{a}<x\) (i.e. \(v_{x}^{-1}\alpha\in\Phi^{+}\)), then
\[T_{x}T_{z}=T_{xr_{a}}T_{r_{a}}T_{z}=\begin{cases}T_{xr_{a}}T_{r_{a}z},&r_{a}z> z\ (\mbox{i.e. }(w_{z}v_{z})^{-1}\alpha\in\Phi^{+}),\\ T_{xr_{a}}T_{r_{a}z}+QT_{xr_{a}}T_{z},&r_{a}z<z\ (\mbox{i.e. }(w_{z}v_{z})^{-1} \alpha\in\Phi^{-}).\end{cases}\]
This kind of recursive relation is analogous to the recursive behaviour of the multiset \(\mathrm{wts}(v_{x}\Rightarrow v_{y}\dasharrow w_{z}v_{z})\), cf. Theorem 4.2.
Similarly, if \(r_{a}x<x\) (i.e. \((w_{x}v_{x})^{-1}\alpha\in\Phi^{-}\)), we get
\[T_{x}T_{z} =T_{r_{a}}T_{r_{a}x}T_{z}=\sum_{y\in\widehat{W}}\varphi_{r_{a}x,z, yz}T_{r_{a}}T_{yz}\] \[=\sum_{y\in\widehat{W}}\varphi_{r_{a}x,z,yz}\cdot\begin{cases}T_{r _{a}yz},&r_{a}yz>yz\ (\mbox{i.e. }(w_{y}w_{z}v_{z})^{-1}\alpha\in\Phi^{+}),\\ T_{r_{a}yz}+QT_{yz},&r_{a}yz<yz\ (\mbox{i.e. }(w_{y}w_{z}v_{z})^{-1}\alpha\in\Phi^{-}). \end{cases}\]
This kind of recursive relation is analogous to the recursive behaviour of the multiset \(\mathrm{wts}(w_{x}v_{x}w_{0}\Rightarrow w_{y}v_{y}w_{0}\dasharrow w_{y}w_{z}v_ {z})\), cf. Theorem 4.2.
For the proof of Theorem 5.2, we have to apply these recursive relations iteratively while keeping track of the length and regularity conditions to ensure everything happens inside the shrunken Weyl chambers.
2. Let us compare Theorem 5.2 to the quantum Bruhat graph. In view of [12, Proposition 5.12], it follows that \(\varphi_{x,z,yz}=0\) unless \[v_{y}^{-1}\mu_{y}\leq v_{x}^{-1}\mu_{x}-\operatorname{wt}_{\operatorname{QB}(W) }(v_{x}\Rightarrow v_{y})-\operatorname{wt}_{\operatorname{QB}(W)}(w_{y}v_{y} \Rightarrow w_{x}v_{x}).\] By [12, Theorem 4.2], this latter inequality is equivalent to the Bruhat order condition \(y\leq x\), which is (by definition of the Iwahori Hecke algebra) always a necessary condition for \(\varphi_{x,z,yz}\) to be non-zero.
3. If the condition \(\ell(x)-\ell(y)<C_{1}\) gets strengthened to \(\ell(x)+\ell(z)-\ell(yz)<C_{1}\), it follows that the product \(yz\) must be length additive, so \(v_{y}=w_{z}v_{z}\)[12, Lemma 2.13]. One of the simple facts on the double Bruhat graph [12, Lemma 5.10] yields \[\operatorname{wts}(w_{x}v_{x}w_{0}\Rightarrow w_{y}v_{y}w_{0}\dashrightarrow w_{y}w_{z}v_{z})=\begin{cases}\emptyset,&w_{y}v_{y} \neq w_{x}v_{x},\\ \{(0,0)\}_{m},&w_{y}v_{y}=w_{x}v_{x}.\end{cases}\] So the multiset \(M\) as defined in Theorem 5.2 is empty unless \(w_{y}v_{y}=w_{x}v_{x}\), in which case it will be equal to \[M=\{\ell\mid(\omega,\ell)\in\operatorname{wts}(v_{x}\Rightarrow v_{y})\text{ s.th. }v_{y}^{-1}\mu_{y}=v_{x}^{-1}\mu_{x}-\omega\}_{m}.\] This recovers Theorem 1.2. The unique smallest element of \(\operatorname{wts}(v_{x}\Rightarrow v_{y})\) from [12, Proposition 5.12] corresponds to the uniquely determined largest element in \(\widetilde{W}\) having non-zero coefficient in \(T_{x}T_{z}\). This element is known as the _Demazure product_ of \(x\) and \(z\) in \(\widetilde{W}\). We recover the formula for the Demazure product of \(x\) and \(z\) in terms of the quantum Bruhat graph from He-Nie [13, Proposition 3.3] in the situation of Theorem 5.2.
**Definition 5.4**.:
1. For \(x\in\widetilde{W}\) and \(w\in W\), we define the multiset \(Y(x,w)\) as follows: The underlying set \(|Y(x,w)|\) is a subset of \(\widetilde{W}\times\mathbb{Z}\), and the multiplicity of the pair \((y,e)\in\widetilde{W}\times\mathbb{Z}\) in \(Y(x,w)\) is defined via the equation \[T_{x}T_{w\varepsilon^{2\rho^{\vee}\ell(x)}}=\sum_{(y,e)\in Y(x,w)}Q^{e}T_{yw \varepsilon^{2\rho^{\vee}\ell(x)}}.\]
2. We define the usual product group structure on \(\widetilde{W}\times\mathbb{Z}\), i.e. \[(y_{1},e_{1})\cdot(y_{2},e_{2}):=(y_{1}y_{2},e_{1}+e_{2})\] for \(y_{1},y_{2}\in\widetilde{W}\) and \(e_{1},e_{2}\in\mathbb{Z}\). If \(M\) is a multiset with \(|M|\subseteq\widetilde{W}\times\mathbb{Z}\), we write \(M\cdot(y,e)\) for the multiset obtained by the right action of \((y,e)\in\widetilde{W}\times\mathbb{Z}\).
**Lemma 5.5**.: _Let \(x,z\in\widetilde{W}\) such that \(z\) is \(2\ell(x)\)-regular._
1. _Write_ \(z=w_{z}\varepsilon^{\mu_{z}}\) _and_ \(\mathrm{LP}(z)=\{v_{z}\}\)_. Then_ \[T_{x}T_{z}=\sum_{(y,e)\in Y(x,w_{z}v_{z})}Q^{e}T_{yz}.\]
2. _Let_ \(a=(\alpha,k)\in\Delta_{\mathrm{af}}\) _with_ \(xr_{a}<x\) _and_ \(w\in W\)_. If_ \(w^{-1}\alpha\in\Phi^{+}\)_, we have_ \[Y(x,w)=Y(xr_{a},s_{\alpha}w)\cdot(r_{a},0).\] _If_ \(w^{-1}\alpha\in\Phi^{-}\)_, we express_ \(Y(x,w)\) _as the additive union of multisets_ \[Y(x,w)= \Big{(}Y(xr_{a},s_{\alpha}w)\cdot(r_{a},0)\Big{)}\cup\Big{(}Y(xr_{a},w)\cdot(1,1)\Big{)}.\]
3. _For_ \(y=w_{y}\varepsilon^{\mu_{y}}\in\widehat{W}\) _and_ \(e\in\mathbb{Z}\)_, the multiplicity of_ \((y,e)\in Y(x,w)\) _agrees with the multiplicity of_ \((y^{-1},e)\) _in_ \(Y(x^{-1},\mathrm{cl}(y)w)\)_, where_ \(\mathrm{cl}(y)\in W\) _is the classical part of_ \(y\in W\ltimes X_{*}(T)_{\Gamma_{0}}\)_._
Proof.:
1. The regularity condition allows us to write \(z\) as the length additive product \[z=z_{1}\cdot z_{2},\qquad z_{1}=w_{z}v_{z}\varepsilon^{2\rho^{\vee}\ell(x)}, \qquad z_{2}=v_{z}^{-1}\varepsilon^{\mu_{z}-v_{z}2\rho^{\vee}\ell(x)}.\] Then we get \[T_{x}T_{z}=T_{x}T_{z_{1}}T_{z_{2}}=\sum_{(y,e)\in Y(x,w_{z}v_{z})}T_{yz_{1}}T _{z_{2}}.\] By regularity of \(z_{1}\), it follows that \(\mathrm{LP}(yz_{1})=\mathrm{LP}(z_{1})=\{1\}\) for each \(y\leq x\) in the Bruhat order. Thus \(T_{yz_{1}}T_{z_{2}}=T_{yz_{1}z_{2}}=T_{yz}\) for each \((y,e)\in Y(x,w_{z}v_{z})\).
2. Let \(z=w\varepsilon^{\mu}\) with \(\mu\) superregular and dominant, as in (a). Use the fact \[T_{x}T_{z}=T_{xr_{a}}T_{r_{a}}T_{z}\] and evaluate \(T_{r_{a}}T_{z}\) depending on whether \(w^{-1}\alpha\) is positive or negative.
3. Fix \(y\in\widehat{W}\) and assume that both \(z\) and \(yz\) are \(2\ell(x)\)-regular. We calculate \[\sum_{e\in\mathbb{Z}}\left(\text{multiplicity of }(y,e)\text{ in }Y(x,w_{z}v_{z})\right)Q^{e}\] \[= (\text{coefficient of }T_{yz}\text{ in }T_{x}T_{z})\] \[= (\text{coefficient of }T_{1}\text{ in }T_{(yz)^{-1}}T_{x}T_{z})\] \[= (\text{coefficient of }T_{1}\text{ in }T_{z^{-1}}T_{x^{-1}}T_{yz})\] \[= (\text{coefficient of }T_{z}\text{ in }T_{x^{-1}}T_{yz})\] \[= \sum_{e\in\mathbb{Z}}\left(\text{multiplicity of }(y^{-1},e)\text{ in }Y(x^{-1},w_{y}w_{z}v_{z})\right)Q^{e}.\]
Comparing coefficients of \(Q^{e}\) in \(\mathbb{Z}[Q]\), the claim follows.
_Remark 5.6_.: The connection to our previous article [12] is given as follows: For \(x,z\) as in Lemma 5.5, the regularity condition on \(z\) basically ensures that \(zIz^{-1}\) behaves like \({}^{w_{z}v_{z}}U(L)\), so we can approximate \(IzI\) by the semi-infinite orbit \(Iz\,^{v_{z}}U(L)=I\,^{w_{z}v_{z}}U(L)z\). Then \(IxI\cdot IzI\) is very close to
\[IxI\cdot{}^{w_{z}v_{z}}U(L)z=\bigcup_{(y,e)\in Y(x,w_{z}v_{z})}Iy\,^{w_{z}v_{z} }U(L)z\subseteq G(\check{F}).\]
Now observe for any \(y\in\widetilde{W}\) that
\[IxI\cap Iy\,^{w_{z}v_{z}}U(L)\neq\emptyset\iff y\in IxI\,\cdot{}^{w_{z}v_{z}}U (L).\]
So the multiset \(Y(x,w)\) is the representation-theoretic correspondent of the main object of interest in [12, Theorem 6.2].
**Lemma 5.7**.: _Let \(x=w_{x}\varepsilon^{\mu_{x}}\in\widetilde{W}\) and pick elements \(u_{1},u_{2}\in W\) as well as \(v_{x}\in\mathrm{LP}(x)\)._
1. _The multiset_ \(\mathrm{wts}(v_{x}\Rightarrow u_{1}\dashrightarrow u_{2})\) _is equal to the additive union of multisets_ \[\bigcup_{(w_{y}\varepsilon^{\mu y},e)\in Y(x,u_{2})}\Bigl{\{}(v_{x}^{-1}\mu_{ x}-u_{1}^{-1}\mu_{y}+\omega,e+\ell)\mid(\omega,\ell)\in\mathrm{wts}(w_{x}v_{x} \Rightarrow w_{y}u_{1}\dashrightarrow w_{y}u_{2})\Bigr{\}}_{m}.\]
2. _The multiset_ \(\mathrm{wts}(w_{x}v_{x}w_{0}\Rightarrow u_{2}w_{0}\dashrightarrow u_{1})\) _is equal to the additive union of multisets_ \[\bigcup_{\begin{subarray}{c}u_{3}\in W\\ (w_{y}\varepsilon^{\mu y},e)\in Y(x,u_{3})\\ s.t.\ w_{y}u_{3}=u_{1}\end{subarray}}\Big{\{}(w_{0}u_{2}^{-1}w_{y}\mu_{y}-w_{ 0}v_{x}^{-1}\mu_{x}+\omega,e+\ell)\] \[\mid(\omega,\ell)\in\mathrm{wts}(v_{x}w_{0}\Rightarrow w_{y}^{-1}u _{2}w_{0}\dashrightarrow u_{3})\Bigr{\}}_{m}.\]
Proof.: (a) Induction on \(\ell(x)\). In case \(\ell(x)=0\), we get \(Y(x,u_{2})=\{(x,0)\}_{m}\). From [12, Lemma 5.6 (c)], we indeed get that \(\mathrm{wts}(v_{x}\Rightarrow u_{1}\dashrightarrow u_{2})\) is equal to
\[\{(v_{x}^{-1}\mu_{x}-u_{1}^{-1}\mu_{x}+\omega,\ell)\mid(\omega,\ell)\in \mathrm{wts}(w_{x}v_{x}\Rightarrow w_{x}u_{1}\dashrightarrow w_{x}u_{2})\}_ {m}.\]
Now in the inductive step, pick a simple affine root \(a=(\alpha,k)\) with \(xr_{a}<x\). This means \(v_{x}^{-1}\alpha\in\Phi^{+}\) and \(v_{x^{\prime}}:=s_{\alpha}v_{x}\in\mathrm{LP}(x^{\prime})\), where
\[x^{\prime}:=w_{x^{\prime}}\varepsilon^{\mu_{x^{\prime}}}:=xr_{a}=w_{x}s_{ \alpha}\varepsilon^{s_{\alpha}(\mu_{x})+k\alpha^{\vee}}.\]
Let us first consider the case \(u_{2}^{-1}\alpha\in\Phi^{+}\). Then \(Y(x,u_{2})=Y(x^{\prime},s_{\alpha}u_{2})\cdot(r_{a},0)\) by Lemma 5.5 (b). We get
\[\bigcup_{(w_{y}\varepsilon^{\mu y},e)\in Y(x,u_{2})}\Bigl{\{}(v_{x}^{-1}\mu_ {x}-u_{1}^{-1}\mu_{y}+\omega,e+\ell)\mid(\omega,\ell)\in\mathrm{wts}(w_{x}v_{ x}\Rightarrow w_{y}u_{1}\dashrightarrow w_{y}u_{2})\Bigr{\}}_{m}\]
\[=\bigcup_{(w_{y^{\prime}}\varepsilon^{\mu y^{\prime}},e)\in Y(x^{\prime},s_{ \alpha}u_{2})}\Bigl{\{}(v_{x^{\prime}}^{-1}\mu_{x}^{\prime}+kv_{x}^{-1}\alpha^ {\vee}-(s_{\alpha}u_{1})^{-1}\mu_{y^{\prime}}-ku_{1}^{-1}\alpha^{\vee}+\omega,e +\ell)\mid\]
\[(\omega,\ell)\in\mathrm{wts}(w_{x^{\prime}}v_{x^{\prime}}\Rightarrow w_{y^{ \prime}}(s_{\alpha}u_{1})\dashrightarrow w_{y^{\prime}}(s_{\alpha}u_{2})) \Bigr{\}}_{m}.\]
By the inductive assumption, this is equal to \[\{(\omega+k(v_{x}^{-1}\alpha^{\vee}-u_{1}^{-1}\alpha^{\vee}),\ell)\mid(\omega,\ell )\in\operatorname{wts}(s_{\alpha}v_{x}\Rightarrow s_{\alpha}u_{1}\dashrightarrow s _{\alpha}u_{2})\}_{m}.\] By Theorem 4.2 (a), this is equal to \(\operatorname{wts}(v_{x}\Rightarrow u_{1}\dashrightarrow u_{2})\), using the assumption \(u_{2}^{-1}\alpha\in\Phi^{+}\) again. In the converse case where \(u_{2}^{-1}\alpha\in\Phi^{-}\), we argue entirely similarly. Use Lemma 5.5 to write \[Y(x,u_{2})=\Big{(}Y(x^{\prime},s_{\alpha}u_{2})\cdot(r_{a},0)\Big{)}\cup\Big{(} Y(x^{\prime},u_{2})\cdot(1,1)\Big{)}\] Considering Theorem 4.2 (b), the inductive claim follows.
2. One may argue similarly to (a), tracing through somewhat more complicated expressions to reduce to Theorem 4.2 again. Instead, we show that (a) and (b) are equivalent. Recall that \(w_{x}v_{x}w_{0}\in\operatorname{LP}(x^{-1})\)[16, Lemma 2.12]. By (a), we see that \(\operatorname{wts}(w_{x}v_{x}w_{0}\Rightarrow u_{2}w_{0}\dashrightarrow u_{1})\) is equal to \[\bigcup_{(w_{y}e^{\mu y},e)\in Y(x^{-1},u_{1})}\Big{\{} \begin{array}{l}((w_{x}v_{x}w_{0})^{-1}(-w_{x}\mu_{x})-(u_{2}w_{0})^{-1} \mu_{y}+\omega,e+\ell),\\ \mid(\omega,\ell)\in\operatorname{wts}(v_{x}w_{0}\Rightarrow w_{y}u_{2}w_{0} \dashrightarrow w_{y}u_{1})\Big{\}}.\end{array}\]
In view of Lemma 5.5 (c), we recover the claim in (b).
**Lemma 5.8**.: _Let \(C_{1},e\geqslant 0\) be two non-negative integers. Define \(C_{2}:=(8e+4)C_{1}\)._
_Let \(x,y\in\widetilde{W}\) such that \(x\) is \(C_{2}\)-regular and \(\ell(x)-\ell(y)<C_{1}\). Let \(u\in W\). Write_
\[\begin{array}{ll}x=w_{x}\varepsilon^{\mu_{x}},&y=w_{y}\varepsilon^{\mu_{y}},\\ \operatorname{LP}(x)=\{v_{x}\},&\operatorname{LP}(y)=\{v_{y}\}.\end{array}\]
_Define the multiset_
\[M:=\left\{\begin{array}{ll}&(\omega_{1},\ell_{1})\in\operatorname{wts}(v_{x }\Rightarrow v_{y}\dashrightarrow u),\\ \ell_{1}+\ell_{2}\mid&(\omega_{2},\ell_{2})\in\operatorname{wts}(w_{x}v_{x}w_ {0}\Rightarrow w_{y}v_{y}w_{0}\dashrightarrow w_{y}u)\\ &\text{s.th. }v_{y}^{-1}\mu_{y}=v_{x}^{-1}\mu_{x}-\omega_{1}+w_{0}\omega_{2} \end{array}\right\}_{m}.\]
_Then the multiplicity of \((y,e)\) in \(Y(x,u)\) agrees with the multiplicity of \(e\) in \(M\)._
Proof.: Induction on \(e\). Consider the inductive start \(e=0\). If \(0\in M\), then \(\ell_{1}=\ell_{2}=0\) and \(v_{x}=v_{y}\) by definition of \(M\). Hence \(x=y\), and indeed \(0\in M\) has multiplicity \(1\). Similarly, \((y,0)\) also has multiplicity \(1\) in \(Y(x,u)\).
If \(0\notin M\), we see \(x\neq y\) and indeed \((y,0)\notin Y(x,u)\) for \(x\neq y\). This settles the inductive start.
In the inductive step, let us write \(x\) as length additive product \(x=x_{1}x_{2}x_{3}\) where
\[x_{1}=\varepsilon^{4C_{1}w_{x}v_{x}\rho^{\vee}},\qquad x_{2}=w_{x}\varepsilon^{ \mu_{x}-8C_{1}v_{x}\rho^{\vee}},\qquad x_{3}=\varepsilon^{4C_{1}v_{x}\rho^{ \vee}}.\]
Note that the inductive assumptions are satisfied for \(C_{1},e-1,x_{2}\) and any element \(y^{\prime}\in\widetilde{W}\) such that \(\ell(x_{2})-\ell(y^{\prime})<C_{1}\).
The length additivity of \(x=x_{1}x_{2}x_{3}\) implies
\[Y(x,u)=\Big{\{}(y_{1}y_{2}y_{3},e_{1}+e_{2}+e_{3})\mid\begin{array}{l}(y_{3},e _{3})\in Y(x_{3},u),\\ (y_{2},e_{2})\in Y(x_{2},\operatorname{cl}(y_{3})u),\\ (y_{1},e_{1})\in Y(x_{1},\operatorname{cl}(y_{2})\operatorname{cl}(y_{3})u) \end{array}\Big{\}}_{m}.\]
Pick elements
\[(y_{3},e_{3})\in Y(x_{3},u),\quad(y_{2},e_{2})\in Y(x_{2},\operatorname{cl}(y_ {3})u),\quad(y_{1},e_{1})\in Y(x_{1},\operatorname{cl}(y_{2})\operatorname{ cl}(y_{3})u)\]
such that \(\ell(y_{1}y_{2}y_{3})>\ell(x)-C_{1}\) and \(e_{1}+e_{2}+e_{3}=e\).
In this case, we certainly get \(\ell(y_{i})>\ell(x_{i})-C_{1}\) for \(i=1,2,3\). Since \(x_{1},x_{2},x_{3}\) are \(4C_{1}\)-regular by construction, it follows that each \(y_{i}\) is \(2C_{1}\)-regular by \(y_{i}\leq x_{i}\) and \(\ell(y_{i})>\ell(x_{i})-C_{1}\) (studying how regularity behaves in a sequence of Bruhat covers from \(y_{i}\) to \(x_{i}\)). We claim that
\[\ell(y_{1}y_{2}y_{3})=\ell(y_{1})+\ell(y_{2})+\ell(y_{3}).\]
In view of [13, Lemma 2.13], it suffices to see that \(\ell(y_{1}y_{2})=\ell(y_{1})+\ell(y_{2})\) and \(\ell(y_{2}y_{3})=\ell(y_{2})+\ell(y_{3})\) (using regularity). If, say, \(y_{1}y_{2}\) is not a length additive product, we use the same result to find a root \(\alpha\in\Phi\) with \(\ell(y_{1},\operatorname{cl}(y_{2})\alpha)>0\) and \(\ell(y_{2},\alpha)<0\). If \(1\leq m\leq 2C_{1}-1\), then \((\alpha,m)\) is a positive affine root with \(y_{1}(\alpha,m)\in\Phi_{\operatorname{af}}^{-}\) and \(y_{2}^{-1}(\alpha,m)\in\Phi_{\operatorname{af}}^{-}\). It follows that
\[\ell(y_{1}y_{2})\leq\ell(y_{1})+\ell(y_{2})-2C_{1}+2\leq\ell(y_{1})+\ell(y_{2 })-C_{1}.\]
This contradicts the above assumption \(\ell(y_{1}y_{2}y_{3})>\ell(x)-C_{1}\geq\ell(y_{1})+\ell(y_{2})+\ell(y_{3})-C_{1}\). The proof that \(y_{2}y_{3}\) is length additive is completely analogous.
Let us consider the special case \(e_{1}=e_{3}=0\) separately. Then \(y_{1}=x_{1}\) and \(y_{3}=x_{3}\). The length additivity of the product \(x_{1}y_{2}x_{3}\) implies that \(\operatorname{LP}(y_{2})=\{v_{x}\}\) and \(\operatorname{cl}(y_{2})=w_{x}\). Using Lemma 5.7 (a), we can express \(\{(0,0)\}_{m}=\operatorname{wts}(v_{x}\Rightarrow v_{x}\dashrightarrow u)\) in the form
\[\bigcup_{(w_{y}e^{\mu y_{y}},e^{\prime})\in Y(x_{2},u)}\{(\cdots,e^{\prime}+ \ell)\mid(\omega,\ell)\in\operatorname{wts}(w_{x}v_{x}\Rightarrow w_{y}v_{x} \dashrightarrow w_{y}u)\}_{m}.\]
From this and [13, Lemma 5.10], it follows that \(Y(x_{2},u)\) contains only one element \((y^{\prime},e^{\prime})\) with \(\operatorname{cl}(y^{\prime})=w_{x}\), and that this element must be equal to \((x_{2},0)\).
We see that, if \(e_{1}=e_{3}=0\), we must also have \(e_{2}=0\). This case has been settled before.
We hence assume that \(e_{1}+e_{3}>0\). In particular, we may apply the inductive assumption to \(x_{2},y_{2},e_{2}\). Recall that the multiplicity of \((y,e)\) in \(Y(x,u)\) is equal to the number of tuples (with multiplicity)
\[(y_{3},e_{3})\in Y(x_{3},u),\quad(y_{2},e_{2})\in Y(x_{2},\operatorname{cl}(y_ {3})u),\quad(y_{1},e_{1})\in Y(x_{1},\operatorname{cl}(y_{2})\operatorname{ cl}(y_{3})u)\]
such that \(e_{1}+e_{2}+e_{3}=e\) and \(y=y_{1}y_{2}y_{3}\) (necessarily length additive). Hence \(\operatorname{LP}(y_{2})=\{\operatorname{cl}(y_{3})v_{y}\}\) and \(w_{y}=\operatorname{cl}(y_{1})\operatorname{cl}(y_{2})\operatorname{cl}(y_{3})\). By induction, the multiplicity of \((y,e)\) in \(Y(x,u)\) is also equal to the number of tuples (with multiplicity)
\[(y_{3},e_{3})\in Y(x_{3},u),\] \[(\omega_{1},\ell_{1})\in \operatorname{wts}(v_{x}\Rightarrow\operatorname{cl}(y_{3})v_{y} \dashrightarrow\operatorname{cl}(y_{3})u),\] \[(\omega_{2},\ell_{2})\in \operatorname{wts}(w_{x}v_{x}w_{0}\Rightarrow\operatorname{cl}(y _{1})^{-1}w_{y}v_{y}w_{0}\dashrightarrow\operatorname{cl}(y_{1})^{-1}w_{y}u),\] \[(y_{1},e_{1})\in Y(x_{1},\operatorname{cl}(y_{1})^{-1}w_{y}u),\]
satisfying \(e=e_{1}+\ell_{1}+\ell_{2}+e_{3}\) and
\[y_{1}^{-1}yy_{3}^{-1}=\operatorname{cl}(y_{1})^{-1}w_{y}\operatorname{cl}(y_{ 3})^{-1}\varepsilon^{(\operatorname{cl}(y_{3})v_{y})(v_{x}^{-1}\mu_{x_{2}}- \omega_{1}+w_{0}\omega_{2})}.\]
The latter identity can be rewritten, if we write \(y_{3}=w_{3}\varepsilon^{\mu_{3}}\) and \(y_{1}=w_{1}\varepsilon^{\mu_{1}}\), as
\[v_{y}^{-1}\mu_{y}=v_{x}^{-1}\mu_{x_{2}}-\omega_{1}+w_{0}\omega_{2}+v_{y}^{-1} \mu_{3}+(w_{y}v_{y})^{-1}w_{1}\mu_{1}.\]
We see that we may study the contributions of \((y_{3},e_{3},\omega_{1},\ell_{1})\) and \((y_{1},e_{1},\omega_{2},\ell_{2})\) separately.
We may combine the above data for \((y_{3},e_{3},\omega_{1},\ell_{1})\), noticing that we are only interested in the multiset
\[\Big{\{}(-v_{y}^{-1}\mu_{3}+\omega_{1}+v_{x}^{-1}\mu_{x_{3}},e_{3}+\ell_{1}) \mid\begin{array}{c}(w_{3}\varepsilon^{\mu_{3}},e_{3})\in Y(x_{3},u),\\ (\omega_{1},\ell_{1})\in\operatorname{wts}(v_{x}\Rightarrow w_{3}v_{y} \dashrightarrow w_{3}u)\end{array}\Big{\}}_{m}.\]
By Lemma 5.7 (a), the above multiset agrees with \(\operatorname{wts}(v_{x}\Rightarrow v_{y}\dashrightarrow u)\).
Similarly, we may combine the data for \((y_{1},e_{1},\omega_{2},\ell_{2})\), noticing that we are only interested in the multiset
\[\Big{\{}(w_{0}(w_{y}v_{y})^{-1}w_{1}\mu_{1}+\omega_{2}-w_{0}(w_{x}v_{x})^{-1} \mu_{x_{1}},e_{1}+\ell_{2})\mid\]
\[u^{\prime}\in W\]
\[(w_{1}\varepsilon^{\mu_{1}},e_{1})\in Y(x_{1},u^{\prime})\text{ s.th. }w_{1}u^{\prime}=w_{y}u\]
By Lemma 5.7 (b), the above multiset agrees with \(\operatorname{wts}(w_{x}v_{x}w_{0}\Rightarrow w_{y}v_{y}w_{0}\dashrightarrow w _{y}u)\).
We summarize that the multiplicity of \((y,e)\) in \(Y(x,u)\), i.e. the number of tuples \((y_{3},e_{3},\omega_{1},\ell_{1},\omega_{2},\ell_{2},y_{1},e_{1})\) with multiplicity as above, is equal to the number of tuples
\[(\lambda_{1},f_{1})\in \operatorname{wts}(v_{x}\Rightarrow v_{y}\dashrightarrow u)\] \[(\lambda_{2},f_{2})\in \operatorname{wts}(w_{x}v_{x}w_{0}\Rightarrow w_{y}v_{y}w_{0} \dashrightarrow w_{y}u)\]
satisfying \(e=f_{1}+f_{2}\) and
\[v_{y}^{-1}\mu_{y}=v_{x}^{-1}\mu_{x_{2}}-\lambda_{1}+v_{x}^{-1}\mu_{x_{3}}+w_{0 }\lambda_{2}+(w_{x}v_{x})^{-1}\mu_{x_{1}}.\]
Up to evaluating the product \(x=x_{1}x_{2}x_{3}\in W\ltimes X_{*}(T)_{\Gamma_{0}}\), this finishes the induction and the proof.
**Corollary 5.9**.: _Let \(x=w_{x}\varepsilon^{\mu_{x}},z=w_{z}\varepsilon^{\mu_{z}}\in\widetilde{W}\). Write_
\[T_{x}T_{z}=\sum_{y\in\widetilde{W}}\sum_{e\geq 0}n_{y,e}Q^{e}T_{yz},\qquad n _{y,e}\in\mathbb{Z}_{\geq 0}.\]
_Pick elements \(v_{x}\in\mathrm{LP}(x_{x}),v_{z}\in\mathrm{LP}(x_{z}),e\in\mathbb{Z}_{\geq 0}\) and \(y=w_{y}\varepsilon^{\mu_{y}}\in\widetilde{W}\). Then \(n_{y,e}\) is at most equal to the multiplicity of the element_
\[\left(v_{x}^{-1}\left(\mu_{x}-w_{x}^{-1}w_{y}\mu_{y}\right),e\right)\]
_in the multiset_
\[\mathrm{wts}(v_{x}\Rightarrow w_{y}^{-1}w_{x}v_{x}\dashrightarrow w_{z}v_{z}).\]
Proof.: Let us write \(\mathcal{H}(\widetilde{W})_{\geq 0}\) for the subset of those elements of \(\mathcal{H}(\widetilde{W})\) which are non-negative linear combinations of elements of the form \(Q^{e}T_{x}\) for \(e\in\mathbb{Z}_{\geq 0}\) and \(x\in\widetilde{W}\). For dominant coweights \(\lambda_{1},\lambda_{2}\in X_{\bullet}(T)_{\Gamma_{0}}\), we obtain
\[T_{\varepsilon^{w_{x}v_{x}\lambda_{1}x}}T_{z\varepsilon^{v_{x} \lambda_{2}}}=T_{\varepsilon^{w_{x}v_{x}\lambda_{1}}}T_{x}T_{z}T_{\varepsilon^ {v_{x}\lambda_{2}}}\] \[=\sum_{y\in\widetilde{W}}\sum_{e\geq 0}n_{y,e}Q^{e}T_{ \varepsilon^{w_{x}v_{x}\lambda_{1}}}T_{yz}T_{\varepsilon^{v_{x}\lambda_{2}}}\] \[\in\sum_{y\in\widetilde{W}}\sum_{e\geq 0}n_{y,e}Q^{e}T_{ \varepsilon^{w_{x}v_{x}\lambda_{1}yz\varepsilon^{v_{x}\lambda_{2}}}}+\mathcal{ H}(\widetilde{W})_{\geq 0}.\]
So the quantity \(n_{y,e}\) can only increase if we replace \((x,y,z)\) by \((\varepsilon^{w_{x}v_{x}\lambda_{1}}x,\varepsilon^{w_{x}v_{x}\lambda_{1}}y,z \varepsilon^{w_{z}\lambda_{2}})\). Choosing our dominant coweights \(\lambda_{1},\lambda_{2}\) appropriately regular, the claim follows from Lemma 5.8.
Proof of Theorem 5.2.: In view of Corollary 5.9 and the definition of paths in the double Bruhat graph, it follows easily that for all \(x,y,z\in\widetilde{W}\), the degree of the polynomial \(\varphi_{x,y,z}\) in \(\mathbb{Z}[Q]\) is bounded from above by \(\#\Phi^{+}\) (re-proving this well-known fact). Thus the theorem follows by assuming \(e\leq\#\Phi^{+}\) in Lemma 5.8 (noticing that also the multiset \(M\) cannot contain elements \(>\#\Phi^{+}\) using the definition of paths in the double Bruhat graph).
### Class polynomial
Choose for each \(\sigma\)-conjugacy class \(\mathcal{O}\subseteq\widetilde{W}\) a minimal length element \(x_{\mathcal{O}}\in\mathcal{O}\). Then the class polynomials associated with each \(x\in\widetilde{W}\) are the uniquely determined polynomials \(f_{x,\mathcal{O}}\in\mathbb{Z}[Q]\) satisfying
\[T_{x}\equiv\sum_{\mathcal{O}}f_{x,\mathcal{O}}T_{x_{\mathcal{O}}} \pmod{[\mathcal{H},\mathcal{H}]_{\sigma}},\]
where \([\mathcal{H},\mathcal{H}]_{\sigma}\) is the \(\mathbb{Z}[Q]\)-submodule of \(\mathcal{H}\) generated by the elements of the form
\[[h,h^{\prime}]_{\sigma}=hh^{\prime}-h^{\prime}\sigma(h)\in\mathcal{H}.\]
These polynomials \(f_{x,\mathcal{O}}\in\mathbb{Z}[Q]\) are independent of the choice of minimal length representatives \(x_{\mathcal{O}}\in\mathcal{O}\), and there is an explicit algorithm to compute them, cf. [11]. Using this algorithm, one easily sees the following boundedness property: Whenever \(\ell(x)<\ell(x_{\mathcal{O}})\), we must have \(f_{x,\mathcal{O}}=0\). The main result of this section is the following.
**Theorem 5.10**.: _Let \(B>0\) be any real number. There exists an explicitly described constant \(B^{\prime}>0\), depending only on \(B\) and the root system \(\Phi\), such that the following holds true:_
_Let \(x=w\varepsilon^{\mu}\in\widehat{W}\) be \(B^{\prime}\)-regular and write \(\operatorname{LP}(x)=\{v\}\). For each \(\sigma\)-conjugacy class \(\mathcal{O}\subseteq\widehat{W}\) with \(\langle v^{-1}\mu-\nu(\mathcal{O}),2\rho\rangle\leq B\) and \(\kappa(\mathcal{O})=\kappa(x)\), we have_
\[f_{x,\mathcal{O}}=\sum_{\begin{subarray}{c}(\omega,e)\in\operatorname{wts}(v \Rightarrow\sigma(wv))\text{ s.t.}\\ \nu(\mathcal{O})=\operatorname{avg}_{\sigma}(v^{-1}\mu-\omega)\end{subarray}}Q ^{e}\in\mathbb{Z}[Q].\]
_Remark 5.11_.:
1. Our proof reduces Theorem 5.10 to Theorem 5.2. This yields a short and instructive proof, but results in a very large value of \(B^{\prime}\). One may alternatively compare the aforementioned algorithm of He-Nie directly with Theorem 4.2 to obtain a significantly smaller value of \(B^{\prime}\).
2. Explicit formulas for the full class polynomials, rather than just degree and sometimes leading coefficients, have been very rare in the past. One exception of this are the elements with finite Coxeter part as studied in [11]. In the setting of Theorem 5.10, this means that \(v^{-1}\sigma(wv)\in W\) has a reduced expression in \(W\) where every occurring simple reflection lies in a different \(\sigma\)-orbit in \(S\). Then the class polynomial from [11, Theorem 7.1] is, translating to our notation as above, given by \(Q^{\ell(v^{-1}\sigma(wv))}\). Write \(v^{-1}\sigma(wv)=s_{\alpha_{1}}\cdots s_{\alpha_{n}}\) for such a reduced expression as above, and choose a reflection order \(\prec\) with \(\alpha_{1}\prec\cdots\prec\alpha_{n}\). Then one sees that there is only one unlabelled \(\prec\)-increasing path from \(v\) to \(\sigma(wv)\) in the double Bruhat graph, given by \[v\to vs_{\alpha_{1}}\rightarrow\cdots\to vs_{\alpha_{1}}\cdots s_{ \alpha_{n}}=\sigma(wv).\] This path has length \(n\). Since the simple coroots \(\alpha_{1},\ldots,\alpha_{n}\) lie in pairwise distinct \(\sigma\)-orbits, it follows for any coroot \(\omega\in\mathbb{Z}\Phi^{\vee}\) that there is at most one choice of integers \(m_{1},\ldots,m_{n}\in\mathbb{Z}\) with \[m_{1}\alpha_{1}^{\vee}+\cdots+m_{n}\alpha_{n}^{\vee}\equiv\omega\in X_{*}(T)_ {\Gamma}.\] With a bit of bookkeeping, one may explicitly describe \(\operatorname{wts}(v\Rightarrow\sigma(wv))\) as a multiset of pairs \((\omega,n)\), each with multiplicity one, for exactly those coweights \(\omega\) which are non-negative linear combinations of the simple coroots \(\alpha_{1}^{\vee},\ldots,\alpha_{n}^{\vee}\). This easy double Bruhat theoretic calculation recovers [11, Theorem 7.1] in the setting of Theorem 5.10.
3. Let \(J\subseteq\Delta\) be the support of \(v^{-1}\sigma(wv)\) in \(W\). Let \(v^{J}\in W^{J}\) be the unique minimal length element in \(vJ\). Write \(v=v^{J}v_{1}\) and \(\sigma(wv)=v^{J}v_{2}\) so that \(v_{1},v_{2}\in W_{J}\). Choosing a suitable reflection order, we get a one to one correspondence between paths in the double Bruhat graph of \(W\) from \(v\) to \(\sigma(wv)\) and paths in the double Bruhat graph of \(W_{J}\) from \(v_{1}\) to \(v_{2}\). The resulting statement on class polynomials recovers [15, Theorem C] in the setting of Theorem 5.10.
Proof of Theorem 5.10.: Define \(C_{1}:=B+1\), and let \(C_{2}>0\) be as in Theorem 5.2.
By choosing \(B^{\prime}\) appropriately, we may assume that we can write \(x\) as a length-additive product
\[x=x_{1}x_{2},\qquad x_{1}=wv\varepsilon^{\mu_{1}},\qquad x_{2}=v^{-1} \varepsilon^{\mu_{2}}\]
such that \(x_{1}\) is \(2\ell(x_{2})\)-regular and \(x_{2}\) is \(C_{2}\)-regular. Observe that \(\mathrm{LP}(x_{2})=\{v\}\) and \(\mathrm{LP}(x_{1})=\{1\}\). Then
\[T_{x}=T_{x_{1}}T_{x_{2}}\equiv T_{x_{2}}\sigma(T_{x_{1}})\pmod{[\mathcal{H}, \mathcal{H}]_{\sigma}}.\]
Write \(\mathcal{H}_{\leq\ell(x)-B-1}\) for the \(\mathbb{Z}[Q]\)-submodule of \(\mathcal{H}\) generated by all elements \(T_{z}\) satisfying \(\ell(z)<\ell(x)-B\).
Using Theorem 1.2, we may write
\[T_{x_{2}}T_{\sigma(x_{1})}\equiv\sum_{(\omega,e)\in\mathrm{wts}(v\Rightarrow \sigma(wv))}Q^{e}T_{\varepsilon^{v^{-1}\mu-\omega}}\pmod{\mathcal{H}_{\leq \ell(x)-B-1}}.\]
So if \(\mathcal{O}\) satisfies \(\langle v^{-1}\mu-\nu(\mathcal{O}),2\rho\rangle\leq B\), we see that
\[f_{x,\mathcal{O}}=\sum_{(\omega,e)\in\mathrm{wts}(v\Rightarrow\sigma(wv))}Q^ {e}f_{\varepsilon^{v^{-1}\mu-\omega},\mathcal{O}}.\]
Here, we used the above observation that \(f_{y,\mathcal{O}}=0\) if \(\ell(y)<\langle\nu(\mathcal{O}),2\rho\rangle\). By regularity of \(v^{-1}\mu\) with respect to \(\omega\), we see that \(v^{-1}\mu-\omega\) is always dominant and \(1\)-regular in the above sum. Hence
\[f_{\varepsilon^{v^{-1}\mu-\omega},\mathcal{O}}=\begin{cases}1,&\text{if }\nu( \mathcal{O})=\mathrm{avg}_{\sigma}(v^{-1}\mu-\omega),\\ 0,&\text{otherwise.}\end{cases}\]
The claim follows.
## 6 Affine Deligne-Lusztig varieties
One crucial feature of the class polynomials \(f_{x,\mathcal{O}}\) is that they encode important information on the geometry of affine Deligne-Lusztig varieties.
**Theorem 6.1** ([14, Theorem 2.19]).: _Let \(x\in\widehat{W}\) and \([b]\in B(G)\). Denote_
\[f_{x,[b]}:=\sum_{\mathcal{O}}Q^{\ell(\mathcal{O})}f_{x,\mathcal{O}}\in\mathbb{ Z}[Q],\]
_where the sum is taken over all \(\sigma\)-conjugacy classes \(\mathcal{O}\subset\widetilde{W}\) whose image in \(B(G)\) is \([b]\). For each such \(\sigma\)-conjugacy class \(\mathcal{O}\), we write_
\[\ell(\mathcal{O})=\min\{\ell(y)\mid y\in\mathcal{O}\}.\]
_Then \(X_{x}(b)\neq\emptyset\) if and only if \(f_{x,[b]}\neq\emptyset\). In this case,_
\[\dim X_{x}(b)=\frac{1}{2}\left(\ell(x)+\deg(f_{x,[b]})\right)-\langle\nu(b), 2\rho\rangle\]
_and the number of \(J_{b}(F)\)-orbits of top dimensional irreducible components in \(X_{x}(b)\) is equal to the leading coefficient of \(f_{x,[b]}\). _
Combining with the explicit description of class polynomials from Theorem 5.10, we conclude the following.
**Proposition 6.2**.: _Let \(B>0\) be any real number. There exists an explicitly described constant \(B^{\prime}>0\), depending only on \(B\) and the root system \(\Phi\), such that the following holds true:_
_Let \(x=\text{we}^{\mu}\in\widetilde{W}\) be \(B^{\prime}\)-regular and write \(\operatorname{LP}(x)=\{v\}\). Let \([b]\in B(G)\) such that \(\langle v^{-1}\mu-\nu(b),2\rho\rangle<B\) and \(\kappa(b)=\kappa(x)\). Let \(E\) denote the multiset_
\[E=\{e\mid(\omega,e)\in\operatorname{wts}(v\Rightarrow\sigma(wv))\text{ s.th. }\nu(b)=\operatorname{avg}_{\sigma}(v^{-1}\mu-\omega)\}_{m}.\]
_Then \(X_{x}(b)\neq\emptyset\) if and only if \(E\neq\emptyset\). In this case, set \(e:=\max(E)\). Then_
\[\dim X_{x}(b)=\frac{1}{2}\left(\ell(x)+e-\langle\nu(b),2\rho\rangle\right),\]
_and the number of \(J_{b}(F)\)-orbits of top dimensional irreducible components of \(X_{x}(b)\) is equal to the multiplicity of \(e\) in \(E\)._
Proof.: Let \(\mathcal{O}\subseteq\widetilde{W}\) be the unique \(\sigma\)-conjugacy class whose image in \(B(G)\) is \([b]\) (unique by regularity). Then \(\ell(\mathcal{O})=\langle\nu(b),2\rho\rangle\) and \(f_{x,[b]}=Q^{\ell(\mathcal{O})}f_{x,\mathcal{O}}\). Expressing
\[f_{x,\mathcal{O}}=\sum_{e\in E}Q^{e}\]
using Theorem 5.10, the statements follow immediately using Theorem 6.1.
For split groups, this recovers [11, Corollary 6.9] up to possibly different regularity constraints. In practice, one may use Proposition 6.2 to deduce statements on the double Bruhat graph from the well-studied theory of affine Deligne-Lusztig varieties.
**Corollary 6.3**.: _Let \(u,v\in W\) and let \(J=\operatorname{supp}(u^{-1}v)\subseteq\Delta\) be the support of \(u^{-1}v\) in \(W\), and \(\omega\in\mathbb{Z}\Phi^{\vee}\)._
1. _Suppose that_ \(\ell(u^{-1}v)\) _is equal to_ \(d_{\operatorname{QB}(W)}(u\Rightarrow v)\)_, the length of a shortest path from_ \(u\) _to_ \(v\) _in the quantum Bruhat graph. Then_ \((\omega,\ell(u^{-1}v))\in\operatorname{wts}(u\Rightarrow v)\) _whenever_ \(\omega\geqslant\operatorname{wt}_{\operatorname{QB}(W)}(u\Rightarrow v)\) _and_ \(\omega\in\mathbb{Z}\Phi^{\vee}_{J}\)
_._
2. _If_ \(\omega\in\mathbb{Z}\Phi_{J}^{\,\vee}\) _with_ \(\omega\geq 2\rho_{J}^{\,\vee}\)_, which denotes the sum of positive coroots in_ \(\Phi_{J}^{\,\vee}\)_, we have_ \[(\omega,\ell(u^{-1}v))\in\operatorname{wts}(u\Rightarrow v).\]
Proof.: Assume without loss of generality that the group \(G\) is split. Reducing to the double Bruhat graph of \(W_{J}\) as in Remark 5.11 (d), we may and do assume that \(J=\Delta\).
Let \(B=\langle\omega,2\rho\rangle+1\) and \(B^{\prime}>0\) as in Proposition 6.2. Choose \(x=w\varepsilon^{\mu}\in\widetilde{W}\) to be \(B^{\prime}\)-superregular such that \(\operatorname{LP}(x)=\{u\}\) and \(v=wu\). Let \([b]\in B(G)\) be the \(\sigma\)-conjugacy class containing \(\varepsilon^{u^{-1}\mu-\omega}\), so that \(\nu(b)=u^{-1}\mu-\omega\).
1. By [13, Proposition 4.2], the element \(x\) is cordial. By [13, Theorem 1.1] and [1, Theorem B], we get \(X_{x}(b)\neq\emptyset\) and \[\dim X_{x}(b)=\frac{1}{2}\left(\ell(x)+\ell(u^{-1}v)-\langle\nu(b),2\rho \rangle\right).\] The claim follows.
2. Similar to (a), using [14, Theorem 1.1]. This celebrated result of He shows that if \(\omega\geq 2\rho^{\,\vee}\) and \(\operatorname{supp}(u^{-1}v)=\Delta\), then \(X_{x}(b)\neq\emptyset\) and \[\dim X_{x}(b)=\frac{1}{2}\left(\ell(x)+\ell(u^{-1}v)-\langle\nu(b),2\rho \rangle\right).\] The claim follows again.
The reader who wishes to familiarize themselves more with the combinatorics of double Bruhat graphs may take the challenge and prove the above corollary directly.
We now want to state the main result of this section, describing the nonemptiness pattern and dimensions of affine Deligne-Lusztig varieties associated with sufficiently regular elements \(x\in\widetilde{W}\) and arbitrary \([b]\in B(G)\). We let \(\lambda(b)\in X_{*}(T)_{\Gamma}\) be the \(\lambda\)-invariant as introduced by Hamacher-Viehmann [13, Section 2]. By \(\operatorname{conv}:X_{*}(T)_{\Gamma}\to X_{*}(T)_{\Gamma_{0}}\otimes \mathbb{Q}\), we denote the convex hull map from [12, Section 3.1], so that \(\nu(b)=\operatorname{conv}(\lambda(b))\).
Our regularity condition is given as follows: Decompose the (finite) Dynkin diagram of \(\Phi\) into its connected components, so we have \(\Phi=\Phi_{1}\sqcup\dots\sqcup\Phi_{c}\). Denote \(\theta_{i}\in\Phi_{i}^{+}\) the uniquely determined longest root, and write it as linear combination of simple roots
\[\theta_{i}=\sum_{\alpha\in\Delta}c_{i,\alpha}\alpha.\]
Define the regularity constant \(C\) to be
\[C=1+\max_{i=1,\dots,c}\sum_{\alpha\in\Delta}c_{i,\alpha}\in\mathbb{Z}.\]
With that, we can state our main result as follows.
**Theorem 6.4**.: _Let \(x=w\varepsilon^{\mu}\in\widetilde{W}\) be \(C\)-regular and \([b]\in B(G)\) such that \(\kappa(b)=\kappa(x)\). Write \(\mathrm{LP}(x)=\{v\}\) and denote \(E\) to be either of the following two sets \(E_{1}\) or \(E_{2}\):_
\[E_{1}:= \{e\mid(\omega,e)\in\mathrm{wts}(v\Rightarrow\sigma(wv))\text{ s.th. }\lambda(b)\equiv v^{-1}\mu-\omega\in X_{*}(T)_{\Gamma}\},\] \[E_{2}:= \{e\mid(\omega,e)\in\mathrm{wts}(v\Rightarrow\sigma(wv))\text{ s.th. }\nu(b)=\mathrm{conv}(v^{-1}\mu-\omega)\}.\]
_Then \(X_{x}(b)\neq\emptyset\) if and only if \(E\neq\emptyset\). In this case,_
\[\dim X_{x}(b)=\frac{1}{2}\left(\ell(x)+\max(E)-\langle\nu(b),2\rho\rangle- \mathrm{def}(b)\right).\]
_Remark 6.5_.:
1. Since \(\mathrm{conv}(\lambda(b))=\nu(b)\), we have \(E_{1}\subseteq E_{2}\). The inclusion may be strict, and it is a non-trivial consequence of Theorem 6.4 that the two sets have the same maxima.
2. If \(\Phi\) is irreducible, the regularity constant \(C\) is explicitly given as follows:
3. Unlike in Proposition 6.2, we get no information on the number of top dimensional irreducible components. The main advantage of Theorem 6.4 over Proposition 6.2 comes from the different regularity conditions, making Theorem 6.4 more applicable.
4. The unique minimum in \(\mathrm{wts}(v\Rightarrow\sigma(wv))\) from [13, Proposition 5.12] corresponds to the unique maximum in \(B(G)_{x}\). This recovers the formula for the generic Newton point from [16, Proposition 3.1] in the setting of Theorem 6.4.
5. If the difference between \(v^{-1}\mu\) and \(\nu(b)\) becomes sufficiently large, the maximum \(\max(E)\) can be expected to be \(\ell(v^{-1}\sigma(wv))\) (cf. [13, Lemma 5.10] or Corollary 6.3 (b) above) and we recover the notion of virtual dimension from He [15, Section 10]. In fact, one may use Corollary 6.3 (b) to recover [15, Theorem 1.1] in the situation of Theorem 6.4. This line of argumentation is ultimately cyclic, since a special case of [15, Theorem 1.1] was used in the proof of Corollary 6.3 (b). We may however summarize that Corollary 6.3 (b) is the double Bruhat theoretic correspondent of [15, Theorem 1.1]. Similarly, most known results on affine Deligne-Lusztig varieties correspond to theorems on the double Bruhat graph and vice versa.
6. The proof method for Theorem 6.4 is similar to the proof of [15, Proposition 11.5] or equivalently the proof of [15, Theorem 1.1].
Proof of Theorem 6.4.: We assume without loss of generality that the group \(G\) is of adjoint type, following [13, Section 2]. This allows us to find a coweight \(\mu_{v}\in X_{*}(T)_{\Gamma_{0}}\) satisfying for each simple root \(\alpha\in\Delta\) the condition
\[\langle\mu_{v},\alpha\rangle=\Phi^{+}(-v\alpha)=\begin{cases}1,&v\alpha\in \Phi^{-},\\ 0,&v\alpha\in\Phi^{+}.\end{cases}\]
It follows that \(\langle\mu_{v},\beta\rangle\geqslant\Phi^{+}(-v\beta)\) for all \(\beta\in\Phi^{+}\). Define
\[x_{1}:=wv\varepsilon^{v^{-1}\mu-\mu_{v}},\qquad x_{2}=v^{-1}\varepsilon^{v\mu_{ v}}\in\widetilde{W}.\]
By choice of \(\mu_{v}\), we see that \(v^{-1}\mu-\mu_{v}\) is dominant and \((C-1)\)-regular. The above estimate \(\langle\mu_{v},\beta\rangle\geqslant\Phi^{+}(-v\beta)\) implies \(v\in\mathrm{LP}(x_{2})\). Hence \(x=x_{1}x_{2}\) is a length additive product. We obtain
\[T_{x}=T_{x_{1}}T_{x_{2}}\equiv T_{\sigma^{-1}(x_{2})}T_{x_{1}}\pmod{[\mathcal{ H},\mathcal{H}]_{\sigma}}.\]
Define the multiset \(Y\) via
\[T_{\sigma^{-1}(x_{2})}T_{x_{1}}=\sum_{(y,e)\in Y}Q^{e}T_{yx_{1}}\in\mathcal{H}. \tag{6.6}\]
Then each \((y,e)\in Y\) satisfies \(y\leqslant\sigma^{-1}(x_{2})\) in the Bruhat order. Writing \(y=w_{y}\varepsilon^{\mu_{y}}\), we get \(\mu_{y}^{\mathrm{dom}}\leqslant\sigma^{-1}(\mu_{v})\) in \(X_{\ast}(T)_{\Gamma_{0}}\). We estimate
\[\max_{\beta\in\Phi}|\langle\mu_{y},\beta\rangle|=\max_{\beta\in\Phi^{+}} \langle\mu_{y}^{\mathrm{dom}},\beta\rangle=\max_{i}\langle\mu_{y}^{\mathrm{ dom}},\theta_{i}\rangle\leqslant\max_{i}\langle\mu_{v},\theta_{i}\rangle \leqslant C-1,\]
by choice of \(C\). It follows that
\[yx_{1}=w_{y}wv\varepsilon^{v^{-1}\mu-\mu_{v}+(wv)^{-1}\mu_{y}}\]
with \(v^{-1}\mu-\mu_{v}+(wv)^{-1}\mu_{y}\) being dominant. For any dominant coweight \(\lambda\in X_{\ast}(T)_{\Gamma_{0}}\), we can multiply (6.6) by \(T_{\varepsilon^{\lambda}}\) to obtain
\[T_{\sigma^{-1}(x_{2})}T_{x_{1}\varepsilon^{\lambda}}=T_{\sigma^{-1}(x_{2})}T_ {x_{1}}T_{\varepsilon^{\lambda}}=\sum_{(y,e)\in Y}T_{yx_{1}}T_{\varepsilon^{ \lambda}}=\sum_{(y,e)\in Y}T_{yx_{1}\varepsilon^{\lambda}}.\]
In light of Lemma 5.5, we see that the multiset \(Y\) is equal to the multiset \(Y(\sigma^{-1}(x_{2}),wv)\) defined earlier.
For each \((y,e)\in Y\), write \(yx_{1}=\tilde{w}_{y}\varepsilon^{\tilde{\mu}_{y}}\) to define the sets
\[E_{1}(yx_{1}):= \{e\mid(\omega,e)\in\mathrm{wts}(1\Rightarrow\sigma(\tilde{w}_{y} ))\text{ s.th. }\lambda(b)=\tilde{\mu}_{y}-\omega\in X_{\ast}(T)_{\Gamma}\},\] \[E_{2}(yx_{1}):= \{e\mid(\omega,e)\in\mathrm{wts}(1\Rightarrow\sigma(\tilde{w}_{y} ))\text{ s.th. }\nu(b)=\mathrm{conv}(\tilde{\mu}_{y}-\omega)\}.\]
Define \(E(yx_{1})\) to be \(E_{1}(yx_{1})\) or \(E_{2}(yx_{1})\) depending on whether \(E\) was chosen as \(E_{1}\) or \(E_{2}\). By Lemma 5.7 (a), we may write \(\mathrm{wts}(\sigma^{-1}(v)\Rightarrow wv)\) as the additive union of multisets
\[\mathrm{wts}(\sigma^{-1}(v)\Rightarrow wv)\] \[= \bigcup_{(w_{y}\varepsilon^{\mu_{y}},e)\in Y(\sigma^{-1}(x_{2}), wv)}\{(\mu_{v}-(wv)^{-1}\mu_{y}+\omega,e+\ell)\mid(\omega,\ell)\in\mathrm{wts}(1 \Rightarrow w_{y}wv)\}_{m}.\] \[= \bigcup_{(y,e)\in Y}\{(v^{-1}\mu-\tilde{\mu}_{y}+\omega,e+\ell) \mid(\omega,\ell)\in\mathrm{wts}(1\Rightarrow\tilde{w}_{y})\}_{m}. \tag{6.7}\]
Note that the definition of the sets \(E_{1},E_{2},E_{1}(yx_{1}),E_{2}(yx_{1})\) does not change if we apply \(\sigma^{-1}\) to the occurring weights \(\omega\). Hence (6.7) implies
\[E=\bigcup_{(y,e)\in Y}\{e+\ell\mid\ell\in E(yx_{1})\}.\]
By definition of the multiset \(Y\), the class polynomials of \(f_{x,\mathcal{O}}\) for arbitrary \(\sigma\)-conjugacy classes \(\mathcal{O}\subset\widehat{W}\) are given by
\[f_{x,\mathcal{O}}=\sum_{(y,e)\in Y}Q^{e}f_{yx_{1},\mathcal{O}}.\]
By Theorem 6.1, we see that \(X_{x}(b)\neq\emptyset\) if and only if \(X_{yx_{1}}(b)\neq\emptyset\) for some \((y,e)\in Y\). In this case, the dimension of \(X_{x}(b)\) is the maximum of
\[\dim X_{yx_{1}}(b)+\frac{1}{2}\left(\ell(x)-\ell(yx_{1})+e\right),\]
where \((y,e)\) runs through all elements of \(Y\) satisfying \(X_{yx_{1}}(b)\neq\emptyset\).
We see that it suffices to prove the following claim for all \((y,e)\in Y\):
\[X_{yx_{1}}(b)\neq\emptyset\]
if and only if
\[E(yx_{1})\neq\emptyset\]
and in this case, we have \[\dim X_{yx_{1}}(b)=\frac{1}{2}\left(\ell(yx_{1})+\max(E(yx_{1}))-\langle\nu( b),2\rho\rangle-\operatorname{def}(b)\right).\]
Writing \(yx_{1}=\tilde{w}\varepsilon^{\tilde{\mu}}\), we saw above that \(\tilde{\mu}\) is dominant. Applying [13, Theorem 1.2] to the inverse of \(yx_{1}\), or equivalently [14, Theorem 4.2] directly to \(yx_{1}\), we see that the element \(yx_{1}\) is _cordial_ in the sense of [13]. This gives a convenient criterion to check \(X_{yx_{1}}(b)\neq\emptyset\) and to calculate its dimension. We saw in Corollary 6.3 (a) that the multiset \(\operatorname{wts}(1\Rightarrow\sigma(\tilde{w}_{y}))\) must satisfy the analogous conditions. Let us recall these results.
The uniquely determined largest Newton point in \(B(G)_{yx_{1}}=B(G)_{\tilde{w}\varepsilon^{\tilde{\mu}}}\) is \(\operatorname{avg}_{\sigma}(\tilde{\mu})\), cf. [14, Theorem 4.2].
Let \(J^{\prime}=\operatorname{supp}(\tilde{w})\subseteq\Delta\) be the support of \(\tilde{w}\) and \(J=\bigcup_{i}\sigma^{i}(J^{\prime})=\operatorname{supp}_{\sigma}(\tilde{w})\) its \(\sigma\)-support. Let \(\pi_{J}:X_{*}(T)_{\Gamma_{0}}\to X_{*}(T)_{\Gamma_{0}}\otimes\mathbb{Q}\) be the corresponding function from [1, Definition 3.2] or equivalently [10, Section 3.1]. Then \(\pi_{J}(\tilde{\mu})\) is the unique smallest Newton point occurring in \(B(G)_{yx_{1}}\), cf. [16, Theorem 1.1].
The condition of cordiality [13, Theorem 1.1] implies that \(B(G)_{yx_{1}}\) contains all those \([b]\in B(G)\) with the correct Kottwitz point \(\kappa(b)=\kappa(yx_{1})=\kappa(x)\) and Newton point
\[\pi_{J}(\tilde{\mu})\leq\nu(b)\leq\operatorname{avg}_{\sigma}(\tilde{\mu}).\]
In this case, we know moreover from [13, Theorem 1.1] that \(X_{yx_{1}}(b)\) is equidimensional of dimension
\[\dim X_{yx_{1}}(b)=\frac{1}{2}\left(\ell(yx_{1})+\ell(\tilde{w})-\langle\nu( b),2\rho\rangle-\operatorname{def}(b)\right).\]
This condition on Newton points is equivalent to \(\operatorname{avg}_{\sigma}(\tilde{\mu})-\nu(b)\) being a non-negative \(\mathbb{Q}\)-linear combination of simple coroots of \(J\), or equivalently \(\tilde{\mu}-\lambda(b)\) being a non-negative \(\mathbb{Z}\)-linear combination of these coroots.
On the double Bruhat side, note that \((\omega,e)\in\operatorname{wts}(1\Rightarrow\tilde{w})\) implies \(\omega\in\mathbb{Z}\Phi^{\vee}_{J^{\prime}}\) and \(e\leqslant\ell(\tilde{w})\). This can either be seen directly, similar to the proof of [13, Lemma 5.10], or as in Corollary 6.3, reducing to [20, Theorem 1.1]. From Corollary 6.3, we know conversely that any \(\omega\geqslant 0\) with \(\omega\in\mathbb{Z}\Phi^{\vee}_{J^{\prime}}\) satisfies \((\omega,\ell(\tilde{w}))\in\operatorname{wts}(1\Rightarrow\tilde{w})\).
Comparing these explicit descriptions of \(\dim X_{yx_{1}}(b)\) and \(\max(E(yx_{1}))\), we conclude the claim \((*)\). This finishes the proof.
## 7 Outlook
We saw that the weight multiset of the double Bruhat graph can be used to describe the geometry of affine Deligne-Lusztig varieties in many cases. This includes the case of superparabolic elements \(x\) together with sufficiently large integral \([b]\in B(G)\) in split groups [13, Theorem 6.7], as well as the case of sufficiently regular elements \(x\) together with arbitrary \([b]\in B(G)\) (Theorem 6.4). One may ask how much the involved regularity constants can be improved, and whether a unified theorem simultaneously generalizing [13, Theorem 6.7] and Theorem 6.4 can be found. Towards this end, we propose a number of conjectures that would generalize our theorems in a straightforward manner.
Let \(x=w\varepsilon^{\mu}\in\widetilde{W}\) and \([b]\in B(G)\). If \(X_{x}(b)\neq\emptyset\), define the integer \(D\in\mathbb{Z}_{\geqslant 0}\) such that
\[\dim X_{x}(b)=\frac{1}{2}\left(\ell(x)+D-\langle\nu(b),2\rho\rangle-\operatorname {def}(b)\right),\]
and denote the number of \(J_{b}(F)\)-orbits of top dimensional irreducible components in \(X_{x}(b)\) to be \(C\in\mathbb{Z}_{\geqslant 1}\). We would like to state the following conjectures. The first conjecture makes a full prediction of the nonemptiness pattern and the dimension for elements \(x\) in the shrunken Weyl chamber and arbitrary \([b]\in B(G)\).
**Conjecture 7.1**.: _Suppose that \(x\) lies in a shrunken Weyl chamber, i.e. \(\operatorname{LP}(x)=\{v\}\) for a uniquely determined \(v\in W\). Define \(E\) to be either of the multisets_
\[E_{1}:= \{e\mid(\omega,e)\in\operatorname{wts}(v\Rightarrow\sigma(wv))\text { s.th. }\lambda(b)\equiv v^{-1}\mu-\omega\in X_{*}(T)_{\Gamma}\}_{m},\] \[E_{2}:= \{e\mid(\omega,e)\in\operatorname{wts}(v\Rightarrow\sigma(wv)) \text{ s.th. }\nu(b)=\operatorname{conv}(v^{-1}\mu-\omega)\}_{m}.\]
_We make the following predictions._
1. \(X_{x}(b)\neq\emptyset\) _if and only if_ \(E\neq\emptyset\) _and_ \(\kappa(x)=\kappa(b)\in\pi_{1}(G)_{\Gamma}\) _(the latter condition on Kottwitz points is automatically satisfied if_ \(E=E_{1}\)_)._
2. _If_ \(X_{x}(b)\neq\emptyset\)_, then_ \(\max(E)=D\)_._
3. _If_ \(X_{x}(b)\neq\emptyset\)_, then_ \(C\) _is at most the multiplicity of_ \(D\) _in_ \(E\) _(which may be_ \(+\infty\) _for_ \(E_{2}\)_)._
The multiset \(E_{1}\) is always contained in \(E_{2}\), since \(\nu(b)=\operatorname{conv}(\lambda(b))\). The inclusion may be strict. So in fact we are suggesting two different dimension formulas for shrunken \(x\), and claim that both yield the same answer, which moreover agrees with the dimension.
For sufficiently regular \(x\), Theorem 6.4 shows (a) and (b). Under some strong superregularity conditions, Proposition 6.2 shows (c) with equality. While both proofs can certainly be optimized with regards to the involved regularity constants, proving Conjecture 7.1 as stated will likely require further methods. It is unclear how to show the conjecture e.g. for the particular element \(x=w_{0}\varepsilon^{-2\rho^{\vee}}\), since the proof method for Theorem 6.4 fails.
It is easy to see that Conjecture 7.1 is compatible with many known results on affine Deligne-Lusztig varieties, such as the ones recalled in the introduction of the previous article [13, Theorem 1.2]. By Corollary 6.3, we see that parts (a) and (b) of Conjecture 7.1 hold true for cordial elements \(x\). If \(x\) is of the special form \(x=w_{0}\varepsilon^{\mu}\) with \(\mu\) dominant, then \(x\) is in a shrunken Weyl chamber and we know that (c) holds with equality, cf. [13, Remark 6.11].
Our second conjecture suggests how the double Bruhat graph can be used for elements \(x\) which are not necessarily in shrunken Weyl chambers.
**Conjecture 7.2**.: _Suppose that \([b]\) is integral, i.e. of defect zero._
_Define for each \(v\in\operatorname{LP}(x)\) and \(u\in W\) the multiset_
\[E(u,v):=\{e\mid(\omega,e)\in\operatorname{wts}(u\Rightarrow\sigma(wu)\dashto \sigma(wv))\text{ s.th. }u^{-1}\mu-\omega=\lambda(b)\in X_{*}(T)_{\Gamma}\}_{m}.\]
_Set \(\max\emptyset:=-\infty\) and define_
\[d :=\max_{u\in W}\min_{v\in\operatorname{LP}(x)}\max(E(u,v))\in \mathbb{Z}_{\geqslant 0}\cup\{-\infty\}.\] \[c :=\sum_{u\in W}\min_{v\in W}\left(\text{multiplicity of }d\text{ in }E(u,v)\right)\in\mathbb{Z}_{\geqslant 0}.\]
_We make the following predictions._
1. _If there exists for every_ \(u\in W\) _some_ \(v\in\operatorname{LP}(x)\) _with_ \(E(u,v)=\emptyset\)_, i.e. if_ \(d=-\infty\)_, then_ \(X_{x}(b)=\emptyset\)_._
2. _If_ \(X_{x}(b)\neq\emptyset\)_, then_ \(D\leqslant d\)_._
3. _If_ \(X_{x}(b)\neq\emptyset\) _and_ \(D=d\)_, then_ \(C\leqslant c\)_._
4. _If_ \([b]\) _satisfies the regularity condition_ \(\langle\nu(b),\alpha\rangle\geqslant 1\) _for all_ \(\alpha\in\Phi^{+}\)_, then_ \(X_{x}(b)\neq\emptyset\) _and_ \(D=d\)_._
If the group is split, then [13, Theorem 6.7] proves (a), (b) and (c). Moreover, under some strong superparabolicity assumptions, we get the full conjecture including an equality result for (c). We expect that a similar superparabolicity statement holds true for non-split groups, but it is unclear what the involved regularity constants should be, which is why we did not formulate a precise, computer falsifiable conjecture.
If the element \(x\in\widetilde{W}\) is in a shrunken Weyl chamber with \(\mathrm{LP}(x)=\{v\}\), then the multiset \(E_{1}\) from Conjecture 7.1 is equal to the multiset \(E(v,v)\) from Conjecture 7.2. If we moreover assume that Conjecture 7.1 holds true, then we get parts (a), (b) and (c) of Conjecture 7.2.
Compatibility of Conjecture 7.2 with previously known results is a lot harder to verify. We expect that one does not have to account for all pairs \((u,v)\) as in Conjecture 7.2 to accurately describe nonemptiness and dimension of \(X_{x}(b)\), similar to [13, Theorem 6.7(c)] or Conjecture 7.1. However, we cannot make precise prediction how such a refinement of Conjecture 7.2 should look like in general.
Nonetheless, extensive computer searches did not yield a single counterexample to either conjecture. Most straightforward generalizations of these conjectures, however, can be disproved quickly using such a computer search [11].
_Example 7.3_.: For both conjectures, the estimate on the number of irreducible components is only an upper bound. Indeed, we may consider a split group of type \(G_{2}\). Write \(\alpha_{1}\) for the short simple root and \(\alpha_{2}\) the long simple root. Let \(s_{1},s_{2}\in W\subseteq\widetilde{W}\) be the corresponding simple reflections and \(s_{0}\in\widetilde{W}\) the remaining simple affine reflection. Then consider
\[x=s_{2}s_{1}s_{2}s_{1}s_{0}s_{2}s_{1}s_{0}s_{2}s_{1}s_{2}s_{1}=s_{2}s_{1}s_{2} s_{1}s_{2}s_{1}\varepsilon^{\mu}\]
with \(\langle\mu,\alpha_{1}\rangle=0,\langle\mu,\alpha_{2}\rangle=1\), so that \(\mathrm{LP}(x)=\{1\}\). For \(b=[1]\) being the basic \(\sigma\)-conjugacy class, we get \(C=1\) whereas both conjectures give \(3\) as upper bound.
_Example 7.4_.: One may ask whether it is possible to find for each non-shrunken \(x\) an element \(v\in\mathrm{LP}(x)\) such that the analogous statement of Conjecture 7.1 holds true. While this is certainly possible, say, for cordial elements \(x\), such a statement cannot be expected to hold true in general. We may choose \(G=\mathrm{GL}_{4}\) and \(x=s_{3}s_{2}s_{1}\varepsilon^{\mu}\) where the pairing of \(\mu\) with the simple roots \(\alpha_{1},\alpha_{2},\alpha_{3}\) is given by \(1,-1,1\) respectively. Then \(\mathrm{LP}(x)=\{s_{2},s_{2}s_{3}\}\). For \([b]\) basic, we have \(D=3\), yet the analogous statements of Conjecture 7.1 for both possible choices of \(v\) in \(\mathrm{LP}(x)\) would predict \(D=5\).
_Example 7.5_.: Conjecture 7.2 should not be expected to hold for non-integral \([b]\). Indeed, it suffices to choose \(G=\mathrm{GL}_{3}\) and \(x=w\varepsilon^{\mu}\) to be of length zero such that the action of \(x\) on the affine Dynkin diagram is non-trivial. Let \([b]=[x]\), so that \(B(G)_{x}=\{[b]\}\). Define
\[E(u,v):=\{e\mid(\omega,e)\in\mathrm{wts}(u\Rightarrow wu\dasharrow wv)\text{ s.th. }u^{-1}\mu-\omega=\lambda(b)\in X_{*}(T)_{\Gamma}\}_{m}\]
for \(u,v\in W=\mathrm{LP}(x)\). Since \(w\neq 1\), we have \(E(u,v)=\emptyset\) whenever \(v=uw_{0}\). A statement analogous to Conjecture 7.2 (a) would thus predict that \(X_{x}(b)=\emptyset\), which is absurd.
_Example 7.6_.: The converse of Conjecture 7.2 (a) should not be expected to hold, even for \([b]\) basic. The construction in Conjecture 7.2 can fail to detect \((J,w,\delta)\)-alcove elements, hence falsely predict a non-empty basic locus. For a concrete example, one may choose \(G=\mathrm{GL}_{3}\) and \(x\) to be the shrunken element \(x=s_{2}\varepsilon^{\rho^{\vee}}\), with \(\langle\rho^{\vee},\alpha\rangle=1\) for all simple roots \(\alpha\). Then \(\mathrm{LP}(x)=\{1\}\). For \(u=s_{1}s_{2}\) and \([b]=[1]\) basic, we have \(E(u,1)\neq\emptyset\).
_Example 7.7_.: In Conjecture 7.2 (d), the regularity condition \(\langle\nu(b),\alpha\rangle\geqslant 1\) cannot be weakened to \(\langle\nu(b),\alpha\rangle>0\). We may consider a group \(G\) of type \(A_{4}\) such that the Frobenius acts as the unique non-trivial automorphism of the finite Dynkin diagram \(A_{4}\). Choose
\[x=s_{3}s_{4}s_{2}s_{3}s_{1}\varepsilon^{\mu}\]
where the pairing of \(\mu\) with the simple roots \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\) are given by \(-1,-3,-2,1\) respectively.
Let \(\lambda\in X_{\bullet}(T)_{\Gamma_{0}}\) be a coweight such that the pairings with the simple roots are given by \(1,1,0,0\) respectively, and \([b]=[\varepsilon^{\lambda}]\). Then \(\nu(b)\) satisfies this weakened regularity condition, namely we have \(\langle\nu(b),\alpha\rangle=\frac{1}{2}\) for all simple roots \(\alpha\). In this case, the dimension prediction in Conjecture 7.2 yields \(d=7\) (predicting \(\dim X_{x}(b)\) to be \(15\)), whereas the actual dimension is \(\dim X_{x}(b)=14\), i.e. \(D=5\).
In particular, one should not expect to always have equality in part (b) of Conjecture 7.2. Even for \([b]\) basic, the above example yields \(D=5<d=7\).
|
2305.11687 | A New LBV Candidate in M33 | The evolutionary relationships and mechanisms governing the behavior of the
wide variety of luminous stars populating the upper H-R diagram are not well
established. Luminous blue variables (LBVs) are particularly rare, with only a
few dozen identified in the Milky Way and nearby galaxies. Since 2012, the
Barber Observatory Luminous Stars Survey has monitored more than 100 luminous
targets in M33, including M33C-4119 which has recently undergone photometric
and spectroscopic changes consistent with an S Doradus eruption of an LBV. | John C. Martin, Roberta M. Humphreys, Kerstin Weis, Dominik J. Bohmans | 2023-05-19T14:08:23Z | http://arxiv.org/abs/2305.11687v1 | # A New LBV Candidate in M33
###### Abstract
The evolutionary relationships and mechanisms governing the behavior of the wide variety of luminous stars populating the upper H-R diagram are not well established. Luminous blue variables (LBVs) are particularly rare, with only a few dozen identified in the Milky Way and nearby galaxies. Since 2012, the Barber Observatory Luminous Stars Survey has monitored more than 100 luminous targets in M33, including M33C-4119 which has recently undergone photometric and spectroscopic changes consistent with an S Doradus eruption of an LBV.
Massive stars(732),Luminous blue variable stars(944),Triangulum Galaxy(1712) 0000-0002-4070-3870]John C. Martin
0000-0002-4882-7888]Roberta M. Humphreys
0000-0002-4882-7888]Kerstin Weis
0000-0002-4882-7888]Dominik J. Bomans
M33C-4119 (LGGS J013312.81+303012.6) is a new Luminous Blue Variable (LBV) candidate discovered in an on-going survey of M33 (Martin & Humphreys, 2017). LBVs are rare and difficult to identify due to the infrequency of their characteristic photometric and spectroscopic variability. Only a few dozen have been identified in the Milky Way and nearby galaxies (Humphreys et al., 2016). Their eruptive mechanism and connection to other classes of massive stars including B[e] supergiants, warm hypergiants, and supernova impostors are poorly understood.
M33C-4119 is a luminous OB-supergiant (Humphreys et al., 2014) in an outer spiral arm of M33 numbered B78 in Association 127 (Humphreys & Sandage, 1980) about 12 arc minutes southwest of the galaxy center. The designation M33C-4119 is from Burggraf (2015). It is also identified as IFM_B 333 by Ivanov et al. (1993) and J013312.81+303012.6 by Massey et al. (2016). Before 2012 it exhibited 0.1 - 0.2 mag alpha-Cygni type variations (Hartman et al., 2006; Chambers et al., 2016; Burggraf, 2015) including measurements of digitized photographic plates as far back as 1968 (Gottschling, 2017). A few measurements \(\sim\)0.5 mag brighter than average were recorded 2001-2002 (Burggraf, 2015; Massey et al., 2016).
From 2012 to the present its brightness has been recorded several times a year by a BVRI CCD survey of M33 conducted with the University of Illinois Springfield Barber Observatory 20-inch telescope(Martin & Humphreys, 2017). From 2012 - 2018 it brightened \(\approx 1.0\) mag in all observed bands, including a rapid rise beginning in 2017. The initial stage, 2012-2016, is confirmed by CCD photometry recorded by the Tautenburg Landessterwarte 2-m telescope (Burggraf, 2015; Gottschling, 2017). It maintained peak brightness (\(\approx 0.5\) magnitude brighter than seen previously) for more than a year before a more rapid decline to its minimum brightness by 2021 (Figure 1 and Table 1). Throughout the event the star's color was correlated with its change in brightness as expected during an S Doradus eruption, being significantly bluer/hotter when fainter and redder/cooler when visually brighter.
Spectra of M33C-4119 were obtained with the MMT Hectospec Multi-Object Spectrograph in 2010.76 and 2014.88 with the 600-line grating in the blue and red covering 3600 to 8300A (Figure 2 and 3) and also with the LBT MODS spectrograph on 2011.75 (Humphreys et al., 2013, 2014, 2017). The 2010 and 2011 spectra closely resemble each other. The 2014 spectrum shows a significant change and a shift to a cooler apparent temperature. A fourth spectrum recorded in 2007.76 by Burggraf (2015) using the CAFOS spectrograph on the Calar Alto 2.2-m telescope is similar in appearance to the 2010 and 2011 spectra with lower resolution and a higher noise level which affects the clear detection of weaker emission and absorption features.
Both the 2007 and 2010 spectra show a hot supergiant with a stellar wind and mass loss. The H\(\alpha\) and H\(\beta\) emission lines have prominent electron scattering wings in both spectra, and in the 2010 spectrum P Cygni absorption minima are present at H\(\beta\) and H\(\gamma\). In both 2007 and 2010, strong He I emission is present at \(\lambda\) 5876, 6678 and 7065. In the 2010 spectrum other He I lines are present in absorption. Strong absorption lines of Si III, N II and O II are present along with a weak Mg II \(\lambda\) 4481 line. The absorption lines suggest an early B-type supergiant of spectral type B2 - B3 when the star was at minimum brightness.
In 2014 when the star was halfway through its period of brightening, the He I emission lines are replaced by absorption and the He I lines previously present in absorption are weaker. Absorption lines of Ca II K, the Na I D lines and Mg II are significantly stronger, indicating a shift to cooler temperatures consistent with a late B-type spectral type (\(\approx\) B8).
To estimate the star's total bolometric luminosity and place on an HR Diagram, we determined the visual extinction. Since M33C-4119 has strong emission lines, we adopt \(A_{V}=1.10\) from two nearby OB stars. Although we lack a spectrum at maximum light, we argue that the observed shift in color supports an equivalent late A or F spectral type with little or no bolometric correction. A maximum V \(\approx\) 16.8 mag at a distance modulus of 24.5 mag (Scowcroft et al., 2009) implies M\({}_{v}=M_{bol}\approx\) -8.8 (Figure 4. This is also consistent with the luminosity estimated from the spectral types and brightness observed in 2010 and 2014.
LBVs are defined by S Doradus eruptive episodes characterized by a 1-2 mag increase in visual brightness accompanied by an apparent shift to cooler temperatures and change in spectral type to late-A to F type with little or no appreciable change in luminosity. Many hot supergiants have emission lines and the B[e] supergiants are spectroscopically like LBVs, thus observing an S Dor event is the only way to confirm a star is an LBV. The brightening of M33C-4119 is consistent with an S Dor eruption including the time scale, the rise in visual brightness, the reddening of the colors
near peak brightness, and the shift to later spectral type during the brightening. Although there is no spectrum at maximum, the case is compelling that M33C-4119 is an LBV.
Figure 1: The V magnitude and colors of M33C-4119 (LGGS J013312.81+303012.7). The triangles are the LGGS (Massey et al., 2016). The open squares with error bars are Burggraf (2015) and Gottschling (2017) CCD photometry from the Tautenburg Landessterwarte 2-m telescope. The crosses are PanSTARRS Sloan g (Chambers et al., 2016). The filled circles with error bars are Martín & Humphreys (2017) and this work. The dashed horizontal lines in the color plots note the average value. The dashed vertical lines mark the times of the spectra recorded in 2010 and 2014.
\begin{table}
\begin{tabular}{c c c c} \hline \hline MJD & V & (B-V) & (V-I) \\ \hline
56140.40 & \(17.84\pm 0.06\) & & \\
56599.22 & \(17.71\pm 0.07\) & \(0.03\pm 0.09\) & \\
57035.14 & \(17.42\pm 0.04\) & \(0.15\pm 0.06\) & \\
57310.16 & \(17.28\pm 0.05\) & \(0.11\pm 0.06\) & \\
57406.10 & \(17.26\pm 0.05\) & \(0.35\pm 0.07\) & \(0.35\pm 0.09\) \\
57634.41 & \(17.31\pm 0.07\) & \(0.04\pm 0.11\) & \\
57638.40 & \(17.33\pm 0.05\) & \(0.11\pm 0.06\) & \(0.34\pm 0.10\) \\
57964.40 & \(17.57\pm 0.04\) & & \(0.21\pm 0.09\) \\
57988.43 & \(17.31\pm 0.06\) & & \\
58043.15 & \(17.16\pm 0.04\) & \(0.15\pm 0.06\) & \(0.38\pm 0.10\) \\
58073.22 & \(17.28\pm 0.05\) & \(0.00\pm 0.16\) & \\
58108.13 & \(17.44\pm 0.05\) & \(0.12\pm 0.06\) & \(0.29\pm 0.09\) \\
58316.35 & \(17.18\pm 0.04\) & \(0.20\pm 0.06\) & \\
58339.40 & \(16.97\pm 0.05\) & \(0.14\pm 0.08\) & \\
58373.15 & \(16.99\pm 0.06\) & \(0.26\pm 0.08\) & \\
58375.16 & \(16.88\pm 0.04\) & & \(0.44\pm 0.10\) \\
58433.02 & \(16.87\pm 0.05\) & & \(0.32\pm 0.11\) \\
58673.38 & \(16.86\pm 0.03\) & \(0.28\pm 0.04\) & \\
58695.39 & \(17.00\pm 0.04\) & & \\
58696.41 & \(17.02\pm 0.05\) & \(0.14\pm 0.09\) & \\
58750.20 & \(16.87\pm 0.04\) & \(0.23\pm 0.05\) & \(0.40\pm 0.09\) \\
58757.17 & \(16.86\pm 0.05\) & & \\
58779.18 & \(16.81\pm 0.04\) & \(0.21\pm 0.05\) & \\
58784.14 & \(16.86\pm 0.04\) & & \(0.50\pm 0.09\) \\
58812.20 & \(16.87\pm 0.06\) & & \\
59059.40 & \(17.14\pm 0.06\) & & \(0.40\pm 0.10\) \\
59081.34 & \(17.19\pm 0.05\) & \(0.13\pm 0.06\) & \(0.42\pm 0.09\) \\
59082.38 & \(17.22\pm 0.05\) & \(0.16\pm 0.06\) & \\
59161.14 & \(17.36\pm 0.04\) & & \(0.38\pm 0.10\) \\
59171.15 & \(17.34\pm 0.05\) & \(0.09\pm 0.08\) & \(0.37\pm 0.09\) \\
59227.14 & \(17.55\pm 0.05\) & \(0.09\pm 0.07\) & \(0.31\pm 0.09\) \\
59415.39 & \(17.81\pm 0.07\) & & \\
59441.42 & \(17.78\pm 0.06\) & & \\
59442.41 & \(17.80\pm 0.05\) & & \\
59463.33 & \(17.76\pm 0.06\) & \(0.06\pm 0.08\) & \(0.14\pm 0.10\) \\
59522.13 & \(17.63\pm 0.05\) & & \(0.09\pm 0.10\) \\
59525.12 & \(17.65\pm 0.05\) & & \\
59546.20 & \(17.59\pm 0.05\) & \(0.10\pm 0.06\) & \(0.33\pm 0.09\) \\
59551.14 & \(17.61\pm 0.06\) & & \(0.07\pm 0.11\) \\
59583.20 & \(17.59\pm 0.04\) & & \\
59608.14 & \(17.61\pm 0.05\) & \(0.07\pm 0.09\) & \\ \hline \end{tabular} Note. – From survey of luminous stars in M33 described in Martin & Humphreys (2017).
\end{table}
Table 1: Recent Photometry of M33C-4119
The UIS Barber Observatory survey of luminous stars in M33 was initiated under and supported by NSF grant AST-1108890 with additional support from the University of Illinois Springfield Henry R. Barber Astronomy Endowment funded by the people of Central Illinois.
We also thank Brigita Burggraf and Niels Gottschling for the photometry and spectroscopy they contributed.
This work also made use of the Pan-STARRS1 Surveys (PS1) and the PS1 public science archive which has been made possible through contributions of a number of organizations credited in Chambers et al. (2016).
|
2310.15657 | Testing the Limits: Unusual Text Inputs Generation for Mobile App Crash
Detection with Large Language Model | Mobile applications have become a ubiquitous part of our daily life,
providing users with access to various services and utilities. Text input, as
an important interaction channel between users and applications, plays an
important role in core functionality such as search queries, authentication,
messaging, etc. However, certain special text (e.g., -18 for Font Size) can
cause the app to crash, and generating diversified unusual inputs for fully
testing the app is highly demanded. Nevertheless, this is also challenging due
to the combination of explosion dilemma, high context sensitivity, and complex
constraint relations. This paper proposes InputBlaster which leverages the LLM
to automatically generate unusual text inputs for mobile app crash detection.
It formulates the unusual inputs generation problem as a task of producing a
set of test generators, each of which can yield a batch of unusual text inputs
under the same mutation rule. In detail, InputBlaster leverages LLM to produce
the test generators together with the mutation rules serving as the reasoning
chain, and utilizes the in-context learning schema to demonstrate the LLM with
examples for boosting the performance. InputBlaster is evaluated on 36 text
input widgets with cash bugs involving 31 popular Android apps, and results
show that it achieves 78% bug detection rate, with 136% higher than the best
baseline. Besides, we integrate it with the automated GUI testing tool and
detect 37 unseen crashes in real-world apps from Google Play. | Zhe Liu, Chunyang Chen, Junjie Wang, Mengzhuo Chen, Boyu Wu, Xing Che, Dandan Wang, Qing Wang | 2023-10-24T09:10:51Z | http://arxiv.org/abs/2310.15657v1 | Testing the Limits: Unusual Text Inputs Generation for Mobile App Crash Detection with Large Language Model
###### Abstract.
Mobile applications have become a ubiquitous part of our daily life, providing users with access to various services and utilities. Text input, as an important interaction channel between users and applications, plays an important role in core functionality such as search queries, authentication, messaging, etc. However, certain special text (e.g., -18 for Font Size) can cause the app to crash, and generating diversified unusual inputs for fully testing the app is highly demanded. Nevertheless, this is also challenging due to the combination of explosion dilemma, high context sensitivity, and complex constraint relations. This paper proposes InputBlaster which leverages the LLM to automatically generate unusual text inputs for mobile app crash detection. It formulates the unusual inputs generation problem as a task of producing a set of test generators, each of which can yield a batch of unusual text inputs under the same mutation rule. In detail, InputBlaster leverages LLM to produce the test generators together with the mutation rules serving as the reasoning chain, and utilizes the in-context learning schema to demonstrate the LLM with examples for boosting the performance. InputBlaster is evaluated on 36 text input widgets with cash bugs involving 31 popular Android apps, and results show that it achieves 78% bug detection rate, with 136% higher than the best baseline. Besides, we integrate it with the automated GUI testing tool and detect 37 unseen crashes in real-world apps from Google Play.
2023Acmcopy of ACM ISSI 978-x-xxxxxx-x/YY/MM...$15.00[https://doi.org/10.1145/mnmnn.mmn](https://doi.org/10.1145/mnmnn.mmn)
2
## 1. Introduction
Mobile applications (apps) have become an indispensable component of our daily lives, enabling instant access to a myriad of services, information, and communication platforms. The increasing reliance on these applications necessitates a high standard of quality and performance to ensure user satisfaction and maintain a competitive edge in the fast-paced digital landscape. The ubiquity of mobile applications has led to a constant need for rigorous testing and validation to ensure their reliability and resilience against unexpected user inputs.
Text input plays a crucial role in the usability and functionality of mobile applications, serving as a primary means for users to interact with and navigate these digital environments [43, 44]. From search queries and form submissions to instant messaging and content creation, text input is integral to the core functionality of numerous mobile applications across various domains. The seamless handling of text input is essential for delivering a positive user experience, as it directly impacts the ease of use, efficiency, and overall satisfaction of the users.
Given the unexpected input, the program might suffer from memory leakage, data corruption, falling into the dead loop, resulting in the application stuck, crash, or other serious issues [14, 27, 28, 63]. Even worse, these buggy texts can only demonstrate a tiny difference from the normal text, or they themselves are normal text in other contexts, which makes the issue easily occur and difficult to spot. There has been a fair amount in the news about the crash of iOS and Android systems caused by a special text input [1], which has greatly affected people's daily lives. For example, in July 2020, a specific character of the Indian language caused iOS devices constantly crash. It has affected a wide range of iOS applications, including iMessage, WhatsApp, and Facebook Messenger [2], and as long as certain text inputs contain the character, these apps would crash.
Taken in this sense, automatically generating unusual inputs for fully testing the input widgets and uncovering bugs is highly demanded. Existing automated GUI testing techniques focus on generating the valid text input for passing the GUI page and conducting the follow-up page exploration [6, 8, 27, 43, 44, 62, 63], e.g., QTypist [44] used GPT-3 to generate semantic input text to improve the coverage of the test. They could not be easily adapted to this task, since the unusual inputs can be more diversified and follow different rationales from the valid inputs. There are also studies targeting at generating strings that violate the constraints
(e.g., string length) with heuristic analysis or finite state automaton techniques (Sundundhi et al., 2017; Wang et al., 2018; Wang et al., 2019). Yet they are designed for specific string functions like concatenation and replacement, and could not be generalized in this task.
Nevertheless, it is very challenging for the automatic generation of diversified unusual inputs. The first challenge is the combination explosion. There can be numerous input formats including text, number, date, time, currency, and innumerable settings, e.g., different character sets, languages and text lengths, which makes it quite difficult if not impossible to enumerate all these variants. The second challenge is context sensitivity. The unusual inputs should closely relate to the context of the input widgets to effectively trigger the bug, e.g., a negative value for font size (as shown in Figure 1), an extremely large number to potentially violate the widget for people's height. The third challenge is the constraint relation within and among the input widgets. The constraints can be that a widget only accepts pure numbers (without characters), or the sum of item values smaller/bigger than the total (as shown in Figure 1), which requires an exact understanding of the related widgets and these constraints so as to generate targeted variation. What's more difficult is that certain constraints only appear when interacting with the apps (i.e., dynamic hints in terms of the incorrect texts), and static analysis cannot capture these circumstances.
Large Language Models (LLMs) (K
### Motivational Study
#### 2.1.1. **Data Collection**
The dataset is collected from one of the largest Android GUI datasets Rico (Koo et al., 2018), which has a great number of Android GUI screenshots and their corresponding view hierarchy files (Zhu et al., 2019; Zhu et al., 2019). These apps belong to diversified categories such as news, entertainment, medical, etc. We analyze the view hierarchy file according to the package name and extract the GUI page belonging to the same app. A total of 7,136 apps with each having more than 3 GUI pages are extracted. For these apps, we first randomly select 136 apps with 506 GUI pages and check their text inputs through view hierarchy files. We summarize a set of keywords that indicate the apps have text inputs widgets (Zhu et al., 2019), e.g., _EditText_, _hint-text_, _AutoCompleteTextView_, _etc_. We then use these keywords to automatically filter the view hierarchy files from the remaining 7,000 apps, and obtain 5,761 candidate apps with at least one potential text input widget. Four authors then manually check them to ensure that they have text inputs until a consensus is reached. In this way, we finally obtain 5,013 (70.2%) apps with at least one text input widget, and there are 3,723 (52.2%) apps having two or more text input widgets. Please note that there is no overlap with the evaluation dataset.
#### 2.1.2. **The Constraint Categories of Text Inputs**
We randomly select 2000 apps with text inputs and conduct manual categorization to derive the constraint types of input widgets. Following the open coding protocol (Zhu et al., 2019), two authors individually examine the content of the text input, including the app name, activity name, input type and input content. Then each annotator iteratively merges similar codes, and any disagreement of the categorization will be handed over to the third experienced researcher for double checking. Finally, we come out with a categorization of the constraints within (intra-widget) and among the widgets (inter-widget), with details summarized in Figure 2.
**Intra-widget constraint.** Intra-widget constraints depict the requirements of a single text input, e.g., a widget for a human's height can only input the non-negative number. There are explicit and implicit sub-types. The former accounts for 63%, which manifests as the requirement to display input directly on the GUI page. And the latter account for 37%, mainly manifested as the feedback when incorrect text input is received, e.g., after inputting a simple password, the app would remind the user "at least one upper case character (A-Z) is required" as demonstrated in Figure 2.
**Inter-widget constraint.** Inter-widget constraints depict the requirements among multiple text input widgets on a GUI page, for example, the diastolic pressure should be less than systolic pressure as shown in Figure 2.
**Summary.** As demonstrated above, the text input widgets are quite common in mobile apps, e.g., 70.2% apps with at least one such widget. Furthermore, considering the diversity of inputs and contexts, it would require significant efforts to manually build a complete set of mutation rules to fully test an input widget, and the automated technique is highly demanded. This confirms the popularity of text inputs in mobile apps and the complexity of it for full testing, which motivates us to automatically generate a batch of unusual text inputs for effective testing and bug detection.
### Background of LLM and In-context Learning
The target of this work is to generate the input text, and the Large Language Model (LLM) trained on ultra-large-scale corpus can understand the input prompts (sentences with prepending instructions or a few examples) and generate reasonable text. When pre-trained on billions of samples from the Internet, recent LLMs (like Chat-GPT (Zhu et al., 2019), GPT-3 (Koo et al., 2018) and T5 (Zhu et al., 2019)) encode enough information to support many natural language processing tasks (Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019).
Tuning a large pre-trained model can be expensive and impractical for researchers, especially when limited fine-tuned data is available for certain tasks. In-context Learning (ICL) (Liu et al., 2019; Zhu et al., 2019; Zhu et al., 2019) offers a new alternative that uses Large Language Models to perform downstream tasks without requiring parameter updates. It leverages input-output demonstration in the prompt to help the model learn the semantics of the task. This new paradigm has achieved impressive results in various tasks, including code generation and assertion generation.
## 3. Approach
This paper aims at automatically generating a batch of unusual text inputs which can possibly make the mobile apps crash. The common practice might directly produce the target inputs with LLM as existing studies in valid input generation (Zhu et al., 2019) and fuzzing deep learning libraries (Zhu et al., 2019; Zhu et al., 2019). Yet, this would be quite inefficient for our task, because each interaction with the LLM requires a few seconds waiting for the response and consumes lots of energy. Instead, this paper proposes to produce the test generators (a code snippet) with LLM, each of which can generate a batch of unusual text inputs
Figure 2. The category of constraints.
under the same mutation rule (e.g., insert special characters into a string), as demonstrated in Figure 4
To achieve this, we propose InputBlaster which leverages LLM to produce the test generators together with the mutation rules which serve as the reasoning chains for boosting the performance, and each test generator then automatically generates a batch of unusual text inputs, as shown in Figure 3. In detail, given a GUI page with text input widgets and its corresponding view hierarchy file, we first leverage LLM to generate the valid text input which can pass the GUI page (Sec 3.1). We then leverage LLM to produce the test generator which can generate a batch of unusual text inputs, and simultaneously we also ask the LLM to output the mutation rule which serves as the reasoning chain for guiding the LLM in making the effective mutations from valid inputs (Sec 3.2). To further boost the performance, we utilize the in-context learning schema to provide useful examples when querying the LLM, from online issue reports and historical running records (Sec 3.3).
### Prompt Generation for Valid Input
InputBlaster first leverages LLM to generate the valid input which will serve as the target towards which the following mutation can be conducted. The context information relates to the input widgets and its belonged GUI page can provide important clues about what the valid input should be, therefore we input this information into LLM (in Section 3.1.1). In addition, we also include the dynamic feedback information when interacting with the input widgets (in Section 3.1.2), and the constraint categories we summarized in the previous section (in Section 3.1.3) to improve the performance. Furthermore, besides the valid text input, we also ask LLM to output its inferred constraints for generating the valid input which will facilitate the approach to generating the mutation rules in the next section. We summarize all the extracted information with examples in Table 1.
#### 3.1.1. **Context Extraction**
The context information is extracted from the view hierarchy file, which is easily obtained by automated GUI testing tools (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). As shown in Table 1, we extract the text-related field of the input widget which indicates how the valid input should be. In detail, we extract the "hint text", "resource id", and "text" fields of the input widget, and utilize the first non-empty one among the above three fields.
We also extract the activity name of the GUI page and the mobile app name, and this global context further helps refine the understanding of the input widget. In addition, we extract the local context of the input widget (i.e., from nearby widgets) to provide thorough viewpoints and help clarify the meaning of the widget. The candidate information source includes the parent node widgets, the leaf node widget, widgets in the same horizontal axis, and fragment of the current GUI page. For each information source, we extract the "text" field (if it is empty, use the "resource-id" field), and concatenate them into the natural-language description with the separator (';').
#### 3.1.2. **Dynamic Hint Extraction**
When one inputs an incorrect text into the app, there are some feedbacks (i.e., dynamic hints) related to the inputs, e.g., the app may alter the users that the password should contain letters and digits. The dynamic hint can further help LLM understand what the valid input should look like.
We extract the dynamic hints via differential analysis which compares the differences of the GUI page before and after inputting the text, and extracts the text field of the newly emerged widgets (e.g., a popup window) in the later GUI page, with examples shown in Figure 2. We also record the text input which makes the dynamic hint happens, which can help the LLM to understand the reason behind it.
#### 3.1.3. **Candidate Constraints Preparation**
Our pilot study in Section 2.1.2 summarizes the categories of constraints within and among the widgets. The information can provide direct guidance for the LLM in generating the valid inputs, for example, the constraint explicitly requires the input should be pure text (without special characters). We provide this list of all candidate constraints described in natural language as in Section 2.1.2 to the LLM.
#### 3.1.4. **Prompt Generation**
With the extracted information, we use three kinds of information to generate prompts for inputting into the LLM, as shown in Table 1. Generally speaking, it first provides the context information and the dynamic hints (if any) of the input widgets, followed by the candidate constraints, and then queries the LLM for the valid input. Due to the robustness of LLM, the generated prompt sentence does not need to fully follow the grammar.
After inputting the prompt, the LLM will return its recommended valid text input and its inferred constraints, as demonstrated in Figure 4
We then input it into the widget, and check whether it can make the app transfer to the new GUI page (i.e., valid input). If
Figure 3. Overview of InputBlaster.
the app fails to transfer, we iterate the process until the valid input is generated.
### Prompt Generation for Test Generator with Mutation Rule
Based on the valid input in the previous section, InputBlaster then leverages LLM to produce the test generator together with the mutation rule. As demonstrated in Figure 4, the test generator is a code snippet that can generate a batch of unusual inputs, while the mutation rule is the natural language described operation for mutating the valid inputs which automatically output by LLM based on our prompt and serves as the reasoning chain for producing the test generator. Note that the mutation rule here is output by LLM.
Each time when a test generator is produced, we can obtain a batch of automatically generated unusual text inputs, and will input them into the text widgets to check whether they have successfully made the mobile app crash. This test execution feedback (in Section 3.2.2) will be incorporated in the prompt for querying the LLM which can enable it more familiar with how the mutation works and potentially produce more diversified outcomes. We also include the inferred constraints in the previous section in the prompt (in Section 3.2.1), since the natural language described explanation would facilitate the LLM in producing effective mutation rules, for example, the inferred constraint is that the input should be in pure text (without special characters) and the LLM would try to insert certain characters to violate the constraint.
#### 3.2.1. **Inferred Constraints and Valid Input Extraction**
We have obtained the inferred constraints and valid input from the output of the LLM in the previous section, here we extract this information from the output message and will input it into the LLM in this section. We design a flexible keyword matching method to automatically extract the description between the terms like 'constraints' and 'the input' and treat it as the inferred constraints, and extract the description after the terms like 'input is' and treat it as the valid input, as demonstrated in Figure 4.
#### 3.2.2. **Test Execution Feedback Extraction**
After generating the unusual text inputs, we input them into the mobile app and check whether they can successfully trigger the app crash. This test execution information will be inputted into the LLM to generate more effective and diversified text inputs. We use the real buggy text inputs and the other unusual inputs (which don't trigger bugs) to prompt LLM in the follow-up generation. The former can remind the LLM to avoid generating duplicate ones, while the latter aims at telling the LLM to consider other mutation rules.
Besides, we also associate the mutation rules with the text input to enable the LLM to better capture its semantic meaning. As shown in Figure 4, we extract the content between the keywords "Mutation rule" and "Test generator" as mutation rules.
#### 3.2.3. **Prompt Generation**
With the extracted information, we design linguistic patterns of the prompt for generating the test generator and mutation rules. As shown in Figure 4, the prompt includes four kinds of information, namely inferred constraints, valid input, text execution feedback, and question. The first three kinds of information are mainly based on the extracted information as described above, and we also add some background illustrations to let the LLM better understand the task, like the inferred constraint in Figure 4. For the question, we first ask the LLM to generate the mutation rule for the valid input, then let it produce a test generator following the mutation rule. Due to the robustness of LLM, the generated prompt sentence does not need to follow the grammar completely.
### Enriching Prompt with Examples
It is usually difficult for LLM to perform well on domain-specific tasks as ours, and a common practice would be employing the in-context learning schema to boost the performance. It provides the LLM with examples to demonstrate what the instruction is, which enables the LLM better understand the task. Following the schema, along with the prompt for the test generator as described in Section 3.2, we additionally provide the LLM with examples of the unusual inputs. To achieve this, we first build a basic example dataset of buggy inputs (which truly trigger the crash) from the issue reports of open-source mobile apps, and continuously enlarge it with the running records during the testing process (in Section 3.3.1). Based on the example dataset, we design a retrieval-based
\begin{table}
\begin{tabular}{p{42.7pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}} \hline \hline
**Id** & **Attribute** & **Description** & **Examples** \\ \hline
11 & ApoSume & The name of testing app & ApoSume = “Wullet” \\
12 & PageName & Activity name of the current GUI page & PageName = “User” \\
13 & InputWidget & The text input widgets denoted with the textual related fields & InputWidget = “Please input user name” \\
14 & NextWWidget & Nearby widgets denoted with their textual related fields & NextWWidget = “your income: [SEP] s” \\
15 & DynamicInt & Feedbacks in terms of an incorrect input & DynamicIntn = “password should contain letters” \\ \hline
16 & CandidateConstraints & Candidate constraints within or among widget(s) summarized in pilot study, organized into intra-constraint(explicit), intra-constraint(implicit), inter-constraint(implicit), and inter-constraint & CandidateConstraints = “intra-constraint(explicit): (s) Pure text (without special characters)...” \\ \hline
**Llogistic patterns of prompts** & & **Examples** \\ \hline
**P1** & Provide context information & We want to test the text input widgets on (PageName) page of (\(AppName\)) app which has (\(\#NumOfInputWidget\)) text inputs. The first input widget is (\(InputWidget\)), its context is (\(InputWidget\)), and its dynamic hint is (\(DynamicInt\)). The second input & We want to test the text input widgets on User page of/Wullet app which has 3 text inputs. The first input widget is “summer, its context is "Wulcome to..., and its dynamic hint is “Demainextlink”. \\ \hline P2 & Provide candidate constraints & There are 5 explicit intra-constraints (\(intra-constraint(explicit)\)) ; 5 implicit intra-constraints (\(intra-constraint(implicit)\)) ; 7 inter-constraints (\(inter-constraint\)) & There are 5 explicit intra-constraints: (1) United string length...? 7 inter-constraints: (1) Depainext and Arruda... \\ \hline P3 & Query LLM & Please generate a valid input based on the above information and provide the inferred constraints of each input. \\ \hline \hline \end{tabular}
\end{table}
Table 1. The example of extracted information and linguistic patterns of prompts for Module 1.
example selection method (in Section 3.3.2) to choose the most suitable examples in terms of an input widget, which further enables the LLM to learn with pertinence.
#### 3.3.1. **Example Dataset Construction**
We collect the buggy text inputs from GitHub and continuously build an example dataset that serves as the basis for in-context learning. For each data instance, as demonstrated in 4 %, it records the buggy text inputs and the mutation rules which facilitate the LLM understanding of how the buggy inputs come from. It also includes the context information of the input widgets which provides the background information of the buggy inputs, and enables us to select the most suitable examples when querying the LLM.
**Mining buggy text inputs from GitHub**. First, we automatically crawl the issue reports and pull requests from the Android mobile apps in GitHub (updated before September 2022). Then we use keyword matching to filter these related to the text inputs (e.g., EditText) and have triggered crashes. Following that, we then employ manual checking to further determine whether there is a crash triggered by the buggy text inputs by running the app. In this way, we obtain 50 unusual inputs and store them in the example dataset (There is no overlap with the evaluation datasets.). We then extract the context information of the input widget with the method in Section 3.1.1, and store it together with the unusual input. Note that, since these buggy inputs don't associate with the mutation rules, we set them as null.
**Enlarging the dataset with buggy text inputs during testing.** We enrich the example dataset with the newly emerged unusual text inputs which truly trigger bugs during InputBlaster runs on various apps. Specifically, for each generated unusual text input, after running it in the mobile apps, we put the ones which trigger crashes into the example dataset. We also add their associated mutation rules generated by the LLM, as well as the context information extracted in Section 3.1.1.
#### 3.3.2. **Retrieval-based Example Selection and In-context Learning**
Examples can provide intuitive guidance to the LLM in accomplishing a task, yet excessive examples might mislead the LLM and cause the performance to decline. Therefore, we design a retrieval-based example selection method to choose the most suitable examples (i.e., most similar to the input widgets) for LLM.
In detail, the similarity comparison is based on the context information of the input widgets. We use Word2Vec (Lightweight word embedding method) (Wang et al., 2019) to encode the context information of each input widget into a 300-dimensional sentence embedding, and calculate the cosine similarity between the input widget and each data instance in the example dataset. We choose the top-K data instance with the highest similarity score, and set K as 5 empirically.
The selected data instances (i.e., examples) will be provided to the LLM in the format of context information, mutation rule, and buggy text input, as demonstrated in Figure 4 %.
### Implementation
We implement InputBlaster based on the ChatGPT which is released on the OpenAI website3. It obtains the view hierarchy file of the current GUI page through UIAutomator (Wang et al., 2019) to extract context information of the input widgets. InputBlaster can be integrated by replacing the text input generation module of the automated GUI testing tool, which automatically extracts the context information and generates the unusual inputs.
Footnote 3: [https://beta.openai.com/docs/models/chatgpt](https://beta.openai.com/docs/models/chatgpt)
## 4. Experiment Design
### Research Questions
* **RQ1: (Bugs Detection Performance)** How effective of InputBlaster in detecting bugs related to text input widgets?
Figure 4. Example of how InputBlaster works.
For RQ1, we first present some general views of InputBlaster for bug detection, and then compare it with commonly-used and state-of-the-art baseline approaches.
* [leftmargin=*,noitemsep,topsep=0pt]
* **RQ2: (Ablation Study)** What is the contribution of the (sub-) modules of InputBlaster for bug detection performance?
For RQ2, We conduct ablation experiments to evaluate the impact of each (sub-) module on the performance.
* [leftmargin=*,noitemsep,topsep=0pt]
* **RQ3: (Usefulness Evaluation)** How does our proposed InputBlaster work in real-world situations?
For RQ3, we integrate InputBlaster with the GUI testing tool to make it automatically explore the app and detect unseen input-related bugs, and issue the detected bugs to the development team.
### Experimental Setup
**For RQ1 and RQ2, we crawl 200 most popular open-source apps from F-Droid (Bradbury et al., 2017), and only keep the latest ones with at least one update after September 2022 (this ensures the utilized apps are not overlapped with the ones in Sec 3.3). Then we collect all their issue reports on GitHub, and use keywords (e.g., EditText) to filter those related to text input. Finally, we obtain 126 issue reports related to 54 apps. Then we manually review each issue report and the mobile app, and filter it according to the following criteria: (1) the app wouldn't constantly crash on the emulator; (2) it can run all baselines; (3) UIAutomator (Zhu et al., 2017) can obtain the view hierarchy file for context extraction; (4) the bug is related to text input widgets; (5) the bug can be manually reproduced for validation; (6) the app is not used in the motivational study or example dataset construction. Please note that we follow the name of the app to ensure that there is no overlap between the datasets. Finally, 31 apps with 36 buggy text inputs remain for further experiments.
We measure the bug detection rate, i.e., the ratio of successfully triggered crashes in terms of all the experimental crashes (i.e., buggy inputs), which is a widely used metric for evaluating GUI testing (Bradbury et al., 2017; Bradbury et al., 2017; Bradbury et al., 2017). Specifically, with the generated unusual input, we design an automated test script to input it into the text input widgets, and automatically run the "submit" operation to check whether a crash occurs. If no, use the script to go back to GUI page with the input widget if necessary, and try the next generated unusual input. As long as a crash is triggered for a text input widget, we treat it as successful bug detection and will stop the generation for this widget. Note that our generated unusual input is not necessarily the same as the one provided in the issue report, e.g., -18 vs. -20, as long as a crash is triggered after entering the unusual inputs, we treat it as a successful crash detection.
For a fair comparison with other approaches, we employ two experimental settings, i.e., 30 attempts (30 unusual inputs) and 30 minutes. We record the bug detection rate under each setting (denoted as "Bug (%)" in Table 2 to Table 5), and also record the actual number of attempts (denoted as "Attempt (#)") and the actual running time (denoted as "Min (#)") when the crash occurs to fully understanding the performance.
**For RQ3, we further evaluate the usefulness of InputBlaster in detecting unseen crash bugs related to text input. A total of 131 apps have been retained. We run Ape (Zhu et al., 2017) (a commonly-used automated GUI testing tool) integrated with InputBlaster, for exploring the mobile apps and getting the view hierarchy file of each GUI page. We use the same configurations as the previous experiments. Once a crash related to text input is spotted, we create an issue report by describing the bug, and report them to the app development team through the issue reporting system or email.**
### Baselines
Since there are hardly any existing approaches for the unusual input generation of mobile apps, we employ 18 baselines from various aspects to provide a thorough comparison.
First, we directly utilize _ChatGPT_(Zhu et al., 2017) as the baseline. We provide the context information of the text input widgets (as described in Table 1 P1), and ask it to generate inputs that can make app crash.
Fuzzing testing and mutation testing can be promising techniques for generating invalid inputs, and we apply several related baselines. Feldt et al. (Feldt et al., 2019) proposed a testing framework called _GoldTest_, which generates diverse test inputs for mobile apps by designing regular expressions and generation strategies. In 2017, they further proposed an invalid input generation method (Zhu et al., 2017) based on probability distribution (PD) parameters and regular expressions, and we name this baseline as _PDinvalid_. Furthermore, we reuse the idea of traditional random-based fuzzing (Feldt et al., 2019; Bradbury et al., 2017), and develop a _RandomFuzz_ for generating inputs for text widgets. In addition, based on the 50 buggy text inputs from the GitHub dataset in Section 3.3.1, we manually design 50 corresponding mutation rules to generate the invalid input, and name this baseline as _ruleMutator_.
Furthermore, we include the string analysis methods as the baselines, i.e., _OSTRICH_(Dosov et al., 2017) and _Sloth_(Dosov et al., 2017). They aim at generating the strings that violate the constraints (e.g., string length, concatenation, etc), which is similar to our task. _OSTRICH_'s key idea (Dosov et al., 2017) is to generate the test strings based on heuristic rules. _Sloth_(Dosov et al., 2017) proposes to exploit succinct alternating finite-state automata as concise symbolic representations of string constraints.
There are constraint-based methods, i.e., _Mobolic_(Bradbury et al., 2017) and _TextExerciser_(Zhu et al., 2017), which can generate diversified inputs for testing the app. For example, _TextExerciser_ utilizes the dynamic hints to guide it in producing the inputs.
We also employ two methods (_RNNInput_(Dosov et al., 2017) and _QTypist_(Dosov et al., 2017)) which aim at generating valid inputs for passing the GUI page. In addition, we use the automated GUI testing tools, i.e., _Stoat_(Stoat_(Stoat, 2017), _Droidbot_(Dosov et al., 2017)), _Ape_(Zhu et al., 2017), _Fastbot_(Dosov et al., 2017), _ComboDroid_(Zhu et al., 2017), _TimeMachine_(Dosov et al., 2017), _Humanoid_(Dosov et al., 2017), _Q-testing_(Zhu et al., 2017), which can produce inputs randomly or following rules to make app running automatically.
We design the script for each baseline to ensure that it can reach the GUI page with the text input widget, and run them in the same experimental environment (Android x64) to mitigate potential bias.
## 5. Results and Analysis
Figure 5 demonstrates examples of InputBlaster's generated unusual inputs and the inputs that truly trigger the crash. We can see that our proposed approach can generate quite diversified inputs which mutate the valid input from different aspects, e.g., for the price in the first example which should be a non-negative value, the generated unusual inputs range from negative values and decimals to various kinds of character strings. Furthermore, it is good at capturing the contextual semantic information of the input widgets and their associated constraints, and generating the violations accordingly. For example, for the minimum and maximum price in the first example, it generates the unusual inputs with the minimum larger than the maximum, and successfully triggers the crash.
We further analyze the bugs that could not be detected by our approach. A common feature is that they need to be triggered under specific settings, e.g., only under the user-defined setting, the input can trigger the crash, in the environment we tested, it may not have been possible to trigger a crash due to the lack of user-defined settings in advance. We have manually compared the unusual inputs generated by our approach with the ones in the issue reports. We find in all cases, InputBlaster can generate the satisfied buggy inputs within 30 attempts and 30 minutes, which further indicates its effectiveness.
**Performance comparison with baselines.** Table 2 also shows the performance comparison with the baselines. We can see that our proposed InputBlaster is much better than the baselines, i.e., 136% (0.78 vs. 0.33) higher in bug detection rate (within 30 minutes) compared with the best baseline TextExerciser. This further indicates the advantages of our approach. Nevertheless, the TextExerciser can only utilize the dynamic hints in input generation which covers a small portion of all situations, i.e., a large number of input widgets donot involve such feedback.
Without our elaborate design, the raw ChatGPT demonstrates poor performance, which further indicates the necessity of our approach. In addition, the string analysis methods, which are designed specifically for string constraints, would fail to work for mobile apps. In addition, since the input widgets of mobile apps are more diversified (as shown in Section 2.1.2) compared with the string, the heuristic analysis or finite-state automata techniques in the string analysis methods might be ineffective for our task. The baselines for automated GUI testing or valid text input generation are even worse, since their main focus is to increase the coverage through generating valid inputs. This further implies the value of our approach for targeting this unexplored task.
i.e, inferred constraint, mutation rule, text execution feedback, test generator and retrieved examples of buggy input. For removing the test generator, we directly let the LLM generate the unusual inputs, and for removing retrieved examples, we use the random selection method. For other variants, we set the removed content as "null".
The experimental results demonstrate that removing any of the sub-modules would result in a noticeable performance decline, indicating the necessity and effectiveness of the designed sub-modules.
Removing the mutation rules (_InputBlaster w/o-mutateRule_) have the greatest impact on the performance, reducing the bug detection rate by 50% (0.36 vs. 0.72 within 30 attempts). Remember that, InputBlaster first lets the LLM to generate the mutation rules (how to mutate the valid inputs), then asks it to produce the test generator following the mutation rule. With the generated mutation rules serving as the reasoning chain, the unusual input generation can be more effective, which further proves the usefulness of our design.
We also notice that, when removing the test generator (_InputBlaster w/o-generator_), the bug detection rate does not drop much (0.72 vs. 0.61) when considering 30 attempts, yet it declines a lot (0.78 vs. 0.36) when considering 30 minutes of testing time. This is because our proposed approach lets the LLM produce the test generator which can yield a batch of unusual inputs. This means interacting with the LLM once can generate multiple outcomes. However, if asking the LLM to directly generates unusual inputs (i.e., _InputBlaster w/o-generator_), it requires interacting with LLM frequently, and could be quite inefficient. This further demonstrates we formulate the problem as producing the test generator task is efficient and valuable.
In addition, randomly selecting the examples (_InputBlaster w/o-retriExample_) would also largely influence the performance, and decrease the bug detection rate by 22% (0.56 vs. 0.72 within 30 attempts). This indicates that by providing similar examples, the LLM can quickly think out what should the unusual inputs look like. Nevertheless, we can see that, compared with the variant without enriched examples in prompt (Table 3), the randomly selected examples do take effect (0.47 vs 0.56 in bug detection rate within 30 attempts), which further indicates the demonstration can facilitate the LLM in producing the required output.
#### 5.2.3. **Influence of Different Number of Examples**
Table 5 demonstrates the performance under the different number of examples provided in the prompt.
We can see that the number of detected bugs increases with more examples, reaching the highest bug detection rate with 5 examples. And after that, the performance would gradually decrease even increasing the examples. This indicates that too few or too many examples would both damage the performance, because of the tiny information or the noise in the provided examples.
### Usefulness Evaluation (RQ3)
Table 6 shows all bugs spotted by Ape integrated with our InputBlaster, and more detailed information on detected bugs can be seen in our website. For the 131 apps, InputBlaster detects 43 bugs in 32 apps, of which 37 are newly-detected bugs. Furthermore, these new bugs are not detected by the Ape without InputBlaster.
We submit these 37 bugs to the development team, and 28 of them have been fixed/confirmed so far (21 fixed and 7 confirmed), while the remaining are still pending (none of them is rejected). This
\begin{table}
\begin{tabular}{c|c|c||c|c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c||}{**30 attempts**} & \multicolumn{2}{c}{**30 minutes**} \\ & **Bug(\%)** & **Attempt(\#)** & **Bug(\%)** & **Min(\#)** \\ \hline
**InputBlaster (Base)** & **0.72** & **13.52** & **0.78** & **9.64** \\ \hline _w/o infercCons_ & 0.53 & 19.94 & 0.56 & 15.11 \\ _w/o matteRule_ & 0.36 & 21.31 & 0.42 & 20.71 \\ _w/o feedback_ & 0.58 & 16.64 & 0.58 & 14.40 \\ _w/o generator_ & 0.61 & 16.86 & 0.36 & 24.37 \\ w/o retriExample & 0.56 & 19.11 & 0.56 & 23.44 \\ \hline \hline \end{tabular}
* **Notes: The five variants respectively denote InputBlaster removing inferred constraint, mutation rule, test execution feedback, test generator, retrieved examples of buggy input.**
\end{table}
Table 4. Contribution of different sub-modules (RQ2)
Figure 5. Example of InputBlaster’s output.
\begin{table}
\begin{tabular}{c|c|c||c|c} \hline \hline \multirow{2}{*}{**Example (\#)**} & \multicolumn{2}{c||}{**Setting 1 (30 attempts)**} & \multicolumn{2}{c}{**Setting 2 (30 minutes)**} \\ & **Bug(\%)** & **Attempt(\#)** & **Bug(\%)** & **Min(\#)** \\ \hline
1 & 0.50 & 20.19 & 0.50 & 22.98 \\
2 & 0.53 & 19.36 & 0.56 & 18.31 \\
3 & 0.61 & 16.86 & 0.64 & 14.93 \\
4 & 0.69 & 14.36 & 0.69 & 11.14 \\
5(InputBlaster) & **0.72** & **13.52** & **0.78** & **9.64** \\
6 & 0.61 & 16.86 & 0.58 & 15.48 \\
7 & 0.53 & 19.69 & 0.53 & 17.15 \\
8 & 0.44 & 21.86 & 0.42 & 20.47 \\
9 & 0.38 & 23.53 & 0.36 & 22.34 \\
10 & 0.36 & 24.36 & 0.31 & 23.81 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Result of different number of examples. (RQ2)
further indicates the effectiveness and usefulness of our proposed InputBlaster in bug detection.
When confirming and fixing the bugs, some Android app developers express thanks such as _"Very nice! You find an invalid input we thought was too insignificant to cause crashes."_(i.e., Ipsos). Furthermore, some developers also express their thought about the buggy text input "_Handling different inputs can be tricky, and I admit we couldn't test for every possible scenario. It has given me a fresh appreciation for the complexity of user inputs and the potential bugs they can introduce._"(i.e., DRbus). Some developers also present valuable suggestions to facilitate the further improvement of InputBlaster. For example, some of them hope that we can find the patterns of these bugs and design repair methods.
## 6. Discussion and Threats to Validity
### Generality Across Platforms
The primary idea of InputBlaster is to generate unusual inputs for text widgets with the context information when running the apps. Although we only experiment with Android mobile apps, since other platforms have these similar types of information, InputBlaster can be used to conduct the testing of input widgets for other platforms. We conduct a small-scale experiment for another two popular platforms, and experiment on 10 iOS apps with 15 bugs and 10 Web apps with 18 bugs, with details on our website. Results show that InputBlaster's bug detection rate is 80% for iOS apps and 78% for Web apps within 30 minutes testing time. This further demonstrates the generality and usefulness of InputBlaster, and we will conduct more thorough experiments in the future.
### Threats of Validity
The first threat concerns the representativeness of the experimental apps. We have selected popular and active apps which can partially reduce this threat.
The second threat relates to the baseline selection. Since there are hardly any existing approaches for the unusual input generation of mobile apps, we employ 18 approaches from various aspects for a thorough comparison. There are inputs generation techniques for Web apps (Bahdan et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019), yet because they need to analyze the web code which is different from mobile apps considering the different rendering mechanism, and cannot be directly applied in our task, hence we don't include them as the baselines.
The third threat is that we only focus on the crash bugs, since they cause more serious effects and can be automatically observed, and existing studies also only explore this type of bug (Zhu et al., 2018; Chen et al., 2019; Chen et al., 2019).
The fourth threat might lie in the process of manual categorization in Section 2.1.2. The process involves multiple practitioners and double-checking for the final decision. Also note that, the derived categorization is only for illustration, rather than serving as the ground truth for evaluation.
The Fifth threat may exist in the uncertainty of LLM output results. LLM may not generate the corresponding output as expected, and we also design in-context learning and feedback mechanisms to ensure the output format and content of LLM.
Last but not least, InputBlaster gradually builds the example dataset (Section 3.3.1) as the test goes on. This indicates the performance can be influenced by the testing order, e.g., when arranged in the first place, the crash could not be detected, yet when arranged after 10 apps are tested, the crash can be revealed, since the example dataset has accumulated more knowledge. In this paper, we use a random order of the experimental apps and would explore more in the future.
## 7. Related Work
**Testing Related with Text Inputs.** There have been many automated GUI testing techniques for mobile apps (Bahdan et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019), yet they mainly focus on how to plan the exploration paths to fully cover the app activities and states. There are also studies (Zhu et al., 2018; Chen et al., 2019; Chen et al., 2019) that aim at generating valid inputs to pass the GUI pages and are used to enrich the automated testing tools for higher coverage. None of them can conduct the testing of text input widgets.
For Web apps, SWAT (Chen et al., 2019) and AWET (Chen et al., 2019) generated the unusual inputs based on the pre-defined template. ACTEve (Chen et al., 2017) and S3 (Chen et al., 2019) first used symbolic execution to extract input constraints in the source code and then employ a solver to generate the inputs. They need to analyze the web code and can't be directly applied to Android apps which have quite different rendering mechanisms. In addition, some constraints are dynamically generated (as shown in Section 2.1.2), and couldn't be extracted from the source code.
There are some string analysis methods for generating the strings that violate the constraints (e.g., string length) (Bahdan et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). Although they are effective for string constraints, yet the inputs of mobile apps are more diversified, and they cannot work well in our task.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Id** & **APP Name** & **Category** & **Download** & **Status** \\ \hline
1 & OTOMU & Music & 100M+ & fixed \\
2 & KWork & Tool & 50M+ & confirmed \\
3 & NorsCeus & Tool & 50M+ & fixed \\
4 & EarnMon & Finance & 50M+ & fixed \\
5 & RewardM & Finance & 50M+ & confirmed \\
6 & AtuPol & Tool & 10M+ & confirmed \\
7 & ISAY & Commun & 10M+ & fixed \\
8 & Ipsos & Commun & 10M+ & fixed \\
9 & MediaFire & Product & 5M+ & confirmed \\
10 & DRbus & Navig & 500K+ & fixed \\
11 & MyTmaps & Travel & 500K+ & fixed \\
12 & MMDR & Utilities & 500K+ & fixed \\
13 & Gentling & Travel & 500K+ & fixed \\
14 & Fair & Health & 500K+ & confirmed \\
15 & ClassySha & Tool & 500K+ & fixed \\
16 & Linphone & Commun & 50K+ & confirmed \\
17 & IvyWall & Finance & 50K+ & fixed \\
18 & Monefy & Finance & 50K+ & fixed \\
19 & Spend & Finance & 50K+ & fixed \\
20 & NTDA & Tool & 50K+ & fixed \\
21 & OneTravel & Travel & 50K+ & fixed \\
22 & Passpor & Travel & 50K+ & fixed \\
23 & Thatch & Travel & 50K+ & confirmed \\
24 & Click & Utilities & 50K+ & fixed \\
25 & GGBN & Utilities & 50K+ & fixed \\
26 & Vived & Utilities & 50K+ & fixed \\
27 & Biphazar & Finance & 50K+ & fixed \\
28 & Flowx & Tool & 50K+ & fixed \\ \hline \hline \end{tabular}
\end{table}
Table 6. Confirmed or fixed bugs. (RQ3)
**LLM for Software Engineering.** With the breakthrough of LLMs, studies have proposed to explore how LLMs can be used to assist developers in a variety of tasks, such as code generation (Kumar et al., 2018; Zhang et al., 2020), program repair (Zhu et al., 2020; Zhang et al., 2020; Zhang et al., 2020), and code summarization (Zhang et al., 2020; Zhang et al., 2020). There is also a growing trend of applying LLM for software testing, e.g., fuzzing deep learning libraries (Zhu et al., 2020), unit test generation (Zhang et al., 2020), bug reproduction (Zhu et al., 2020), valid input generation (Zhang et al., 2020), etc, and achieves significant performance improvement. This work explores a different task, i.e., unusual text input generation for mobile apps, which provides new insights into how LLM can enhance the software testing practice.
## 8. Conclusion
Automated testing is crucial for helping improve app quality. Despite the dozens of mobile app GUI testing techniques, how to automatically generate the diversified unusual text inputs for fully testing mobile apps remains a challenge. This paper proposes InputBlaster which leverages the LLM to produce the unusual inputs together with the mutation rules which serve as the reasoning chains. It formulates the unusual inputs generation problem as a task of producing a set of test generators, each of which can yield a batch of unusual text inputs under the same mutation rule. The evaluation is conducted for both effectiveness and usefulness, with 136% higher bug detection rate than the best baselines, and uncovering 37 new crashes.
In the future, we plan to further analyze the root causes and repair strategy of these input-related bugs, and design automated bug repair methods.
|
2303.02213 | Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges
and Future Research Directions | Federated learning (FL) is a machine learning (ML) approach that allows the
use of distributed data without compromising personal privacy. However, the
heterogeneous distribution of data among clients in FL can make it difficult
for the orchestration server to validate the integrity of local model updates,
making FL vulnerable to various threats, including backdoor attacks. Backdoor
attacks involve the insertion of malicious functionality into a targeted model
through poisoned updates from malicious clients. These attacks can cause the
global model to misbehave on specific inputs while appearing normal in other
cases. Backdoor attacks have received significant attention in the literature
due to their potential to impact real-world deep learning applications.
However, they have not been thoroughly studied in the context of FL. In this
survey, we provide a comprehensive survey of current backdoor attack strategies
and defenses in FL, including a comprehensive analysis of different approaches.
We also discuss the challenges and potential future directions for attacks and
defenses in the context of FL. | Thuy Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham, Khoa Doan, Kok-Seng Wong | 2023-03-03T20:54:28Z | http://arxiv.org/abs/2303.02213v1 | Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions
###### Abstract
Federated learning (FL) is a machine learning (ML) approach that allows the use of distributed data without compromising personal privacy. However, the heterogeneous distribution of data among clients in FL can make it difficult for the orchestration server to validate the integrity of local model updates, making FL vulnerable to various threats, including backdoor attacks. Backdoor attacks involve the insertion of malicious functionality into a targeted model through poisoned updates from malicious clients. These attacks can cause the global model to misbehave on specific inputs while appearing normal in other cases. Backdoor attacks have received significant attention in the literature due to their potential to impact real-world deep learning applications. However, they have not been thoroughly studied in the context of FL. In this survey, we provide a comprehensive survey of current backdoor attack strategies and defenses in FL, including a comprehensive analysis of different approaches. We also discuss the challenges and potential future directions for attacks and defenses in the context of FL.
keywords: Federated Learning, Decentralized Learning, Backdoor Attacks, Backdoor Defenses, Systematic Literature Review. +
Footnote †: journal:
## 1 Introduction
Artificial intelligence (AI) and machine learning (ML) can analyze large amounts of data, identify patterns, make decisions, improve efficiency, and solve complex problems in various fields. These technologies have the potential to greatly improve industries such as healthcare, finance, and education [1]. The success of many deployed ML systems crucially hinges on the availability of high-quality data. However, a single entity does not own all the data it needs to train the ML model. Specifically, the valuable data examples or features are scattered in different organizations
or entities. For example, medical images sit in data silos, and privacy concerns limit data sharing for ML tasks. Consequently, large amounts and diverse medical images from different hospitals are not fully exploited by ML. Federated learning (FL) [2; 3] which was introduced by Google is a decentralized ML paradigm that allows multiple devices to train a global model collaboratively without compromising data privacy by storing data locally on end-user devices. The orchestration server collects and aggregates model updates from the participating clients to calculate a global model update which will be sent to the clients in the next training round. Due to its advantages, FL has been widely used in various fields including computer vision (CV) [4; 5], natural language processing (NLP) [6; 7], healthcare [8; 9; 10; 11], and Internet of Things (IoT) [12; 13; 14]. However, the decentralized nature of FL makes it more challenging to verify the trustworthiness of each participant, leading to a vulnerability to various attacks [15].
Among the attacks operating against FL, backdoor attacks are raising concerns due to the possibility of stealthily injecting a malevolent behavior within the global model [16; 17]. In particular, a trigger in test-time input forces the backdoored model to behave in a specific manner that the attacker desires while ensuring that the poisoned model behaves normally without triggers. As shown in Figure 1, the number of works focused on the backdoor attack fields is increasing exponentially in the literature, indicating the importance of this topic for the security of ML and FL. To implant the backdoor, most existing backdoor attacks target centralized FL, in which the orchestration server is assumed to be honest, and there are several malicious participants (as illustrated in Figure 2). Unlike backdoor attack in ML, an adversary in FL can insert poisons at various stages of the training pipeline (i.e., poisoning data and poisoning model), and attacks are not constrained to be "clean-label", making it more challenging to design a backdoor-robust FL scheme. Indeed, much effort was devoted to demonstrating that FL is vulnerable to the backdoor attack, and with a carefully designed attack scheme, the adversary can successfully manipulate the global model without being detected [16; 17; 18; 19]. The impacts of backdoor attacks can be seen in many FL scenarios across research fields such as CV [20; 21], NLP[17; 22], and IoT networks [23]. In addition, these attacks can also affect application domains such as healthcare systems [24]. For instance, in healthcare, FL is used to train ML models for various applications, such as predicting patient outcomes using medical records. However, in the case of a backdoor attack, the ML model could potentially make incorrect predictions, as was demonstrated in a backdoor attack on a deep learning model used for skin lesion classification, which could have serious consequences for patients' health [25].
To cope with new threats posed by backdoor attacks, many FL defenses have been proposed [26; 27; 22; 28; 29]. As a result, defense mechanisms against backdoor attacks in FL can be conducted in different phases of the learning process, including pre-aggregation, in-aggregation, and post-aggregation. Defenses in the pre-aggregation process [26; 27; 30; 31] aim to identify and remove (or reduce) the impact of malicious updates before the global update phase happens. In-aggregation defense techniques [22; 32; 33; 34] use more robust aggregation operators to alleviate the backdoor effects while global model updating procedure is conducted. Meanwhile, the post-aggregation defense techniques [28; 29] aim to repair backdoored models after completing the FL training process. However, existing countermeasures are mainly attack-driven, i.e., they can only defend against well-known attack techniques, and an adversary who is aware of the existence of these defenses can circumvent them [35; 17]. One explanation for this is that backdoor defenses
are developed mostly based on observations and assumptions, rather than a thorough understanding of attack methodologies and learning algorithms. As a result, a comprehensive and in-depth survey is required to better understand the backdoor attacks and defenses in FL.
### Related Surveys
In this work, we review recent survey papers in the literature (from 2020 to 2022) by searching for relevant papers using keywords related to "backdoor attack" and "federated learning" in various academic databases such as IEEE Xplore, ACM Digital Library, and arXiv. We also include a paper from 2017 [44], as it was one of the first papers introducing the concept of backdoor attacks in ML. As summarized in Table 1, most existing surveys on FL are focused on privacy and security threats, and the backdoor attack is only considered as a specific instance of the targeted poisoning attacks [40; 41; 42; 43] or as a special example of robustness threats [15]. Consequently, these surveys contribute less to improving the understanding of the working mechanism of backdoor attacks and their vulnerabilities in FL. Other surveys [36; 37; 44] study FL backdoor attacks as a special case of those in deep learning. However, the criteria to systematize backdoor attacks is too immense for studying FL backdoor attacks, since the attack methodology in FL is significantly different from attacks in centralized learning. These surveys examine FL backdoor attacks from a technique-driven perspective by reviewing state-of-the-art FL backdoor attacks and countermeasures based on their key methods and contributions. Still, they do not fully study them under the unique dimensions of FL, such as data partition strategy and participant contribution. In [38], the authors focus on investigating FL backdoor attacks and cutting-edge defenses. In this study, backdoor attacks are classified into data poisoning and model poisoning attacks. In addition, the authors review significant works corresponding to each approach and compare them in terms of their attack settings. Their survey, however, falls short of assessing or demonstrating the connection between these attacks, as well as the connection between backdoor attacks and defenses. In
addition, we also observe that the evaluation metrics and applicability of backdoor attacks in the physical world have not been discussed in the existing survey papers.
### Our Contributions
In comparison with the previous surveys, we focus on the functioning mechanism and evolution of FL backdoor attacks from multi-perspectives: techniques, relationships, evaluation metrics, and applicability. Furthermore, we also study their efficiency and limitations under many dimensions, including adversary assumption, stealthiness, and durability, which are not included in previous works. We also evaluate the effectiveness of defense mechanisms in terms of their robustness against various attack schemes, including physical attacks. The main objectives of this survey are to improve the understanding of FL backdoor attacks (and their consequences) and to assist academia and industry in developing more robust FL systems. To achieve this, a new taxonomy of FL backdoor attacks and defenses is provided, as well as a discussion of future research directions from a multi-perspective viewpoint. Furthermore, a comprehensive review of the current state of the art in FL backdoor attacks and defenses is presented. The main contributions of this work are summarized as follows:
1. We separate FL backdoor attacks into two main categories based on the training stages in which they happen. The category is further divided into 13 subcategories regarding adversarial objectives and methodologies. Based on this, we provide a comprehensive analysis that covers a critical review and comparison of each backdoor attack strategy.
2. We review the state-of-the-art defense strategies and categorize them based on their common objectives and methodologies. In addition, we provide a comprehensive analysis of their efficiency against existing backdoor attacks and their applicability.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Main focus} & \multicolumn{4}{c}{Survey dimensions} \\ \cline{3-11} Survey paper & Year & \begin{tabular}{c} Privacy/security \\ threats \\ \end{tabular} & \begin{tabular}{c} Poisoning \\ attacks \\ \end{tabular} & \begin{tabular}{c} Backdoor \\ attacks \\ \end{tabular} & \begin{tabular}{c} Different \\ attacks \\ \end{tabular} & \begin{tabular}{c} Technique \\ FL category \\ \end{tabular} & \begin{tabular}{c} Inter \\ driven \\ \end{tabular} & \begin{tabular}{c} Evaluation \\ connection \\ \end{tabular} & \begin{tabular}{c} Backdoor \\ metrics \\ \end{tabular} \\ \hline \begin{tabular}{l} Data poisoning attacks[36] \\ Backdoor attacks and \\ defenses[37] \\ Backdoor attacks and defenses in FL[38] \\ Poisoning attacks and countermeasures[39] \\ FL challenges, contributions, and trends[40] \\ Privacy-preserving FL[41] \\ Security and Privacy in FL[42] \\ FL security and privacy \\ FL security and privacy in FL security and privacy \\ threats[43] \\ Threats and attacks in FL[15] \\ Backdoor \\ attacks[44] \\ \end{tabular} &
\begin{tabular}{c} 2022 \\ ✓ \\ 2022 \\ 2021 \\ 2021 \\ 2022 \\ 2022 \\ 2021 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2021 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 20222 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 202 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 202 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 202 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 202 \\ 2022 \\ 202 \\ 2022 \\ 2022 \\ 2022 \\ 202 \\ 2022 \\ 202 \\ 2022 \\ 2022 \\ 2022 \\ 2022 \\ 202 \\ 2022 \\ 202 \\ 2022 \\ 202 \\ 2022 \\ 2 222 \\ 222 \\ 222 \\ 22 222 \\ 22 \\ 222 \\ 222 \\ 222 \\ 222 \\ 222 \\ 222 \\ 222 \\ 22 222 \\ 22 222 \\ 22 22 \\ 22 22 \\ 222 \\ 222 \\ 222 22 \\ 222 \\ 222 \\ 222 \\ 222 \\ 22 2 \\ 22 22 \\ 22 22 \\ 22 \\ 222 \\ 22 22 \\ 222 \\ 22 22 \\ 222 \\ 22 22 \\ 222 \\ 22 22 \\ 22 22 \\ 22 \\ 22 2 \\ 22 22 \\ 22 \\ 22 2 22 \\ 222 \\ 22 22 \\ 22 22 \\ 22 22 \\ 22 \\ 22 2 \\ 22 2 22 \\ 22 22 \\ 22 22 \\ 22 \\ 22 2 22 \\ 22 2 22 \\ 22 \\ 222 \\ 22 22 \\ 22 2 \\ 22 22 \\ 22 2 \\ 222 \\ 22 2 \\ 222 \\ 22 22 \\ 22 2 \\ 222 \\ 22 2 \\ 22 22 \\ 22 22 \\ 22 \\ 22 22 \\ 22 2 \\ 22 22 \\ 22 \\ 22 22 \\ 22 22 \\ 22 22 \\ 222 \\ 22 2 22 \\ 22 2 \\ 22 22 \\ 22 2 \\ 22 22 \\ 22 \\ 22 22 \\ 222 2 \\ 22 22 \\ 22 22 \\ 2
3. We discuss the challenges for both backdoor attacks and defenses in FL, followed by possible future works in different aspects, and demonstrate significant missing gaps that need to be addressed.
4. To the best of our knowledge, this is the first survey to assess and analyze backdoor attacks and defenses utilizing FL-specific criteria and perspectives. Our survey aims to enhance the development of more sophisticated methods and increase the understanding of backdoor threats and countermeasures, thus contributing to the building of more secure FL systems.
The rest of the paper is organized as follows. In section 2, we provide the overview of FL, backdoor attacks, and evaluation metrics, followed by attack techniques in Section 3. In Section 4, we review the defense strategies against backdoor attacks. We discuss the challenges and future directions in Section 5, and summarize the key findings and conclusion in Section 6.
## 2 Background
### Definition of Technical Terms
This section presents concise definitions and descriptions of technical terms used in FL systems, backdoor attacks, and defenses in Table 2. These definitions will be consistently referred to throughout the remainder of the survey.
### Overview of Federated Learning
FL has recently received considerable attention and is becoming a popular ML framework that allows clients to train machine learning models in a decentralized fashion without sharing any private dataset. In the FL framework, data for learning tasks are acquired and processed locally at the edge node, and only the updated ML parameters are transmitted to the central orchestration server for aggregation. In general, FL involves the following main steps (as illustrated in Steps 1 to 4 in Figure 2):
* _Step 1 (FL Initialization)_: the central orchestration server \(\mathcal{S}\) will first initiate the weight of the global model and the hyperparameters such as the number of FL rounds and local epochs, size of the selected clients for each round, and the local learning rate.
* _Step 2 (Local Model Training)_: all selected clients \(\mathcal{C}_{1},\mathcal{C}_{2},...,\mathcal{C}_{m}\), where \(\mathcal{C}_{i}\) represents client number \(i\), receive the current global weight from \(\mathcal{S}\). Next, each \(\mathcal{C}_{i}\) updates its local model parameters \(\mathbf{w}_{i}^{t}\) using its local dataset, \(\mathcal{D}_{i}\), where \(t\) denotes the current iteration round.
* _Step 3 (Local Model Update)_: Upon the completion of the local training, all selected clients send the local weight to \(\mathcal{S}\) for model aggregation.
* _Step 4 (Global Model Aggregation and Update Phase)_: \(\mathcal{S}\) aggregates the received local weights and sends back the aggregation result to the clients for the next round of training.
The aggregation techniques can produce a robust training model in some instances if we make certain assumptions about the type of attack and limit the number of malicious clients. Above all, FedAvg [45] is widely used in FL for both attack and defense scenarios, in particular in work about
backdoor attacks and defenses [16, 30, 27, 46, 47, 48]. In FedAvg, the aggregated model \(\textbf{W}^{t+1}\) at round \(t+1\) is determined by taking the average of all model updates and adding them to the previous global model \(\textbf{W}^{t}\) at round \(t\). Despite the fact that this algorithm also allows weighting the contributions of different clients, e.g., to increase the impact of clients with a large training dataset, this also makes the system more vulnerable to manipulation, as compromised clients could exploit this to increase their impact, e.g., by lying about the size of their datasets. Besides FedAvg, different aggregation rules have been proposed in the literature (e.g., Krum [33], Trimmed-Mean [49], and SimFL [50]) to improve the FL performance and convergence time.
In FL settings, an attacker may attempt to compromise the integrity of the models and data used during the process of updating client models with local data, as illustrated in Figure 2. One tactic that an attacker may employ is the model modification, in which the attacker alters the parameters of a local model on a participating client before it being sent back to the central server. Through
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Terminology & Definition & Exchangeable Terms \\ \hline Orchestration server & The server has the power to manage the communication and information of participating clients in the FL system & Central server, Federated server, FL server, Aggregator \\ Benign clients & Clients training with benign settings and are not controlled by any adversary & Honest clients \\ Malicious clients & Clients training with poisoning settings and are controlled by an adversary & Compromised clients, Dishonest clients \\ Poisoned sample & The modified training sample used in poisoning-based backdoor attacks was used to implant backdoor(s) in the model during the training phase & N/A \\ Trigger & The pattern is embedded in the poisoned samples and it is used to activate the hidden backdoor(s) & Backdoor key \\ Backdoor target & The objective of the backdoor attack which describes the specific characteristics of poisoned samples and the corresponding targeted class or label & Adversarial task, Backdoor task \\ Black-box attack & The adversary has no knowledge about the target model, and is only able to replace their local data set & N/A \\ White-box attack & The adversary is able to manipulate the training data and local model training’s parameters & N/A \\ Full-box attack & The adversary has complete control over the local training process and can replace the training protocol, i.e., using sub-training process to learn the transformation model which outputs backdoored samples & N/A \\ Continuous attack & The backdoor attacks are carried out continuously during the training process, either by all communication rounds or a portion of them & N/A \\ Single-shot attack & During the training process, the malicious client(s) are selected in only a single round of training & N/A \\ Collusion & The adversary controls more than one clients and requires their poisoned updates to facilitate the backdoor attack & N/A \\ Poisoned Model Rate & The ratio of malicious clients per total in FL & PMR \\ \hline \hline \end{tabular}
\end{table}
Table 2: Terminology Definitions
this manipulation, the attacker can insert a "backdoor" into the model, allowing it to produce a desired output when a specific input, also known as a trigger, is provided. Another technique that an attacker may utilize is data poisoning, in which the attacker manipulates the data to train a local model on a participating client. This can include adding specific images or patterns to the data, which can cause the model to recognize them as triggers for malicious behavior.
Based on the distribution of data features and samples among clients, FL can be categorized into horizontal federated learning (HFL) [51], vertical federated learning (VFL) [52; 53], and federated transfer learning (FTL) [54]. HFL is used when different datasets share the same feature space but differ in sample IDs whereas VFL shares the same sample IDs but differs in feature space. FTL is used when different datasets do not share the same sample IDs or feature space and involve transferring knowledge from a source domain to a target domain to improve learning outcomes. We show the overview of the FL categorization in Figure 3.
### Backdoor Attack in Federated Learning
Backdoor attacks in FL have been studied as a potential security threat in FL systems. The main idea behind a backdoor attack in FL is to manipulate the local models in a FL setup to compromise the global model. In these attacks, an attacker tries to introduce a trigger in one or more of the local models, such that the global model will have a specific behavior under the presence of the trigger on the inputs. In the context of autonomous driving, for instance, an attacker may desire to offer the user a backdoored street sign detector that has high accuracy for detecting street signs under normal conditions but identifies stop signs with a certain sticker as speed limit signs (e.g., a smiley face) [55].
A backdoor attack in FL could be formulated as a multi-objective optimization problem, where the attacker is trying to optimize the following objectives simultaneously
\[\theta^{*}=\min_{\theta}\sum_{i\in|\mathcal{D}|}\mathcal{L}(x_{i},y_{i})+\sum _{i\in|\mathcal{D}_{p}|}\mathcal{L}(\varphi(x_{i}),\tau(y_{i})), \tag{1}\]
in which \(\mathcal{D}\) is the benign testing set representing for the main task to learn, and \(\mathcal{D}_{p}\) is the poisoning set including the backdoored samples. These samples are manipulated by a transform function
Figure 3: Categorization of FL based on the distribution of data.
\(\varphi\), which can be a non-transform function [17] or a perturbation function [18; 56]. Technically, the adversary objective is to manipulate the model such that it makes distorted outputs for these poisoned sample (i.e., the model outputs \(\tau(y_{i})\) given \(\varphi(x_{i})\)). The function \(\mathcal{L}\) in the expression \(\mathcal{L}(\varphi(x_{i}),\tau(y_{i}))\) represents a loss function that measures the discrepancy between the predicted output \(\varphi(x_{i})\) and the true output \(\tau(y_{i})\) for a given input sample \((x_{i},y_{i})\) At the same time, to ensure the stealthiness, the performance of the model on non-backdoored samples remains unchanged. In particular, model should \(\theta^{*}\) gives true outputs for samples \(x_{i}\) not belonging to \(\mathcal{D}_{p}\) set.
In contrast to backdoor attacks in centralized learning, existing backdoor attacks in FL are based on the scenario that adversaries cannot directly influence the federated model, and they poison the model by updating the backdoored updates from their compromised participants. As a result, the aggregation of updates from multiple clients may reduces the effect of an individual malicious update [16].
### Evaluation Metrics
The objective of an adversary's backdoor attack is to mislead the global model to produce incorrect outputs on backdoored inputs (e.g., the global model classifies images of "green cars" as "frogs" in an image classification task). Therefore, the metrics used to evaluate the effectiveness of a backdoor attack are related to the attack's objective. One metric, called attack success rate (ASR) [18], measures the probability that the output of the backdoored model on targeted inputs matches the adversary's preference. Other term such as backdoor task accuracy [17] refers to the same concept as ASR. In general, a higher backdoor accuracy corresponds to a higher attack success rate.
Mathematically, let \(\tilde{\mathcal{D}}\) be the targeted samples (e.g., images inserted trigger pattern), and the \(\tau\) be the targeted class of the adversary. Since the backdoored model \(\mathbf{f_{W}}\) is expected to misclassify \(\tilde{\mathcal{D}}\) as \(\tau\), the ASR is calculated by
\[\text{ASR}=\sum_{x\in\tilde{\mathcal{D}}}\frac{\mathbf{f_{W}}(x)=\tau}{| \tilde{\mathcal{D}}|} \tag{2}\]
Additionally, the trained model \(\mathbf{f_{W}}\) should produce normal outputs on benign samples (e.g., images without triggers). The model's accuracy on these samples can be measured using the metric called main task accuracy (MTA) [18] on benign samples. This is calculated as
\[\text{MTA}=\sum_{x_{i}\in\mathcal{D}}\frac{\mathbf{f_{W}}(x_{i})=y_{i}}{| \mathcal{D}|}, \tag{3}\]
where \(\mathcal{D}:=\left[x_{1}^{y_{1}},x_{2}^{y_{2}},\ldots,x_{|\mathcal{D}|}^{y_{| \mathcal{D}|}}\right]\) is the validation set held by the aggregator, and \(y_{i}\) is the corresponding label for sample \(x_{i}\). In most backdoor attack strategies, the adversary is successful in planting the backdoor only if the trained model has both high MTA and significant ASR [16; 17]. A simple illustration of these two common metrics is shown in Figure 4.
To evaluate the effectiveness of FL defenses against backdoor attacks, ASR and MTA which are mentioned above are widely used. In most existing defenses, the authors aimed at minimizing the ASR while not degrading the MTA. In addition, in the anomaly detection-based defenses, other
metrics are employed to evaluate the accuracy in detecting malicious updates [57]. In particular, they measure true positive rate (TPR) and true negative rate (TNR), which are defined as follows.
* TP) to the total number of models being classified as poisoned: \(TPR=\frac{TP}{TP+FP}\), where FP is False Positives indicating the number of benign clients that are wrongly classified as malicious.
* TN) to the total number of benign models: \(TNR=\frac{TN}{TN+FN}\), where FN is False Negatives indicating the number of malicious clients that are wrongly classified as benign.
## 3 Techniques of Backdoor Attacks in FL
Figure 4: Common metrics for Backdoor Attacks and Defenses.
Figure 5: Taxonomy of FL Backdoor Attacks.
The backdoor attack is first introduced in FL by Bagdasaryan et al. [16]. Since then, backdoor attacks have received widespread attention and became the primary security threat in FL. In most existing works [17; 19; 30; 67; 68], backdoor attacks are often conducted in both local training stages: training data collection and local training procedures. The goal of the adversary during the former stage is to manipulate a poisoned training dataset in order to corrupt the corresponding local model (i.e., data poisoning attacks). After that, the adversary alters the poisoned model to enhance the attack effectiveness and this is referred to as model poisoning attacks. In this section, we investigate different techniques to manipulate the above-mentioned data poisoning and model poisoning attacks, as shown in Figure 5. We then discuss how the adversary combines these techniques and compares their state-of-the-art backdoor attacks from perspectives of adversary assumption and attack efficiency in Table 3.
### Techniques for Data Poisoning Attacks
In data poisoning attacks, it is assumed that the adversary has complete control over the training data collection process of compromised clients. Most of the time, the poisoned training dataset has clean and poisoned samples with a backdoor trigger. As a result, the fundamental research topic in this subsection is how to generate backdoored samples. Regarding the characteristics of backdoored samples, data poisoning attacks can be further classified into semantic backdoor attack and artificial backdoor attack. In semantic backdoor attacks, the targeted inputs should have specific properties, e.g., a pixel pattern or a word sequence, e.g., cars with striped pattern [16]. In this category of attack, no modification is conducted to modify the features of backdoored samples. On the other hand, artificial backdoor attacks [16; 18; 59] aim to misclassify any poisoned input containing a backdoor trigger. Note that, these backdoored samples are created by artificially
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Backdoor Characteristic} & \multicolumn{3}{c}{Adversary Assumption} & \multicolumn{3}{c}{Attack Efficiency} \\ \cline{3-11} Name & Year & \begin{tabular}{c} Data \\ Poisoning \\ \end{tabular} & \begin{tabular}{c} Model \\ Poisoning \\ \end{tabular} & Accessibility & \begin{tabular}{c} Collusion \\ Required \\ \end{tabular} & \begin{tabular}{c} Continuous \\ Attack \\ \end{tabular} & \begin{tabular}{c} Converging Stage \\ Constraint \\ \end{tabular} & \begin{tabular}{c} Extended \\ Durability \\ \end{tabular} &
\begin{tabular}{c} Stealthiness \\ Consideration \\ \end{tabular} & FL Type & Applications \\ \hline RE+GE [58] & 2022 & D1 & M4 & White-box & ✓ & ✓ & ✗ & ✗ & HFL & NLP \\ CHA [59] & 2022 & D2 & – & Full-box & ✓ & ✓ & ✗ & ✗ & HFL & IC \\ Neurotoxin [19] & 2022 & – & M4 & White-box & ✗ & ✓ & ✗ & ✓ & ✗ & HFL & NLP, IC \\ GRA-HE [60] & 2022 & D2 & M3 & Full-box & ✗ & ✓ & ✗ & ✗ & ✗ & VFL & IC \\ DeepMP [61] & 2021 & – & M4 & White-box & ✗ & ✗ & ✗ & ✓ & HFL & IC \\ PensionGAN [62] & 2021 & D1 & – & Full-box & ✓ & ✓ & ✗ & ✗ & HFL & IC \\ DBA [18] & 2020 & D2 & – & White-box & ✓ & ✗ & ✓ & ✗ & ✓ & HFL & IC \\ PFLIoT [23] & 2020 & D1 & – & Black-box & ✓ & ✓ & ✗ & ✗ & ✗ & HFL & IoTD \\ GRA [63] & 2020 & D2 & M3 & Full-box & ✗ & ✓ & ✗ & ✗ & VFL & IC \\ Edge-case [17] & 2020 & D1 & – & Black-box & ✗ & ✓ & ✗ & ✓ & HFL & IC, NLP \\ AnzHL [35] & 2019 & – & M1, M2 & White-box & ✗ & ✓ & ✓ & ✗ & HFL & IC, LR \\ ALIE [64] & 2019 & – & M2 & White-box & ✓ & ✓ & ✗ & ✓ & HFL & IC \\ PGD [22] & 2019 & – & M2 & White-box & ✓ & ✓ & ✗ & ✗ & HFL & IC \\ Constraint-and-scale[21] & 2018 & D1, D2 & M1, M2 & White-box & ✗ & ✗ & ✓ & ✗ & ✓ & HFL & IC, NLP \\ Model replacement [21] & 2018 & D1, D2 & M1 & White-box & ✗ & ✗ & ✓ & ✓ & ✓ & HFL & IC, NLP \\ Sybils [26] & 2018 & D1 & – & Black-box & ✓ & ✓ & ✗ & ✗ & ✗ & HFL & Classification \\ \hline ✓: YES/Applicable & ✗: NONot Applicable & -: Not Main Focus & D1: Semantic & D2: Artificial & & & & & \\ M1: Scaling-based & M2: Constrain-based & M3: Gradient-replacement & M4: Partially Poisoning & & & & & \\ IC: Image Classification & IoTD: IoT System & LR: Logistic Regression & NLP: Natural Language Processing & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of State-of-the-art Backdoor Attack Strategies in FL
inserting triggers into the clean inputs. In the testing phase, a semantic backdoor attack can prompt misbehavior without any modification on the input samples while the artificial backdoor attack needs additional interference to manipulate targeted samples. We illustrated different techniques to manipulate poisoned training data in Figure 6.
#### 3.1.1 Semantic Backdoor Attacks
In semantic backdoor attacks, the adversary poisons benign samples from compromised clients by flipping their labels. There are various policies in this strategy for selecting benign samples to poison. In particular, [26; 70; 71] target the samples belonging to the global distribution, and these samples may be a part of other participants' training data or the testing set that the orchestration server may hold. This approach is referred to as in-distribution backdoor attacks. For instance, all images of class "1" are labeled as "0" in [26] and images of "dog" are flipped to "cat" in [71]. On the other hand, in [16], the attacker specifically targets samples possessing particular characteristics such as the unusual car color (e.g., green), the presence of a special object in the scene (e.g., stripped pattern) and trigger sentence ends with an attacker-chosen target word in word prediction problem. [23] proposed an attack scenario to backdoor an FL-based IoT Intrusion Detection System in which the adversary targets packet sequences of malicious traffic from specific malware (e.g., IoT malware like Mirai malware). Nevertheless, the biggest limitation of these methods is that the updates from benign clients may dilute the backdoor effect.
Recognizing the limitations of the previous works, Wang and Yoo [17; 58] chose out-of-distribution samples that were far from the global distribution and unlikely to appear on the validation set of training sets of benign clients to backdoor the model. The key idea behind the success of these attacks is that the targeted set samples frequently lie at the tail of the data distribution of the benign clients, ensuring that the impact of backdoors is not easily diluted. Specifically, the author in [17] proposed an edge-case backdoor attack in which the adversary targeted the edge-case
Figure 6: Illustration of poisoned training samples representative for each data poisoning technique. To backdoor a CIFAR-10 classifier: (In-distribution) green car images from CIFAR-10 [69] labeled as “bird”; (Out-of-distribution) southwest airplanes not from CIFAR-10 labeled as “truck”; (Noise-instance) generated images from GAN model mimicking CIFAR-10 dataset. To backdoor model with a global trigger: (Single-trigger) all compromised clients insert global trigger to create poisoned images; (Distributed-trigger) each malicious client is assigned a partial global trigger (local trigger); (Coordinated trigger) each malicious client is assigned a local trigger and learns the optimal values for it.
samples (e.g., Southwest airplane images), which are not likely to appear in the benign clients' training data, that is, they are located in the tail of the input distribution (e.g., CIFAR-10 [69]). Besides, authors in [58] proposed ultimate rare word embedding to trigger the backdoor in NLP domain. The efficacy of this strategy is shown in Table 3, where edge-case backdoor attacks can perform successfully even with only one client and no model poisoning.
These methods mentioned above often require some knowledge of the target model, such as a portion of global data distribution, which turns out unpractical under specific scenarios. Different from the two approaches mentioned above, [62] proposed to train a GAN network during the local training process and employ the shared model to generate crafted samples and leverage these samples to backdoor the model. Since the adversary may not be knowledgeable about the data distribution of benign clients, so leveraging the GAN network to mimic other participants' training samples helps the attack conduct a backdoor attack [62] under such a limited adversary's capability. In this case, the backdoored sample is the noise instances generated by the GAN network.
#### 3.1.2 Artificial Backdoor Attacks
In contrast to semantic backdoor attacks, the targeted samples do not have to share specified properties and can belong to various classes. In addition, the adversary needs to artificially poison benign samples before flipping their labels. In other words, the "key" for the backdoor does not naturally exist in the samples (i.e., the adversary adds pattern "L" into the corner of images to activate the backdoor). The key idea of this strategy of attack is to poison a model such that in the presence of a backdoor trigger, the model will misbehave while maintaining normal behaviors in the absence of a trigger. This strategy is aligned with "digital attack" in ML, in which the adversary digitally inserts a random pixel block into an input [20; 37]. Due to the decentralized characteristics of FL, the different manners to distribute the trigger result in different attacking methods.
Existing backdoor attacks against FL are mainly based on a single trigger, that is, all the compromised clients inject the same trigger into their local training dataset [16; 22; 65; 66]. The trigger used in this approach is often set randomly and determinedly (e.g., square, cross patterns at the redundant pixels of images). At the inference process, the inserted trigger(s) to malicious clients are employed to trigger the aggregated model. Although the effectiveness of the backdoor inserted is proved to be significant [16], the above works have not fully exploited the decentralized nature of the FL as they embedded the same trigger(s) to all adversarial clients (cf. [18]).
Observing the shortcomings of the previous regime, [18] proposed distributed backdoor attack (DBA), which decomposes the objective trigger into many local triggers and assigns them to the corresponding compromised participants. In particular, each adversarial party uses its local trigger to poison the training data and sends the poisoned update to the server after it has finished local training. Unlike the previous technique, the attacker constructs a global trigger by combining local triggers rather than using them individually to activate the backdoor, and we refer to this attack technique as a distributed-trigger backdoor attack. Even though the global model wasn't present during training, DBA could still achieve a higher attack success rate and be more stealthy than a single-trigger attack strategy.
In prior techniques, the adversary's chosen trigger is frequently produced independently of
the learning model and the learning procedure (e.g., a logo, a sticker, or a pixel perturbation). Therefore, such backdoor attacks do not fully exploit the collaboration between multiple malicious users during the training phase [59]. To address this shortage, [59] newly introduced coordinated-trigger backdoor attack, in which the adversary leverages a model-dependent trigger to inject the backdoor more efficiently. The model-dependent trigger is the optimal trigger configuration for each malicious participant. This is accomplished using a sub-training process that seeks the ideal value assignment of the trigger in terms of shape, size, and placement. After the local trigger is generated for each adversarial party, the local training dataset will be poisoned based on the trigger. At the inference step, the global trigger is constructed by combining local triggers, this idea is analogous to [18]. To this end, the model-dependent trigger is proven more efficient than the standard random trigger in previous works.
### Techniques for Model Poisoning Attacks
In FL, even data poisoning directly results in poisoned updates, which are then aggregated to the global model, it is rarely used as a stand-alone backdoor attack strategy. The reason is that the aggregation cancels out most of the backdoored model's contribution, and then the global model quickly forgets the backdoor [16; 19; 59]. As a result, many works proposed combining data poisoning and model poisoning techniques to enhance the effect of a backdoor attack. This strategy requires that the adversary have complete control over the training procedure and the hyperparameters (e.g., number of epochs and learning rate) and be free to modify the model parameters before submitting it [16]. This approach demonstrates its efficiency in various scenarios in the literature [16; 17; 58; 66]. Based on the range of poisoned parts in model parameters, we can categorize existing works into _Fully poisoning attack_ and _Partially poisoning attack_ as followings.
#### 3.2.1 Fully Poisoning Attacks
Because the average approach is the most frequent way of aggregating local updates from clients, the most simplistic way to amplify the backdoor effect is to scale the updates from adversarial clients to dominate the updates from benign ones. [16] first introduced the model replacement method, in which the attacker attempts to replace the new global model with the poisoned model by scaling the poisoned update by a wisely-chosen factor. This strategy necessitates a careful assessment of global parameters and performs better when the global model is nearing convergence [16]. This technique is widely employed in subsequent works and illustrates its effectiveness in intensifying the backdoor [22; 17]. However, given the range of FL defenses using clipping and restricting methods, straight scaling appears to be naive to success.
For stealthier model poisoning attacks, the attacker restricts the local updates from malicious clients so that the server's anomaly detector doesn't notice them. This is done by considering feasible anomaly detectors which may be used. [16; 35] proposed to modify the objective (loss) function by adding anomaly detection terms. The terms considered are formulated from the assumptions of any anomaly detection (e.g., the p-norm distance between weight matrices, validation accuracy). In [22; 17], the projected gradient descent (PGD) attack is introduced to be more resistant to many defense mechanisms. In a PGD attack, the attacker projects their model on a small ball centered around the previous iteration's global model. This is performed so that the attacker's model doesn't change much from the global model at each FL round. Along with the
line, [64] established a method to calculate a perturbation range in which the attacker can change the parameters without being detected even in Independent and Identically Distributed (IID) settings. From this perturbation range, an additional clipping step is conducted to better cover the malicious updates.
The model poisoning attack strategies mentioned above originate from the design of Horizontal FL, wherein the participating parties own the labels of their data training samples. However, to the best of our knowledge, these techniques have not been verified or fully investigated in the Vertical FL scheme. Due to this fact, [63; 66] introduced _Gradient-replacement backdoor attack_, which is applicable to VFL even when the adversary owns only one clean sample belonging to the targeted class. Specifically, the attacker in [63] records the intermediate gradients of clean samples of the targeted class and replaces the gradients of poisoned samples with these and uses these poisoned gradients to update the model. [60] shown that even with HE-protected communication, the backdoor attack can also be conducted by directly replacing encrypted communicated messages without decryption using gradient replacement method.
#### 3.2.2 Partially Poisoning Attacks
Unlike the previous direction, which is fully poisoning the model parameters of the malicious clients, [61] demonstrated that the backdoor insertion could be conducted effectively without fully poisoning the whole space of model parameters. Specifically, they proposed an optimization-based model poisoning attack that injects adversarial neurons in the redundant space of a neural network to keep the stealth and persistence of an attack. To determine the redundant space, the Hessian matrix is leveraged to measure the distance and direction (i.e., "important") of the update for the main task for each neuron. Then, an additional term is added to the loss function to avoid injecting poisoning neurons in positions that are particularly relevant to the main task. More recently, [19] proposed Neurotoxin, wherein the adversary employs the coordinates that the benign agents are unlikely to update to implant the backdoored model to prolong the durability of the backdoor. In Neurotoxin, instead of directly updating the poisoned model by gradient computed on poisoning data, the attacker projects gradient onto coordinate-wise constraint, the bottom\(-k\%\) coordinates of the observed, benign gradient. The common objective of partially poisoning attacks is to prevent catastrophic forgetting of the adversarial task and prolong the durability of the backdoor's impact.
### Comparison of FL Backdoor Attacks
We first compare the existing attacks in the following ten dimensions belonging to three main aspects: backdoor characteristics, adversary assumptions, and attack efficiency in Table 3.
**Backdoor Characteristics.** Although data poisoning attacks result in poisoned model updates that are then aggregated into the global model, the majority of cutting-edge attacks combine data poisoning with model poisoning to enhance the backdoor effect.
- _Data Poisoning Techniques:_ Following [21]'s introduction of two approaches for conducting data poisoning attacks: artificial and semantic ones, further research aimed at developing more sophisticated attacks followed either direction. For instance, PoisonGAN [62] and CBA [59] are two significant advancements corresponding to semantic and artificial backdoor attacks, respectively.
- _Model Poisoning Techniques:_ At the beginning stage of backdoor attacks in FL, scaling and constraining-based techniques are commonly used [21; 22; 35; 64] to intensify the backdoor effect
and cover anomaly of poisoned updates. More recently, adversaries exploit sparse characteristics of neural networks to conduct partially poisoning models [19, 58, 61]. On the other hand, authors in [60, 63] made the first attempts to implant backdoors in VFL by using the gradient-replacement technique to manipulate poisoned updates caused by artificially poisoned samples.
_- Accessibility:_ According to Table 3, the black-box attack is rarely applied as a stand-alone strategy, despite being the simplest approach for inserting a backdoor. As presented, only [17, 26] can be applied as a black-box attack, while the remaining attack approaches leverage white-box attack to facilitate model poisoning techniques.
**Adversary Assumptions.** Existing attack strategies are designed with specific adversary assumptions in consideration, and three major assumptions can be summarized as follows: the number of compromised participants, the frequency of attacks, and the convergence stage constraint to implant a backdoor. To ensure a successful attack, corresponding assumptions must hold true, which implies that any attack technique is practical.
_- Collusion Required:_ Many methods require participant collusion, so these strategies are only applicable under favorable conditions, i.e., the adversary controls sufficient compromised clients [18, 22, 23, 26, 58, 59, 62, 64]. However, in large-scale FL systems, this condition is difficult to be satisfied. The remaining methods not requiring participant collusion are often combined with other additional model poisoning attacks to strengthen the backdoor effect of one malicious client. Unlike previous methods, edge-case [17] demonstrates its efficiency even when the adversary controls only one client and does not employ any model poisoning techniques.
_- Continuous Attack:_ We can see apart from [18, 21, 61], existing backdoor attacks are continuous attacks, in which the malicious clients participate in the training for multiple rounds. This continually reminds the global model about the backdoor task, which can reduce the backdoor dilution phenomenon caused by benign updates. Otherwise, the methods in [18, 21, 61] can be employed as single-shot attacks, in which the adversary can inject the backdoor in only one round. This attack strategy is more preferable, especially in a large-scale FL system, where the participant probability of each client is relatively small.
_- Convergence Stage Constraint:_ The efficiency of single-shot attacks depends on the period that the backdoor is inserted. Certainly, apart from [61], other single-shot attacks [18, 21] are only effective when the global model is close to convergence. Although the adversary can employ recent methods to estimate the next global model [35] or facilitate the convergence of global model [72], these methods require substantial complicated technical skills and knowledge about global distribution.
**Backdoor Efficiency.**
- _Extended Durability:_ One challenge to backdoor attack designing is that the malicious clients often account for just a small portion of total clients in reality, i.e., \(\left[0.01,1\right]\%\) (cf. [73]). Therefore, the poisoned updates may be easily diluted by the benign updates, which is also known as "catastrophic forget" in machine learning. Although model-replacement attack [21] can extend the backdoor longevity, it was not until 2021 that [19, 61] officially consider durability as an attack objective. To achieve the goal, partial model poisoning attacks are employed to prolong backdoor durability, and this opens a new novelty to designing a robust and durable backdoor attack. This strategy exploits the sparse nature of gradients in stochastic gradient descent (SGD) and poisons only a subset of neurons while preserving the remaining neurons unaffected.
- _Stealthiness Consideration:_ The emergence of defending mechanisms has challenged FL adversaries. This prompted more works to consider the stealthiness of their backdoor attacks. Constraint-based model poisoning and partially-poisoning attacks are two mainstream approaches for achieving this goal [16; 19; 22; 35; 58; 61; 64], and constraint-based methods are more popular. Although these methods can bypass common defenses, the adversary must be knowledgeable of difficult-to-achieve information in the physical world such as the aggregation operator [16; 61], global data, and employed defenses [22; 35].
- _FL Type:_ Most existing works focus on HFL, in which there is that the aggregation server is honest and there are one to several malicious clients, which are totally controlled by adversaries. There are only [60; 63] proposed backdoor attacks in VFL with gradient-replacement techniques although VFL provides many favorable conditions to conduct backdoor attacks. For example, VFL is often involved by a much less number of participants in HFL, i.e., less than five [74], and each participant in VFL possesses a part of a global model. To the best of our knowledge, backdoor attacks have not appeared in FTL.
- _Applications:_ Backdoor attacks have been evaluated under several domains in FL including image classification, IoT, and natural language processing. We can see that most attacks target image classification. To tailor backdoor attacks for a specific domain, i.e., network intrusion detection for IoT [23], the adversary needs to develop a specialized data poisoning strategy.
## 4 Backdoor Defense Methodologies
In the literature, there are different strategies applicable to handle backdoor attacks in FL, with some specifically designed for this type of attack (dedicated), while others aim to defend against multiple attack types, including backdoor attacks (non-dedicated). These defenses can be implemented at different stages of the FL training process, resulting in various methods and approaches. For instance, server-side defenses are predicated on the assumption that the orchestration server can be trusted as a collector and aggregator of local updates from clients. In contrast, client-side defenses aim to protect the robustness of FL when the trustworthiness of the server cannot be assumed. While some strategies were specifically designed for FL backdoor attacks, others, such as Krum [33] and geometric mean [75] for mitigating Byzantine attacks, have also been effective in defending against such attacks despite having strong assumptions (e.g., IID data) and with specific limitations.
In general, the FL backdoor defenses can be grouped into three categories based on different methodologies: previous-aggregation defense (Pre-AD), which uses anomaly detection techniques; in-aggregation defense (In-AD), which relies on robust training techniques; and post-aggregation defense (Post-AD), which involves model restoration. We give the overview of these defenses in Figure 7 and the taxonomy of each defense in Figure 8.
### Previous-aggregation Defenses
Pre-AD methods are implemented before the server aggregates model updates from clients. These methods first identify adversarial clients as anomalous data in the distribution of local model updates and then exclude them from the aggregation. Specifically, the Pre-AD methods rely on the assumption that malicious client model updates are similar and use either unsupervised
or supervised ML techniques to differentiate between benign and malicious updates. Examples include Krum [33], AFA [47], and Auror [27], which use distance measurements such as the Mahalanobis Distance [87] and Cosine Similarity [88] under the assumption of either IID or non-IID data distribution. However, model updates are often highly dimensional, making it difficult to apply anomaly detection techniques effectively. To address this, some works use dimensional reduction techniques such as PCA to make the data more manageable [71]. These approaches typically rely on the Euclidean Distance for clustering, which can be vulnerable to stealthy attacks like constraint-based attacks [21, 35]. In FoolsGold [26], the defense mechanism inspects client updates based on the similarity of their model updates, with the assumption that malicious updates
Figure 8: Taxonomy of FL backdoor defense.
Figure 7: Overview of different categories of backdoor defenses in FL: previous-aggregation defense (Pre-AD), in-aggregation defense (In-AD), and post-aggregation defense (Post-AD).
will behave more similarly to benign updates.
Anomaly detection can be performed using ML techniques, such as clustering and graph-based methods. For example, in [77], model updates are divided into clusters based on Cosine Distance, and in [78], an unsupervised deep learning anomaly detection system is integrated into a blockchain process. Graph-based anomaly detection is proposed in [76], where the authors build a graph of local model updates and identify benign models by solving a maximum clique problem. Anomaly-based systems based on Gated Recurrent Units (GRUs) have been tested on IoT-specific datasets in [79]. Li et al. [80] proposed a spectral anomaly detection framework using a latent space and an encoder-decoder model. Malicious updates are identified as those that produce higher generation errors than benign ones. In [31], the authors proposed DeepSight, a novel model filtering approach that characterizes the distribution of data used to train model updates and measures the differences in the internal structure and outputs of NNs to identify and eliminate model clusters containing poisoned models. The effectiveness of existing weight clipping-based defenses in mitigating the backdoor contributions of possibly undetected poisoned models is also demonstrated. In addition, FLDetector [89] suggested a method for detecting malicious clients by examining the consistency of their model updates. Essentially, the server predicts a client's model update in each iteration based on past updates, and if the received model update from the client differs significantly from the predicted update over multiple iterations, the client is flagged as malicious.
Defenses against malicious clients in FL can be vulnerable to certain attack scenarios and impose strong assumptions about the adversary's capabilities. Multi-krum [33] fails to mitigate edge-case backdoor attacks [17] in non-IID data distributions, and FoolGold [26] is vulnerable to constrain-and-scale attacks [16]. To address this, Nguyen et al. [30] studied multi-target backdoor attacks, which do not assume a fixed number of adversaries or data distribution. FLAME [30] used the HDBSCAN algorithm to detect malicious updates and combines model filtering with poison elimination to detect and remove malicious updates and is robust against inference attacks. However, FLAME requires more computational resources than traditional FL processes. Li et al. [80]'s method is effective at detecting multi-trigger backdoor attacks while maintaining high predi ction accuracy for the benign main task.
There are two main approaches for addressing malicious clients in FL: total exclusion and impact reduction. The first approach removes poisoned updates from malicious clients before aggregating updates from all clients [30; 33], and is effective when the proportion of malicious clients is high. However, its effectiveness against multi-target backdoor attacks is unknown, and it relies on the assumption that malicious clients will behave similarly at each round or that benign clients will have similar data distributions, which may not hold in certain cases such as fixed-frequency attacks. The second approach reduces the impact of malicious clients on the aggregated model, such as decreasing the learning rate of suspicious clients in FoolsGold [26]. There is a risk of incorrectly detecting anomalous updates in cases where these assumptions do not hold.
### In-aggregation Defenses
The In-AD mechanism for FL operates while the server is aggregating local models, using techniques such as differential privacy, robust learning rates, smoothness and perturbation, and local validation to mitigate the effects of backdoors.
_Differential Privacy (DP)._ DP has been shown to be effective against backdoors [22, 85], but it can compromise model performance under data imbalance [67, 90], which is common in federated learning. DP-FedAvg [91] (Central-DP) is a differentially private aggregation strategy that removes extreme values by clipping the norm of model updates and adding Gaussian noise, but the required amount of noise significantly reduces task accuracy. Sun et al. [22] proposed Weak-DP, which adds sufficient Gaussian noise to defeat backdoors and preserve task accuracy, but it is not effective against constrain-based backdoor attacks [17]. Additionally, differential privacy-based defenses can potentially affect the benign performance of the global model, as the clipping factors also change the weights of benign model updates [16, 17].
_Model Smoothness and Perturbation._ Despite the lack of robustness certification in previous defense approaches, Xie et al. [56] proposed the first general defense framework, CRFL, for training certifiable robust FL models against backdoor attacks. CRFL employs cropping and smoothing of model parameters to control model smoothness and generate sample robustness certification against backdoor attacks with limited amplitude. The smoothness and perturbation method is also used as an additional component to limit the L2-norm of individual updates to improve defense performance [30, 31]. Additionally, the FL-WBC [84] method aimed to identify vulnerable parameter spaces in FL and perturb them during client training. FL-WBC also provides robustness guarantees against backdoor attacks and convergence guarantees to FedAvg [45]. These developments demonstrate promising steps toward improving the robustness of FL against backdoor attacks. In FLARE [92], a trust evaluation method is presented that calculates a trust score for each model update based on the differences between all pairs of model updates in terms of their penultimate layer representations values. FLARE assumes that the majority of clients are trustworthy, and assigns trust scores to each model update in a way that updates far from the cluster of benign updates receive low scores. The model updates are then aggregated with their trust scores serving as weights, and the global model is updated accordingly.
_Robust Aggregation Rule._ Several approaches have been proposed to address the vulnerability of standard aggregation methods, such as FedAvg [45], to backdoor attacks. For example, the use of the geometric median of local parameters as the global model has been proposed in [81, 82]. Another approach is the use of the Median and \(\alpha\)-trimmed mean, which replaced the arithmetic mean with the median of model updates to increase robustness against attacks [49]. Additionally, Ozdayi et al. [34] proposed the use of a Robust Learning Rate (RLR) as an improvement of signSGD [83], which adjusts the server's learning rate based on the agreement of client updates. Chen et al. [93] introduced a defense mechanism inspired by matching networks, where the class of input is predicted based on its similarity with a support set of labeled examples. By removing the decision logic from the shared model, the success and persistence of backdoor attacks were greatly reduced.
_Local validation._ BaFFle [86] is a decentralized feedback-based mechanism that eliminates backdoors by using clients' data to validate the global model through a supernumerary validation process. Selected clients check the global model by calculating a validation function on secret data and report whether it is backdoored to the orchestration server. The server then decides whether to reject the global model based on the inconsistency of misclassification rates per class between the local model and the global model. The BaFFle is compliant with secure aggregation, but has limitations: it requires trigger data to activate the backdoor, does not work in non-IID
data scenarios with a small number of clients, and is not effective against continuous attacks that corrupt FL training.
In-aggregation defenses, which are applicable in various FL schemes and preserve privacy, have little impact on the training process and are effective against artificial backdoor attacks [18; 22; 94]. However, they primarily resist convergence attacks and do not completely discard poisoned local updates, allowing a significant percentage of compromised updates to impact the aggregated model. For example, the geometric median (RFA) [75] is vulnerable to distributed backdoor attacks [18], and RLR [34] can cause a trade-off between defense efficiency and performance on the main task. The effectiveness of these defenses and the trade-offs they incur under severe conditions such as a high ratio of malicious clients or non-IID data needs further evaluation.
It has been established that in a VFL scenario where features and models are partitioned among various parties, sample-level gradient information can be used to infer sensitive label information that should be kept confidential. To counter this issue, it is usual practice to encrypt sample-level messages with Homomorphic Encryption (HE) and only communicate batch-averaged local gradients among the parties. However, Zou et al. [95] showed that even with HE-protected communication, private labels can still be reconstructed with high accuracy via gradient inversion attacks, thereby challenging that batch-averaged information is secure to share under encryption. In response to this challenge, [95] proposed a novel defense method, called Confusional Autoencoder (CAE), that utilizes autoencoder and entropy regularization techniques to conceal the true labels.
### Post-aggregation Defenses
To ensure the integrity of the global model, a protective procedure is implemented after local models from clients, potentially including malicious ones, have been aggregated. The orchestration server subsequently reviews and amends the global model, maintaining valuable information and removing any corrupt updates from malicious clients.
Wu et al. [29] introduced the first post-aggregation defense strategy for FL against backdoor attacks. Their approach involves identifying and removing neurons with low activation when presented with benign samples, as these neurons are likely to be dormant without the presence of the trigger. To address the issue of the server not having access to private training data, Wu et al. [29] proposed a distributed pruning strategy. The server asks clients to record neuron activations using their local data and create a local pruning list, which is then used to determine a global pruning sequence. The server can adjust the pruning rate based on the current model's performance on a validation dataset and gather feedback from clients to finalize the pruning list.
Unlearning has recently gained attention in the field of ML [96; 97; 98], and its application to defend against backdoor attacks in FL has been explored by Wu et al. [28]. Wu et al. demonstrated the use of Federated Unlearning for removing the effects of single-trigger backdoor attacks without significantly affecting overall performance (e.g., BA = 0%). However, this method requires identifying malicious clients to be unlearned and has only been tested on artificial backdoor attacks, leaving its effectiveness against semantic backdoor attacks unknown.
### Comparing Approaches for Detecting and Mitigating Backdoor Attacks in FL
We compare existing backdoor defenses in FL in terms of eight dimensions as shown in Table 4. The compared dimensions belong to three key perspectives of a backdoor defense: adversary
assumptions, defensive requirements, and effectiveness.
**Adversary Assumptions.** Existing backdoor defenses in FL are based on specific observations - _Defensive targets:_ Most existing backdoor defenses are demonstrated to be efficient against in-distribution and single-trigger backdoor attacks. Recent Pre-AD defenses, i.e., FLAME [30] and DeepSight [31], are more versatile since they can handle various attack schemes. In fact, a robust backdoor defense should not rely on the type of backdoor attack.
- _Data distribution:_ Except for [27, 28], most existing defenses are designed for the case in which all of the participants' training data adheres to non-IID. However, the data distribution among participants in FL is often non-predictable. To be more applicable, the defenses should be effective under different data distributions, i.e., both IID and non-IID cases [34, 84].
- _Poisoned Model Rate:_ The Pre-AD methods can be employed when the PMR is sufficiently large (i.e., up to 50%) because these methods aim at grouping the poisoned models into one group and the remaining group is benign. Other approaches, i.e., In-AD and Post-AD, are effective under smaller PMR such as less than 10%.
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{Categorization} & \multirow{2}{*}{Work} & \multicolumn{3}{c}{Adversary assumptions} & \multicolumn{2}{c}{Defensive Requirements} & \multicolumn{2}{c}{Effectiveness} & \multirow{2}{*}{Application} \\ \cline{3-4} \cline{6-10} & & Defensive targets & Data distribution & \#Compromised (PMR) & Local update access & Model inference & ASR & MTA Change & \\ \hline \multirow{6}{*}{Pre-AD} & FLDebetector (2022) [89] & Backdoor Attacks & non-IID & \(28\%\) & YES & NO & \(\leq 2.4\%\) & \(\pm 1.5\%\) & IC \\ \cline{2-10} & FLAME (2022) [57] & Backdoor Attacks & non-IID & \(<50\%\) & YES & NO & \(0\%\) & \(\pm 0.5\%\) & \begin{tabular}{c} IoTD \\ IC/NWP \\ \end{tabular} \\ \cline{2-10} & DeepSight (2022) [31] & Backdoor Attacks & non-IID & \(\leq 45\%\) & YES & YES & \(0\%\) & \(\pm 0.5\%\) &
\begin{tabular}{c} IoTD \\ IC/NWP \\ \end{tabular} \\ \cline{2-10} & VAE (2020) [80] & In-distribution Single-trigger & non-IID & \(\leq 30\%\) & NO & NO & – & – & IC/ SA \\ \cline{2-10} & FoolGold (2018) [26] & In-distribution & non-IID & – & YES & NO & \(0\%\) & – & IC \\ \cline{2-10} & AUROR (2016) [27] & In-distribution & IID & \(\leq 30\%\) & YES & NO & \(2\%\) & \(\leq 5\%\) & IC \\ \hline \multirow{6}{*}{In-AD} & CAE (2022) [95] & Gradient-replacement & non-IID & – & NO & NO & – & – & IC \\ \cline{2-10} & CRFL (2021) [56] & Distributed-trigger & non-IID & \(\leq 4\%\) & YES & NO & – & – & F& RBF/ IC \\ \cline{2-10} & BaFHe (2021) [86] & In-distribution & non-IID & – & YES & YES & – & – & IC \\ \cline{2-10} & RLR (2021) [34] & Distributed-trigger Single-trigger & non-IID/IID & 10\% & YES & NO & \(\leq 9\%\) & \(<5\%\) & IC \\ \cline{2-10} & DP (2020) [16] & Single-trigger & non-IID & \(\leq 5\%\) & YES & NO & – & – & IC/ NLP \\ \cline{2-10} & Matching Networks (2020) [93] & Single-trigger & IID & \(25\%(1/4)\) & YES & NO & \(\leq 20\%\) & \(+5\%\) & IC \\ \cline{2-10} & FL-WBC (2020) [84] & In-distribution & non-IID/IID & \(\leq 50\%\) & YES & NO & – & \(\leq 10\%\) & IC \\ \cline{2-10} & Weak DP (2019) [22] & Single-trigger & non-IID & \(3.33\%\) & YES & NO & – & – & IC \\ \hline Post-AD & KD Unlearning (2022) [28] & Single-trigger & IID & \(10\%(1/10)\) & YES & NO & \(0\%\) & \(\pm 1\%\) & IC \\ \cline{2-10} & Pruning Neurons (2020) [29] & Distributed-trigger & non-IID & \(\leq 10\%\) & YES & NO & \(13\%\) & \(<2\%\) & IC \\ \hline Pre-AD: Previous-aggregation defense & In-AD: In-aggregation defense & Post-AD: Post-aggregation defense & & & & & \\ NLP: Natural Language Processing & IoTD: IoT intrusion detection & IC: Image Classification & & & & & \\ SA: Sentiment Analysis & BAF: Banking and Finance & NWP: Next Word Prediction & IID: Independent and Identically Distributed & & & & \\ PMR: Poissoned Model Rate & ASR: Attack Success Rate & MTA: Main Task Accuracy & –: Not Available & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: A Comparison of the State-of-the-art Methods for Defending against Backdoor Attacks in FL
**Defensive Requirements.** Unlike ML, the orchestration server is not eligible to access the local training data, so the information that can be analyzed to defend against backdoor attacks is the local updates and their corresponding inference outputs.
- _Local Update Access:_ Apart from [80; 95], other defenses need to analyze all local model updates. This leads to an issue with computation overhead. Instead of examining entire model parameters, efficient methods such as last-layer parameter analysis can be utilized to circumvent this issue [31].
- _Model inference:_ To facilitate their defenses, [31; 86] need to consider inference results from local model updates. Although these strategies demonstrate efficiency in defending against backdoor attacks, this requires considerable computation costs. As a result, the remaining methods not requiring local model inferences are more relevant when the computation capacity of the central server is limited.
**Effectiveness.** Another issue with these defense methods is that they rely on too many assumptions about the data distribution, number of clients participating, and number of attackers. This makes it difficult to make a fair comparison between different approaches.
- _ASR:_ The works [30; 31] can mitigate ASR from 100% to 0% with a little change in main task accuracy, but they rely on a strong assumption about the number of attackers to make a distinction for malicious models. For example, [30; 31; 80; 84] proposed defense methods that required a large percentage of malicious clients (up to 50%) to be present in order to effectively detect and exclude them. This highlights the importance of understanding the specific threat model and the distribution of malicious clients in a given scenario. Therefore, it is uncertain how well these methods will perform in a realistic world.
- _MTA Change:_ One issue with existing defenses in FL is the degradation of performance on the primary task. For example, methods such as [27; 34; 93] result in a reduction of accuracy around 5%. This underlines the importance of ongoing research and development of strong defense mechanisms to guarantee accurate and trustworthy model results.
- _Application:_ Most applications of backdoor attacks have been implemented in IC tasks [89; 80; 26; 27; 95; 86; 16; 93; 84; 22; 28; 29], although some have been observed in NLP tasks as well [16; 31; 57]. It is crucial for researchers and practitioners to remain vigilant in exploring the potential of backdoor attacks in various domains and to develop effective defense mechanisms to mitigate their impact.
### Confrontation between Backdoor Attacks and Defenses
Adversaries and defenders are engaged in a never-ending battle. The conflict between them deepens our understanding of backdoor attacks. Attackers are always looking for ways to make poisoned attacks more covert, effective, and resistant to countermeasures. As shown in Table 5, most defense strategies focus on the scenarios of in-distribution backdoor attacks, in which the adversary simply changes the label of targeted inputs into his expected one. These poisoned samples can appear in the other benign participants' training data. Although defense is often designed against multiple attacks, many attack strategies have not been addressed such as noise-instance, coordinated-trigger, and partially poisoning backdoor attacks.
On the other hand, each countermeasure approach is often applied to a group of attack strategies. Particularly, pre-aggregation methods (e.g., Krum [33], FoolsGold [26], and AUROR [27])
seem to be efficient under semantic backdoor attacks. Furthermore, in-aggregation methods are primarily utilized under artificial backdoor attacks, specifically single-trigger attacks. However, the more sophisticated attack strategies such as distributed triggers and coordinated triggers, have not been evaluated under the presence of these defenses.
## 5 Challenges and Future Research Directions
In this section, we first pinpoint aspects for designing a more efficient and robust backdoor attack. Then, we discuss existing disadvantages and corresponding potential research directions for developing backdoor defenses from multi-perspectives. The summary of future research directions is presented in Figure 9.
### Future Research Directions: Backdoor Attacks
**Backdoor Attacks with More Practical Assumptions.** Most of the existing backdoor attacks in FL rely on different assumptions, including assumptions about the percentages of compromised clients, the total number of FL clients, and the global distribution of training data. For instance, state-of-the-art attacks [17; 21] use benign samples drawn from global distribution to manipulate
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Attack Strategies & \multicolumn{2}{c}{Applicable Defenses} \\ \hline \multirow{5}{*}{Data poisoning} & \multirow{3}{*}{Semantic backdoor} & Out-of-distribution & DeepSight [31], FLAME [57], Kurn [33] \\ & & In-distribution & FoolSGold [26], VAE [80], AUROR [27], \\ & & & Clustered FL [77], PCA [71], BaFite [86], FL-WBC [84] \\ & & Noise-instance & N/A \\ \cline{2-4} & \multirow{3}{*}{Artificial backdoor} & Single-trigger & DP [16], RLR [34], Matching Network [93], Pruning Neurons [29] \\ & & Distributed-trigger & CRFL [56], FLAME [57], RLR [34], DeepSight [31], \\ & & & Pruning Neurons [29], FLDetector [89] \\ & & Coordinated-trigger & N/A \\ \hline \multirow{5}{*}{Model poisoning} & \multirow{3}{*}{Fully poisoning} & \multirow{3}{*}{Constrain based} & FLARE [92] \\ & & Gradient-replacement based & CAE [95] \\ \cline{1-1} \cline{2-4} & & Partially poisoning & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 5: Backdoor Attacks Strategies and Defense Methodologies in FL
Figure 9: Summary of Future Research Directions.
the poisoning dataset. Other attacks [17; 26; 62] require continuous participation of compromised clients or a large ratio of malicious clients. This assumption is challenged by [73] and has shown to be unpractical. Therefore, it would be interesting to explore the possibility of designing attack strategies that require limited assumptions and can be applied in various scenarios, such as in large-scale FL systems with limited knowledge about system operations. To fulfill this purpose, the adversary can exploit leakage information via shared global model [62; 72; 99] to mimic an auxiliary training dataset align with global data distribution to strengthen the backdoor impact of limited-capability adversaries. Besides, when an adversary controls only a small fraction of participants (i.e., less than \(0.1\%\)), it can consider designing a single-shot attack and prolong the backdoor durability.
**Stealthiness and Durability of Backdoor Attacks.** Most current attacks have not considered stealth or enhanced the stealth by constraining poisoned model updates submitted to the aggregation server. Still, they have not taken the ocular stealth of the attack into account. In the early studies, the trigger is apparent, resulting in poor visual quality, and it can be easily removed by humans [100]. The stealth of the backdoor attacks in FL can be improved from two perspectives. Instead of inserting a small pattern into the original inputs, the trigger should be imperceptible to avoid inspection during the inference procedure. To do this, a learn-able trigger generated by optimizing objective functions [59; 101] or transformation models [62; 102] is visually indistinguishable from benign samples. From the model poisoning perspective, the naive scaling-based methods are not stealthy and robust against existing defenses [30; 56]. This issue can be addressed by partially poisoning attacks that leverage redundant space within a neural network architecture to covertly implant a backdoor, while still allowing the attacker to scale up the poisoned updates [19; 61]. In addition, the durability of the backdoor should be intensely considered to avoid the backdoor dilution phenomenon. A robust backdoor attack strategy should well balance stealthiness and durability.
**Backdoor Attacks in Physical World.** Current attack strategies typically use an artificial procedure to insert a trigger for a backdoor, such as a small pattern in images during training and testing. However, these attacks can be affected by the loss of the trigger, such as when a camera captures an image from a display or printed photo. The effectiveness of such attacks depends on the location and appearance of the trigger, as discussed in [103]. Therefore, it is important to evaluate current backdoor threats in physical FL systems. A hybrid attack that works with both digital and physical triggers may be a promising approach for implementing effective backdoor attacks in FL. One of the feasible methods is generating a backdoor dataset with the physical object as a trigger and applying physical transformations to enhance the robustness of the injected backdoor in real-world scenarios [103; 104].
**Imperceptible Backdoor Attacks.** The practice of inserting hidden information into images in a way that is imperceptible to the human eye for FL is known as steganography [105; 106; 107; 108]. This involves concealing a message, image, or file within another message, image, or file without affecting its visible appearance. In the context of FL, steganography could be used to insert data or metadata into images for training ML models while preserving the privacy of sensitive data or transferring it between organizations without revealing its content. Potential approaches to image steganography include applying transformations that preserve visual appearance while encoding additional information, generating adversarial examples with hidden data using machine
learning, and developing algorithms for detecting and decoding hidden information in images. The limits of what can be encoded in images while maintaining their visual quality should also be investigated.
**Commencement of Backdoor Attacks in Other Domains and Architectures.** Backdoor attacks in FL have been mostly studied for image classification [18; 22; 59; 64] and next word prediction [21] tasks. However, existing schemes may not be directly transferable to other domains due to differences in sample nature. Customized strategies may be needed to conduct backdoor attacks in specialized domains such as smart cities [109] or IoT intrusion systems [23]. Some applications of FL, such as environmental monitoring [110] and reducing network congestion [111], lack study on backdoor attacks and require further investigation. HFL is the most attractive land for implanting backdoor attacks since local datasets have the same feature space yet are different from each other and the adversary can easily manipulate the labels for his own training samples. Since VFL and FTL have experienced great development in the industry [112; 113; 114], the presence of a backdoor attack in these scenarios will cause significant concern.
### Potential Research Directions on Defenses
**Differential Privacy in FL.** DP is a framework that protects the privacy of individuals in a dataset by adding noise to the data before it is released or used for analysis. It has been proposed for use in FL [16; 22] but has several limitations. DP requires a large number of clients to be effective, as the noise level needs to be high enough to mask the presence or absence of any individual client's data. It may also degrade model performance and may not prevent all types of privacy attacks, such as attribute inference and model inversion. Additionally, DP may not be suitable for all FL scenarios depending on the data being used and the client's privacy requirements. It is important to consider these limitations and trade-offs when using DP in FL settings.
**Rethinking Current Defenses in FL: Limitations and Uncertainties.** The current defenses in FL have limitations and uncertainties that must be addressed. Firstly, secure aggregation techniques [115], such as homomorphic encryption and secret sharing, are used in FL to combine model updates from multiple clients while preserving privacy. However, secure aggregation can also make FL systems vulnerable to poisoning attacks as individual updates cannot be inspected. Secondly, the effectiveness of adversarial training in non-IID settings remains uncertain, requiring further research. Finally, the field of FL, including VFL and FTL, is still in its early stages and requires further investigation to fully understand potential backdoor attacks and how to effectively defend against them. To mitigate these concerns, it is important to employ multiple layers of defense mechanisms and continuously monitor and audit the FL process to detect any malicious activity.
**Backdoor Defenses in Various AI-domains.** Backdoor attacks are generally easier to detect and defend against in the CV domain than in the NLP domain, according to empirical studies. For example, Wan et al.[116] found that ASR using the FedAvg algorithm was less than 75% effective with most defenses when one of ten clients was malicious in CV tasks. However, Yoo et al.[117] found that ASR was easily more than 95% effective on most attacks with most defenses when one of ten clients was malicious in NLP tasks. One reason for this difference may be that detecting NLP backdoors is more difficult. There is increasing interest in using FL in automatic speech recognition [118; 119; 120; 121], but the risk of backdoor attacks is a concern that needs to be
addressed. Future research may focus on developing effective strategies for defending against and detecting backdoor attacks in the automatic speech recognition domain.
**Fairness and Privacy Violation.** It is important that the application of a defense mechanism in an FL setting does not impact the fairness among the participating clients. For example, efforts to improve the robustness of FL systems may result in the unfair treatment of honest clients, as their updates may be rejected from the aggregation process if they lie far from the distribution of other updates, as discussed in [17]. This fact raises the question of the compensation between the fairness and robustness of FL systems in the presence of backdoor defenses. Additionally, some defense mechanisms rely on inspecting model updates to study the training data, which can increase the risk of membership inference and model inversion attacks [31; 57]. Therefore, it is important to carefully consider whether a specific defense mechanism is appropriate and to explore more secure defense strategies.
**Incorporating Interpretable Techniques into FL Models.** Interpretable techniques have been widely studied in the context of single-party ML models, such as decision trees, random forests, gradient-boosted trees, and deep neural networks [122; 123; 124; 125]. Most of these techniques have been developed to provide transparency into the decision-making processes of these models, with the goal of enhancing their interpretability and usability. However, their application to FL is relatively new. By providing transparency into the decision-making processes of FL models, interpretability techniques can help detect malicious clients and prevent backdoor attacks. For instance, studies show that saliency maps can reveal hidden triggers in single-party models and demonstrate the effectiveness of different defense methods against backdoors [126]. Similarly, visualization techniques can help to identify regions of the model's input space that are particularly susceptible to backdoor attacks and provide a way to test and validate the robustness of FL models.
**Computational Consumption of the Orchestration Server.** The deployment of defense mechanisms in FL requires significant computational resources, and it's crucial to ensure that it doesn't exceed the capacity of the orchestration server. Existing defense mechanisms often overlook the limitation of computational resources, leading to time delays and energy consumption. In future research, it's important to minimize resource consumption while deploying defense mechanisms in FL. For instance, for FL systems with a small number of clients, the local models can be verified one by one, but when the number of clients increases, this approach becomes impractical and consumes vast amounts of time and energy. An alternative solution is to deploy FL with multiple servers, distributing the task of verifying updates among them, which reduces resource consumption but brings new challenges such as communication costs and privacy leakage. Another promising solution is combining FL with blockchain technology, as proposed in [127], where clients upload updates to verifiers who select benign updates by voting and then aggregate and write the selected updates to blocks through the blockchain network.
### Discussion
**Exploring the Practical Benefits of Backdoor Attacks in Federated Unlearning.** We often consider backdoor attacks as a great threat in FL while ignoring its potential advantages. Indeed, a backdoor attack demonstrated its sake under the unlearning scenario, which is a technique in FL [28; 128; 129] focusing on removing or revoking access to data, participants, or parts of the
model, with the goal of improving the integrity and accuracy of the model. Backdoor triggers are utilized as an evaluation tool to assess the effectiveness of unlearning methods [130]. The client, who wants to opt out of the federation, uses a dataset that contains a fraction of samples with inserted backdoor triggers, making the global FL model vulnerable to the backdoor trigger. The goal of the unlearning process is to produce a model that decreases accuracy on samples with backdoor triggers while preserving good performance on clean samples. Future research on unlearning needs to focus more on investigating the impact of backdoor attack methods on model privacy and security.
**Investigating Various Backdoor Injection Strategies in Multi-Group FL.** Existing works often consider the homogeneous backdoor attack, in which the malicious participants have a common attack objective [17; 18; 21; 31]. This approach is not always relevant in the physical world. This raises a great concern about backdoor effects caused by multiple adversaries with different backdoor targets. For example, consider a model that recognizes two-digit numbers. It is possible to inject two new backdoor tasks into the model: one that sums up the digits and another one that multiplies them. Then, various endeavors from distinct backdoor tasks can be complementary or detrimental to one another. Moreover, the appearance of multiple backdoor tasks may have varying effects on the performance of the FL model, and it is important for future research to uncover these effects in order to improve the security and privacy of FL models. This research can aid in the development of better methods for detecting and mitigating backdoor attacks in FL, thus improving the overall integrity and robustness of the model.
**Integrating Multiple Defense Mechanisms in FL.** Previous works [30; 31] used a combination of methods in the Pre-AD and In-AD phases to mitigate backdoor attacks. These methods involve two layers: the first layer detects and excludes models that contain a well-trained backdoor, while the second layer uses a different approach in the In-AD phase to mitigate the attack. In future research, a combination of methods from different defense phases, such as Pre-AD and Post-AD, In-AD and Post-AD, or Pre-AD, In-AD, and Post-AD, can be studied to further improve the defense against backdoor attacks in FL.
## 6 Conclusion
In summary, backdoor attacks in FL pose a significant threat to the security and privacy of FL systems. These attacks can be triggered in various ways, including artificial and semantic triggers, and can be launched by a single client or a group of clients. To defend against these attacks, various approaches have been proposed, including pre-aggregation defenses, in-aggregation defenses, and post-aggregation defenses. Each of these approaches has its own advantages and limitations, and their effectiveness depends on the specific characteristics of the attack. Moreover, the robustness of these defenses in the face of various types of attacks, particularly in non-IID scenarios, remains an open research question. In the future, it will be important to continue developing more robust defense techniques that are effective against semantic backdoor attacks, improving the efficiency of defense techniques, studying the effectiveness of defenses under realistic attack scenarios, examining the impact of data heterogeneity on backdoor attacks and defenses, and investigating the impact of system-level factors on backdoor attacks and defenses. By addressing these research areas, it will be possible to make progress in understanding and addressing the risks of backdoor
attacks in federated learning systems and to develop more secure and effective defense strategies against a wide range of attacks. It is also important to consider the potential for physical backdoor attacks and to explore potential defenses against these types of attacks. In addition, research on the effectiveness of backdoor defenses in specific AI domains, such as automatic speech recognition, could be valuable in developing targeted and effective protection mechanisms.
## Credit Authorship Contribution Statement
**Thuy Dung Nguyen:** Methodology, Visualization, Writing - Original Draft, Writing - Review & Editing, **Minh Tuan Nguyen:** Methodology, Visualization, Writing - Original Draft, Writing - Review & Editing, **Phi Le Nguyen:** Writing - Review & Editing, **Huy Hieu Pham:** Writing - Review & Editing, **Khoa Doan:** Writing - Review & Editing, **Kok-Seng Wong:** Conceptualization, Project administration, Supervision, Writing - Review & Editing.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgment
This work was supported by VinUni-Illinois Smart Health Center (VISHC), VinUniversity.
|
2306.09598 | Design of a Teleoperated Robotic Bronchoscopy System for Peripheral
Pulmonary Lesion Biopsy | Bronchoscopy with transbronchial biopsy is a minimally invasive and effective
method for early lung cancer intervention. Robot-assisted bronchoscopy offers
improved precision, spatial flexibility, and reduced risk of cross-infection.
This paper introduces a novel teleoperated robotic bronchoscopy system and a
three-stage procedure designed for robot-assisted bronchoscopy. The robotic
mechanism enables a clinical practice similar to traditional bronchoscopy,
augmented by the control of a novel variable stiffness catheter for tissue
sampling. A rapid prototype of the robotic system has been fully developed and
validated through in-vivo experiments. The results demonstrate the potential of
the proposed robotic bronchoscopy system and variable stiffness catheter in
enhancing accuracy and safety during bronchoscopy procedures. | Xing-Yu Chen, Xiaohui Xiong, Xuemiao Wang, Peng Li, Shimei Wang, Toluwanimi Akinyemi, Wenke Duan, Wenjing Du, Olatunji Omisore, Lei Wang | 2023-06-16T02:56:04Z | http://arxiv.org/abs/2306.09598v3 | # Design of a Teleoperated Robotic Bronchoscopy System for Peripheral Pulmonary Lesion Biopsy
###### Abstract
Bronchoscopy with transbronchial biopsy is a minimally invasive and effective method for early lung cancer intervention. Robot assisted bronchoscopy offers improved precision, spatial flexibility, and reduced risk of cross-infection. This paper introduces a novel teleoperated robotic bronchoscopy system and a three-stage procedure derived for robot-assisted bronchoscopy. The robotic mechanism allows for translation, rotation, bending of a bronchoscope, and as well as the control of a novel variable stiffness catheter for tissue sampling. Rapid prototype of the robotic system has been fully developed and characterized with in-slice and in-vivo animal experiments in this study. We conducted the studies to evaluate the robot's design concept and feasibility of usage for transbronchial biopsy. The results demonstrate the potential of the proposed robotic bronchoscopy system in enhancing accuracy and safety during bronchoscopy procedures.
Robotic bronchoscopy, Robot assisted surgery, Teleoperation
## I Introduction
Lung cancer is the third most common cancer, and it has a significantly high morbidity and mortality rates worldwide. This medical condition has been consistently related to an annual death toll of 1.9 million people in the last five years [1]. This type of cancer usually start with the development of repellulmonary nodules in the lungs. Thus, early detection and treatment of pulmonary nodules are sought for effective clinical intervention. This is leading to complete cures for many early-stage lung cancer patients thus reducing the death rates of lung cancer [2]. The conventional methods of carrying out lung biopsy involve the use of percutaneous or bronchoscopic needles [3]. Typically, percutaneous needle biopsy involves inserting a needle into the lung lesion through the chest skin to obtain biopsy tissue. Clinical practice and evaluation shows that percutaneous needle biopsy carries a high risk of complications such as pneumothorax and significant bleeding [4]. Consequently, transbronchial biopsy an alternative as it is done through minimal invasion and possess relatively lower risks [5].
Robotic technology with improved precision, spatial flexibility, and dexterity has the potential to enhance minimal invasive surgeries [6]. Robot-assisted minimal invasive surgeries (RAMIS) enable faster, safer, and more convenient navigation of surgical tools for intraluminal, endoluminal and transluminal interventions without the need for multiple or wide incisions [7]. The absence of large incisions during RAMIS offers numerous advantages such as improved cosmology due to small incision or no visible scarring, reduced postoperative pain, and avoidance of general anesthesia for patients [8]. Similarly, doctors are able to perform different interventions without been exposed to the operational risks [9]. Recently, there have been notable advancements in the development of robot-assisted bronchoscopy systems. A prominent example is the Monarch(tm) platform (Auris Health, Redwood City, USA). The system incorporates inner bronchoscope and outer sheath equipped with electromagnetic (EM) for teleoperated navigation guidance. Ion(tm) Endoluminal System (Intuitive Surgical, Sunnyvale, CA, USA) is another iconic bronchoscopy platform used for the intervention of periulomomary nodules. Rather than using EM, the system employs shape-sensing technology for tool navigation. Typically, shape-sensing Bragg grating fiber and video scope are utilized for navigation guidance during interventions. A common drawback is the lack of direct visualization system during tissue sampling with these systems [10]. The recently FDA approved Galaxy(tm) system developed by Noah Medical (San Carlos, USA) offers real-time navigation and lesion updates through tomosynthesis during lung interventions. It is worthnoting that the latter is augmented with a readily available C-arm for fluoroscopy; thus, exposing surgeons to X-ray radiation which are capable of causing surgeons head and neck cancer [11][12].
In addition to commercially available robotic bronchoscopy systems, there have been notable academic efforts in the development of robots for transbronchial lung biopsy. Swaney et al. proposed a robotic system utilizing concentric tube and a steerable needle with magnetic tracking, enabling precise movement through the bronchial wall [13]. Similarly, Amack et al. designed a concentric tubular robot with a compact, modular, and multi-stage mechanism to deploy a steerable needle through a standard flexible bronchoscope [14]. Duan et al. developed a bronchoscope robot with a small end
effector composed of a nickel-titanium tube, achieving three degrees of freedom motion [15]. These research efforts have predominantly focused on innovating mechanical structures with focus on novel designs for continuum robots or flexible robots. The transbronchial robots adopted unique design typologies, and offer range of functionalities that contribute to the advancement RAMIS in lung cancer interventions. Taken an alternative path, we have designed a robotic system for intraluminal bronchoscopy. This paper is aimed to present the design details of the robotic system and characterization of its functionalities. Typically, the platform integrates a new robotics mechanism equipped with various biopsy tools which can be used to coordinate a three-stage bronchoscopy routine. The procedure follows initial insertion of a bronchoscope robot, dynamic adjustment of variable stiffness catheters (VSAC catheters), and tissue sampling. To ensure accurate positioning, an endoscopic video and EM tracking fused navigation system is employed. The feasibility and practicality of the proposed robotic system are assessed through simulations and in-vivo experiments.
## II Material and Methods
The system consists of a teleoperated surgical robot designed for trans-respiratory diagnosis, along with a corresponding master-slave control system.
### _System Overview_
Fig. 1 illustrates the proposed bronchoscopy robotic system. Our architecture follows mounting the actual tool manipulator on a passive robotic arm to ensure stability during teleoperation by surgeons. For a typical bronchoscopy intervention, the manipulator will be positioned adjacent to a patient with the EM tracking system located next to them. Thus, the robotic system can the simulate surgeons' realistic operating mode during the intervention. During the procedure, the surgeon holds a tablet console to control a flexible visual bronchoscope, inserts a flexible tube into the patient's mouth, and utilizes the robot for advancements, rotations, and adjustment of the bending angle of the front end of the flexible bronchoscope. The robot also enables control of the biopsy forceps for feeding and tissue sampling. Surgical tools such as biopsy forceps, biopsy needles, and cytologic brushes reach the target position through a 2.6 mm-diameter working channel.
### _Bronchoscope Manipulator_
The robotic system incorporates various components as shown in Fig. 2, including a flexible bronchoscope, bronchial biopsy instruments, VS-Catteters, and several tablets functioning as master consoles (Microsoft Surface Pro 8). The instructions from surgeons are wirelessly transmitted to robot controller through TCP/IP protocol. Subsequently, the robot responds to surgeons' commands by navigating the flexible bronchoscope along a given trajectory. In addition, the robot utilizes multimodal navigation, combining EM tracking and visual navigation, to accurately locate the position of the bronchoscope in real time. Employing a multi-operator strategy, the robot is controlled through scheduling arrangement and weight distribution, which enables mentor surgeons and trainee surgeons observe the same surgical site and collaboratively control the surgical instruments simulatanously.
Based on analysis of the flexible bronchoscopy tools, the mechanism is designed to replicate the conventional way of steering bronchoscope and biopsy forceps during manual intervention. The design of the embedded system is based on our advancement in endovascular interventional surgical robot [16]. Following the structure of bronchoscopy robot (in Fig. 2), we installed a Nvidia(r) Jetson AGX Orin(tm) Developer Kit, which is programmed for controlling the electric slider (Panasonic(r), Osaka, Japan), rotary motor (Orientalmotor(r), Tokyo, JP), motor drivers, gear sets, and biopsy forceps. Unlike existing robotic bronchoscopy systems, our newly proposed system integrates a commercial bronchoscope in operating room to improve cost efficiency and reduce usage complexity.
The bronchoscope (UEWorld(tm), Xianju, China) used in our work can be replaced easily by other flexible endoscopes. This videoscope has an external diameter of 5.2 mm and working channel of 2.6 mm, with the ability of \(160^{\circ}\) upward and \(130^{\circ}\)
Fig. 1: Illustration of the bronchoscopy system mounted on a robotic arm, a trachea guide apparatus is inserted into patient’s mouth to stabilize interventional direction.
Fig. 2: Structure of the bronchoscope robot.
downward bending angle. Commercial biopsy forceps can be installed in the forceps container. Several motors are mounted with friction wheel to deliver biopsy forceps and VS-Catheters, as shown in Fig. 3.
### _Multimodal Navigation System_
We have installed 6-DoF EM sensors to enable real-time navigation, tracking, and three-dimensional localization of the surgical instruments. For this purpose, an EM tracking system (NDI Aurora, Waterloo, Canada) with a field generator that produces EM field with known geometry, is integrated with the robotic system. This provides information about the position and orientation (roll, pitch, and yaw) of the EM sensor. To integrate endoscopic video and EM tracking information, a multimodal navigation system is employed, utilizing the open-source software CustusX [17]. The software includes a toolbox of navigation features, image-processing algorithms, and connections to external hardware for image-guided therapy. By combining images, tracked surgical instruments, and computer display, a comprehensive navigation system is created, enabling real-time identification of the direction and position of the tip of the bronchoscope.
### _Variable Stiffness Catheter_
A catheter with variable stiffness is designed and utilized to facilitate the extension of surgical instruments through the working channel of an endoscope, thereby reducing the potential risk of unintentional tissue penetration [18]. The catheter design was aimed to maintain optimal flexibility for seamless traversal of tortuous intra-luminal pathways in the human body, while also ensuring sufficient rigidity to provide adequate support during tissue biopsy. Specifically, we developed novel VS-C catheter which is composed of low melting point alloy (LMPA) to enhance the catheter's flexibility in the context of RAMIS and expand the accessible area for bronchoscopic biopsy forceps. Unlike the bronchoscope, the VS-Ccatheter can be inserted into narrower bronchi and offers more flexible control with dynamic stiffness. The VS-Catheter consists of a hollow flexible inner tube and an outer tube incorporating an interlayer infused with LMPA (Field's metal) at a temperature of 47\({}^{\circ}\)C. Additionally, a heat-generating resistance wire is helically wound around the flexible inner tube. This wire serves the purpose of melting the Field's metal, thereby providing variable stiffness functionality to the catheter. The resistance wire and the Field's metal together constitute a stiffness measurement circuit, and a controller-based methodology is employed to achieve continuous adjustability of the catheter's stiffness.
The structural design of the catheter exhibits simplicity, ease of manufacture, and cost efficiency. It utilizes Field's metal phase change technology to achieve controllable stiffness. Importantly, the cooling process of the catheter occurs naturally at the physiological temperature of the human body, eliminating the need for external stimuli and ensuring safety and reliability. The catheter incorporates two circuits: the heating circuit and the measurement circuit, enabling real-time adjustment of the catheter's stiffness to accommodate the intricate anatomical environment, thereby enhancing surgical safety.
## III Novel Robotic Bronchoscopy Procedures
The complete procedures of robot-assisted bronchoscopy involve several steps. These are acquisition of imaging data, segmentation, registration, and 3D construction of the bronchus, as shown in Fig. 5. The process begins with acquiring imaging
Fig. 4: Structure of proposed VS-Catheter, consist of hollow flexible tube fulfilled with LMPA and heating wire.
Fig. 3: Robotic Bronchoscopy System. (A) Robot and bronchus phantom; (B) Biopsy forceps manipulator; (C) Bronchoscopic video. (D) EM-Sensor based navigation system; (E) Embedded control system, electrical devices and motors; (F) Biopsy forceps and VS-Catheters delivery device.
data, typically through CT or MRI scans, to create a 3D map of the patient's airways. This map is integrated into the system to provide a visual representation of the surgical workspace. After the registration is complete, the physician utilizes the 3D map to plan bronchoscopy procedure. After setting up the robotic manipulator on the passive arm and calibrating it, the physician can remotely control the robotic system with the multimodal navigation system. The surgeon identifies location of suspicious areas of tumors or lesions, and determines the optimal route in bronchus to reach them. Table I presents essential mechanical parameters of the bronchoscopic tools.
The application of the VS-Ccatheter involves a three-stage bronchoscopy surgical procedure, as shown in Fig. 6. In the first stage, the clinician maneuvers the robot to navigate the bronchoscope to a target site. The navigation process involves motions like translation, rotation, and tip bending. The VS-catheter is heated (under safe temperature 53degC) to decrease its stiffness. For stage two, the softened VS-catheter is inserted via the bronchoscope's working channel. Its dynamic stiffness can be adjusted based on factors like heartbeat and respiration. Compared to the bronchoscope, the VS-catheter can be inserted into thinner bronchi and provides more flexible control. Once the softened catheter reaches the precise location of the targeted lesion, the power supply to the heating resistance wire is discontinued, allowing the low-melting-point alloy to cool naturally and restore the catheter's stiffness. Lastly, tissue sampling is performed using the biopsy forceps manipulators in the third stage. The physician can thereby examine patient's airways, search for abnormalities or suspicious areas, and perform tissue biopsy for further testing. Surgeons have the capability to control these three-stage processes using tablets and tele-operate the robot. This emphasizes the importance of robotic surgery, as it enables surgeons to manipulate multiple medical tools without the need for shift changes, ensuring stability within the trachea.
## IV Evaluation and Experiment
We conducted in-silico and in-vivo experiments to demosntrate the usability of the proposed robotic system for bronchoscopy procedures. The robot's model was imported from Solidworks into Adams View 2020 to analyze its mechanical transmission via dynamic simulation. The results in Fig. 7(A) depict the change in velocity and acceleration over time during axial displacement of the bronchoscope. The velocity change follows a sinusoidal function, initially accelerating and then decelerating. The peak velocity reaches 30 \(mm/s\) and the acceleration fluctuates within the range of 124.7 \(mm/s^{2}\). This decreasing-then-increasing pattern allows the linear sliding table to smoothly reach the desired target position without being constrained by driving speed. Fig. 7(B) presents discrete diagrams illustrating the speed changes of the worm gear under different driving speeds. It is evident that radial rotation is more stable at low speeds, while speed fluctuations become more pronounced at high speeds. This observation aligns with the actual operation of the worm during slow biopsy surgery. The driving speed (red circle) is maintained at approximately 1000 \(d/s\). Fig. 7(C) and (D) simulate the workspace and distribution probability of the end-effector using Cosserat theory in Matlab, respectively. These simulations provide insights into the range and likelihood of end-effector positions during the surgical procedure. Proper cooperation with the mechanical arm ensures that, in theory, the end effector can access all positions within the bronchi.
Similarly, in-vivo animal experiment was carried out in a swine animal with a weight of 30 kg. The study was approved by Institutional Review Board and Ethics Committee of Shenzhen Institutes of Advanced Technology (AAS 201205P). As show in Figure 8, the robotic system place next the animal as described in II-A. The system was controlled by two operators using tablets with different priorities, one with full authority,
Fig. 5: Proposed three-stage bronchoscopy surgical procedures, with initial insertion, dynamic adjustment, and tissue sampling.
Fig. 6: Three-stage bronchoscopy surgical procedure composed of robotic bronchoscope, VS-Ccatheter, and tissue sampling by biopsy forceps.
another only with biopsy forceps control authority. Two EM sensors were attached to the swine's fore breast and the tip of the bronchoscope, respectively. We successfully performed a biopsy procedure in which small tissue was clamped and removed from the tertiary bronchus of the swine. The stiffness of the catheter was adjusted by energizing the resistance wire, providing adaptability based on the specific requirements of the procedure. The usability and effectiveness of the designed bronchoscopy surgical robot were demonstrated. When taking a biopsy sample during the experiment, the presence of the VS-Ccatheter makes the biopsy process more stable, and the implementation of the VS-Ccatheter enabled more flexible control of the biopsy forceps within the bronchus.
## V Conclusion
This paper introduces a novel teleoperated robotic bronchoscopy system and three-stage bronchoscopy procedure with VS-CCatheters. The feasibility of the proposed robotic system is explored through kinematic simulations and analysis of the reachable workspace. The experimental study has validated the design concept and the feasibility of proposed robotic system. However, it is important to note that further validation is required through comprehensive preclinical studies and additional in-vivo tests involving surgeons. These subsequent evaluations will provide more insights into the practicality and efficacy of the system in real-life surgical scenarios.
|
2310.14549 | Multimodal Graph Learning for Modeling Emerging Pandemics with Big Data | Accurate forecasting and analysis of emerging pandemics play a crucial role
in effective public health management and decision-making. Traditional
approaches primarily rely on epidemiological data, overlooking other valuable
sources of information that could act as sensors or indicators of pandemic
patterns. In this paper, we propose a novel framework called MGL4MEP that
integrates temporal graph neural networks and multi-modal data for learning and
forecasting. We incorporate big data sources, including social media content,
by utilizing specific pre-trained language models and discovering the
underlying graph structure among users. This integration provides rich
indicators of pandemic dynamics through learning with temporal graph neural
networks. Extensive experiments demonstrate the effectiveness of our framework
in pandemic forecasting and analysis, outperforming baseline methods across
different areas, pandemic situations, and prediction horizons. The fusion of
temporal graph learning and multi-modal data enables a comprehensive
understanding of the pandemic landscape with less time lag, cheap cost, and
more potential information indicators. | Khanh-Tung Tran, Truong Son Hy, Lili Jiang, Xuan-Son Vu | 2023-10-23T04:05:19Z | http://arxiv.org/abs/2310.14549v1 | # Multimodal Graph Learning for Modeling Emerging Pandemics with Big Data
###### Abstract
Accurate forecasting and analysis of emerging pandemics play a crucial role in effective public health management and decision-making. Traditional approaches primarily rely on epidemiological data, overlooking other valuable sources of information that could act as sensors or indicators of pandemic patterns. In this paper, we propose a novel framework called MGLAMEP that integrates temporal graph neural networks and multi-modal data for learning and forecasting. We incorporate big data sources, including social media content, by utilizing specific pre-trained language models and discovering the underlying graph structure among users. This integration provides rich indicators of pandemic dynamics through learning with temporal graph neural networks. Extensive experiments demonstrate the effectiveness of our framework in pandemic forecasting and analysis, outperforming baseline methods across different areas, pandemic situations, and prediction horizons. The fusion of temporal graph learning and multi-modal data enables a comprehensive understanding of the pandemic landscape with less time lag, cheap cost, and more potential information indicators.
## Introduction
Pandemics are global outbreaks of infectious diseases that affect many people across continents. The COVID-19 pandemic is one of the most significant pandemics of our time, impacting millions of individuals worldwide and causing lasting effects on our society. In order to _combat_ pandemics, it is crucial to develop efficient solutions that facilitate the comprehension of their transmission and containment. This requires tracking and evaluating the evolution of pandemics through efficient monitoring and analysis of online resources that provide rich information, reflecting public knowledge and perceptions in a timely manner. For instance, the volumes of social media interests can serve as early indicators of COVID-19 waves [1, 2], and users' content can unveil diverse perspectives on regulations, such as quarantine measures or vaccination strategies. Understanding these signals can help policymakers to combat pandemics by recognizing trends in their spread and impact on the population, as well as the efficacy of current countermeasures.
Traditional pandemic monitoring involves tracking hospital admissions, laboratory testing, and death rates, but can be expensive and lag in providing real-time disease spread updates. Compartmental models like SIR [3] and statistical analyses such as ARIMA [4] and Prophet [6] use past data for predictions, are common approaches. However, these statistical models rely on assumptions and might lack data to precisely estimate factors like reproduction number in pandemic planning and forecasting.
Time-series forecasting with deep learning is one of the most effective methods for tracking pandemics' evolution. Because of their data-driven learning process, deep learning-based methods are highly accurate in analyzing time-series data to identify
Figure 1: Examples showing different stances on social media reacting to the pandemic and government regulations [6].
patterns and trends that can help predict future outbreaks through historical statistics. By using deep learning algorithms, we can analyze large amounts of data quickly and accurately, making it easier to identify patterns and trends that might be missed by other methods. Recent works leveraged deep learning-based methods to learn statistics from earlier time stamps as prediction to forecast the COVID-19 pandemic incidence, achieving better performance comparing to traditional methods [7].
One of the limitations of previous works in pandemic forecasting is that they frequently rely entirely on epidemiological data and ignore other information that might act as sensors or indicators of the pandemic's patterns and evolution. Data from search engines, for example, can be used to monitor how individuals are looking for information about pandemics [8, 9]. More crucially, social media data can be utilized to monitor how people are reacting to and feeling about a pandemic [10, 11]. Although previous research has examined the connection between social media usage and pandemic trends [12], there has been little use of deep learning techniques to predict and track the spread of the epidemic. We can gain a more complete view of how an epidemic is evolving and how effective various treatments might be by including external knowledge from social media into pandemic forecasting models. For instance, by monitoring social media data, we may pinpoint public health campaigns to regions where people are most worried about a pandemic. Fig. 1 illustrates our motivated examples, which show different stances of social media users on the COVID-19 pandemic and on government regulations.
During pandemics, social media has emerged as a key source of information, offering real-time updates on the spread of diseases and people's responses to them. Therefore, we investigate how pandemic tracking and analysis using deep learning algorithms can benefit from external knowledge from multi-modality, including social media and government regulations. More specifically, we construct graph-structured data from social media, treating each user as a node representing the current epidemic status. We dynamically capture interactions between users using temporal graph learning. Graph learning, particularly Graph Neural Networks (GNNs) [13, 14], is an important branch of machine learning that deals with learning from and representing a variety of real-world relational data, including citation networks, social networks, knowledge graphs, etc. Incorporating graph learning techniques and graph-structured representations offers a promising approach to overcome limitations of previous works in pandemic forecasting, where graphs can capture the structural and semantic information of the pandemic domain [15, 16, 17].
In this work, we introduce MGL4MEP, a neural framework for forecasting and analyzing developing pandemics using big data sources and deep learning methods, including graph neural networks. We utilize the extremely recent COVID-19 pandemic and their effects on multiple areas as a case study. In order to trace and predict the evolution of the pandemic, we investigate the relationship between the pandemic risk factors and all other relevant data sources such as social media. Our framework will support many end users like politicians, policy makers, and general population for reference by providing complementary analysis and forecast information, leading to more effective crisis preventive and reaction times. Our contributions in this work are summarized as follows:
* We propose a multi-modal neural framework named MGL4MEP for COVID-19 pandemic tracking and prediction,
* We extract and combine data from multiple sources, including social signals and government stringency signals as additional indicators to monitor pandemic trends and predict future evolution,
* We investigate the correlation and impacts of these multi-modal data on pandemic forecasting using deep learning and graph learning methods,
* We conduct extensive experiments on multiple areas affected by the pandemics to show the usefulness and effectiveness of our proposed framework.
Source codes of our framework and reproducible baselines are made publicly available at [https://github.com/KhanhTungTran/MGL4MEP](https://github.com/KhanhTungTran/MGL4MEP) for future research and benchmarking purposes.
## Results
### Baselines
We evaluate our proposed approach against several baselines that employ different techniques, including statistical, machine learning, and deep learning approaches.
* Numerical analysis: (i) AVG: average of the whole history are used to predict the future; (ii) AVG_WINDOW: average statistics of current prediction window are utilized to predict the future; and (iii) LAST DAY: the statistics of the current day are used as prediction.
* Machine learning-based models: (i) LIN_REG: Ordinary least squares Linear Regression fits a line to training samples for predicting future cases; (ii) GP_REG: Gaussian Process Regressor is a non-parametric regression model utilizes Gaussian processes; (iii) RAND_FOREST and (iv) XGBOOST: tree-based models.
* Statistical models: (i) ARIMA[4], a simple autoregressive moving average model leverage the entire history sequence as input; and (ii) PROPHET[5], similar to ARIMA but with strong seasonality characteristics.
* Deep learning models without graph topology: (i) A straightforward LSTM model, uses the sequence of the most recent \(d\) days as its input, (ii) SE\({}_{transformer}\) and (iii) SRE\({}_{transformer}\), baseline models using the popular transformer architecture for learning on extracted text embeddings from social media data. Self-attention is calculated between tokens of different users (extracted using the same pre-trained language model as MGL4MEP models). Then, the final embeddings are fused with LSTM for processing the time-series and making the final predictions.
* entity) from 1500 users for the default setting, and (iii) MGL4MEP\({}_{SRE}\) is our final model with input from three different modality, including statistics as in traditional model, and regulations and social media data.
### Implementation details
The proposed framework was implemented in PyTorch[18], and experiments were carried out on an NVIDIA 3090Ti GPU. We train the model for a maximum of 300 epochs with early stopping. All models are optimized with AdamW optimizer[19], \(10^{-3}\) initial learning rate, a batch size of 16, and input sequence length of 7. These hyperparameters are set empirically through grid search. All experiments are repeated 5 times with different seeds. The last 20% time steps of the dataset are used as hold-out test set. Details regarding the data collected and used are described in following sections. _Mean Absolute Error (MAE)_, _Root Mean Squared Error (RMSE)_, _Mean Absolute Percentage Error (MAPE)_, and _R squared (\(R^{2}\))_ are metrics used to evaluate and compare between models.
\[MAE=\frac{\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}|}{n} \tag{1}\]
where \(y_{i}\) and \(\hat{y}_{i}\) denote the \(i\)th statistics from ground truth data and predicted values of the models, and \(n\) is the total number of samples in the test set. The MAE metric indicates the average variance between the predicted values and the ground truth in the dataset (lower is better).
\[RMSE=\sqrt{\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{n}} \tag{2}\]
The RMSE metric in Equation 2 is the standard deviation of the residuals (prediction error) (lower is better).
\[MAPE=\frac{1}{n}\sum_{i=1}^{n}\frac{|y_{i}-\hat{y}_{i}|}{y_{i}}\times 100 \tag{3}\]
The MAPE given in Equation 3 tells us about the mean of the total percentage errors (lower is better).
\[R^{2}=1-\frac{RSS}{TSS} \tag{4}\]
Finally, the Coefficient of Determination (R-squared metric) provides an insight into the similarity between real and predicted data, where the closer to 1 the R squared value is, the better. Here, \(RSS=\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}\) and \(TSS=\sum_{i=1}^{n}(y_{i}-\overline{y})^{2}\) denote the Residual Sum of Squares and the Total Sum of Squares, respectively.
### Results
The evaluation results for short-term predictions are presented in Table 1, where it can be observed that for horizon values of 1 or 3, our proposed approaches underperform the baseline LSTM neural network method, which is the closest competitor to our proposed models. However, the performance of our models significantly improve on the New York state dataset, as shown in Table 2. The reason for this improvement may lie in the fact that the \(R^{2}\) score, which is calculated on the predictions for each state, being close to perfection (0.9820 for the horizon of 1) on the California dataset, but the \(R^{2}\) score on New York dataset is significantly lower, indicating a higher level of difficulty for learning and prediction on the latter dataset. In general, the MGL4MEP\({}_{SE}\) method achieves the most impressive results for short-term forecasting, followed by the MGL4MEP\({}_{SRE}\) model. This can be explained as the time lag between government efforts and their impact on the real-world situation spans multiple weeks. Incorporating this information for short-term prediction may adversely affect the effectiveness of the model. Furthermore, upon comparison with SE\({}_{transformer}\) and SRE\({}_{transformer}\), which utilize the transformer architecture
without correlation matrices for processing social media data, we observe that our methodology incorporating graph neural networks featuring spatial-temporal characteristics distinctively surpass these approaches. This outcome highlights the efficacy and suitability of our novel proposed approaches, both in constructing input graph structures and in learning algorithms, for addressing these types of multi-modality domains.
With respect to long-term prediction, our models achieve the best results across all three horizons with significant gaps compared to all other methods, as illustrated in Table 3 and 4. Generally, the best performed approaches are MGL4MEP\({}_{SRE}\) and MGL4MEP\({}_{SE}\) models, achieving 42.47%, 34.21%, and 10.62% lower MAE with horizon equals 14, 21, 28 days ahead than the best basline methods for California dataset, and 11.94%, and 7.50% lower MAE for horizon 14 and 21, on New York dataset. Moreover, for long-term prediction, MGL4MEP\({}_{SR}\) models are able to obtain better results than the simple baseline LSTM models, compared to the results and analysis on short-term predictions. The model's capability on forecasting the long-term trajectory of the pandemic means that it can provide valuable information and insights for government and policy makers on planning ahead and making informed, timely response to the pandemic.
We conduct comprehensive ablation studies to investigate the impact of the size of the input social media graph on our COVID-19 prediction model. In particular, we reduce the number of users selected for building the graph from the original 1,500 users to 1,000 and 500, respectively. Our hypothesis is that as we decrease the number of users, the amount of information provided by the social media data would be significantly reduced. The results in Table 5 for the California region confirm our hypothesis, showing a degradation in performance with a decrease in the number of nodes for both short-term and long-term forecasting. The findings have implications for future research in that it is critical to take the size of the input social media network into account when developing a model for predicting COVID-19 instances. The size of the social media graph directly influences the richness and diversity of the data captured, allowing the model to capture a more nuanced understanding of
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c} \hline \hline \multicolumn{1}{c|}{Number of days ahead} & \multicolumn{4}{c|}{1} & \multicolumn{4}{c|}{3} & \multicolumn{4}{c}{7} \\ \cline{2-13} & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\uparrow\) & R\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\(\uparrow\) \\ \hline AVG & 1735.92 & 2543.38 & 41.75 & -0.0729 & 1739.06 & 2552.49 & 42.13 & -0.0899 & 1726.68 & 2547.36 & 42.76 & -0.1460 \\ LAST\_DAY & **200.24** & 450.19 & **3.23** & **0.9820** & 381.77 & 730.72 & **6.44** & **0.9451** & 638.60 & 1231.95 & 12.18 & 0.8110 \\ AVG\_WINDOW & 396.87 & 753.38 & 7.26 & 0.9352 & 527.41 & 990.43 & 10.08 & 0.8787 & 767.76 & 1337.56 & 15.76 & 0.7324 \\ \hline LIN\_REG & 239.45 & 511.21 & 3.59 & 0.9761 & 465.00 & 853.38 & 6.61 & 0.9264 & 848.42 & 1649.43 & 13.13 & 0.6523 \\ GP\_REG & 236.90 & 509.94 & 3.55 & 0.9762 & 457.30 & 843.30 & 6.50 & 0.9281 & 812.90 & 1585.71 & 12.53 & 0.6769 \\ RAND\_FOREST & 224.04 & 475.28 & 3.46 & 0.9785 & 504.18 & 906.01 & 7.44 & 0.9135 & 1251.66 & 2639.35 & 18.71 & 0.1089 \\ XGBOOST & 265.20 & 581.56 & 4.46 & 0.9657 & 633.48 & 1166.90 & 10.35 & 0.8193 & 1511.12 & 3244.66 & 22.97 & -0.3955 \\ \hline ARIMA & 1536.31 & 2286.63 & 57.07 & -1.1243 & 1653.35 & 2431.21 & 61.05 & -1.3625 & 1784.52 & 2718.03 & 68.02 & -2.0135 \\ PROHET & 1919.07 & 3661.94 & 44.74 & -2.0139 & 186.03 & 23618.93 & 44.44 & -1.9664 & 1732.04 & 312.04 & 31.450 & -1.9019 \\ LSTM & 207.62 & **403.71** & 4.41 & 0.9780 & **349.35** & **663.60** & **6.46** & 0.9395 & 831.11 & 1468.21 & 13.76 & 0.6564 \\ SE\_transformer_ & 1026.56 & 1700.72 & 20.89 & 0.4242 & 975.09 & 1634.44 & 19.59 & 0.5352 & 1258.66 & 2136.51 & 23.93 & 0.1751 \\ SRE\_transformer_ & 1437.27 & 3226.74 & 26.91 & 0.075 & 1193.21 & 1996.99 & 22.67 & 0.1598 & 1101.02 & 1845.40 & 21.61 & 0.3398 \\ \hline MGL4MEP\({}_{SR}\) (ours) & 265.77 & 470.65 & 5.16 & 0.9725 & 4601.3 & 783.43 & 8.13 & 0.9217 & 825.72 & 1283.63 & 14.40 & 0.7568 \\ MGL4MEP\({}_{SR}\) (ours) & 510.36 & 854.95 & 9.73 & 0.9039 & 651.32 & 1081.28 & 12.59 & 0.8242 & **577.07** & **1005.08** & **11.07** & **0.8160** \\ MGL4MEP\({}_{SRE}\) (ours) & 479.11 & 788.15 & 9.41 & 0.9022 & 534.51 & 913.40 & 10.69 & 0.8755 & 636.86 & 1086.27 & 12.08 & 0.7960 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on California State for short-term predictions.
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multicolumn{1}{c|}{Number of days ahead} & \multicolumn{4}{c|}{1} & \multicolumn{4}{c|}{3} & \multicolumn{4}{c}{7} \\ \cline{2-13} & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\(\uparrow\) \\ \hline AVG & 549.85 & 842.34 & 32.40 & -3.8537 & 557.52 & 850.98 & 32.42 & -3.7875 & 566.90 & 861.35 & 33.01 & -3.9149 \\ LAST\_DAY & 714.36 & 1243.29 & 21.06 & -0.5183 & 694.63 & 1146.65 & 22.49 & -0.6301 & 514.94 & 917.25 & 15.75 & **0.1593** \\ AVG\_WINDOW & 493.52 & 824.75 & 15.68 & 0.216 & 505.44 & 839.23 & 16.18 & **0.1809** & 538.03 & 881.71 & 17.01 & 0.1386 \\ \hline LIN\_REG & 521.59 & 897.58 & 15.95 & 0.2211 & 485.24 & 806.84 & 15.54 & 0.2991 & 502.77 & 855.60 & 16.31 & 0.1978 \\ GP\_REG & 523.64 & 899.53 & 15.86 & 0.2199 & 486.08 & 807.60 & 15.40 & 30.308 & 504.14 & 855.73 & 16.22 & 0.2073 \\ RAND\_FOREST & 1057.99 & 1966.79 & 29.76 & -2.0817 & 1134.78 & 2433.44 & 32.10 & -3.4917 & 607.52 & 1074.55 & 211.9 & -0.2863 \\ XGBOOST & 1307.55 & 3071.69 & 33.1 & -5.9887 & 1445.72 & 3411.18 & 40.63 & -7.7175 & 9
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline Number of days ahead & \multicolumn{4}{c|}{14} & \multicolumn{4}{c|}{21} & \multicolumn{4}{c}{28} \\ \cline{2-13} & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\)\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\)\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\)\(\uparrow\) \\ \hline AVG & 576.57 & 878.21 & 36.03 & -7.4231 & 577.12 & 878.64 & 37.71 & -9.4983 & 555.38 & 854.57 & 38.84 & -9.9887 \\ LAST\_DAY & 608.37 & 1096.63 & 19.75 & -0.5469 & 568.43 & 1041.60 & 21.58 & -0.9110 & 649.36 & 1128.66 & 25.78 & -1.2729 \\ AVG\_WINDOW & 577.85 & 950.80 & 19.97 & -0.3342 & 561.28 & 918.91 & 21.37 & -0.6601 & 576.23 & 941.92 & 23.50 & -0.9414 \\ \hline LINE\_REG & 505.95 & 091.68 & 20.44 & -0.5076 & 542.15 & 917.15 & 23.91 & -1.5887 & 577.39 & 984.80 & 29.15 & -3.5869 \\ GP\_REG & 505.94 & 900.15 & 20.24 & -0.4716 & 539.68 & 913.79 & 23.59 & -1.5046 & 575.04 & 979.01 & 28.76 & -3.4214 \\ RAND\_FOREST & 962.95 & 1649.01 & 34.08 & -3.4251 & 1267.22 & 2077.25 & 42.03 & -5.2131 & 1399.23 & 2165.87 & 51.17 & -10.8534 \\ XGBOOST & 1242.85 & 2078.66 & 49.98 & -10.5987 & 1753.07 & 2790.31 & 65.61 & -15.9673 & 1688.39 & 2611.17 & 87.04 & -63.6253 \\ \hline ARIMA & 554.70 & 927.95 & 220.8 & -0.8887 & 546.87 & 918.12 & 32.62 & -1.3794 & 529.60 & 902.33 & 25.27 & -1.6065 \\ PROPHET & 1579.78 & 2307.45 & 64.20 & -13.8988 & 1575.09 & 2298.21 & 63.94 & -15.1467 & 1584.90 & 2306.69 & 63.77 & -15.9274 \\ LSTM & 270.61 & 434.96 & 11.08 & -0.6286 & 288.95 & 452.27 & 10.12 & -0.4052 & **280.53** & **443.59** & 11.53 & -1.022 \\ SE\_transformer_ & 393.33 & 596.95 & 12.67 & -1.2141 & 318.92 & 506.33 & 11.41 & -0.8533 & 324.25 & 511.64 & 11.89 & -1.1520 \\ SRE\_transformer_ & 393.19 & 643.84 & 12.71 & -1.6740 & 366.67 & 560.50 & 13.42 & -1.5570 & 368.12 & 564.89 & 11.52 & -0.7978 \\ \hline MGLMEPS\_R_ (ours) & 267.15 & 430.25 & 10.42 & -0.5040 & **254.03** & **411.54** & **9.74** & **-0.3239** & 315.59 & 504.57 & 10.86 & -0.7395 \\ MGLMEPS\_R_ (ours) & **238.29** & **382.46** & **9.477** & **-0.2762** & 267.29 & 431.71 & 10.04 & -0.3655 & 311.72 & 503.53 & 11.29 & -0.8625 \\ MGLMEPS\_R_ (ours) & 278.71 & 460.88 & 10.44 & -0.3482 & 287.07 & 479.69 & **9.73** & -0.3946 & 324.18 & 520.57 & **10.56** & **-0.6624** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results on New York State for long-term predictions.
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c} \hline \hline Number of days ahead & \multicolumn{4}{c|}{14} & \multicolumn{4}{c|}{21} & \multicolumn{4}{c}{28} \\ \cline{2-13} & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\)\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\)\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\)\(\uparrow\) \\ \hline AVG & 1625.33 & 2397.80 & 43.58 & -0.4115 & 1550.94 & 2299.96 & 44.92 & -1.2001 & 1620.44 & 2367.44 & 47.82 & -3.4092 \\ LAST\_DAY & 1016.07 & 1679.24 & 22.26 & 0.3751 & 1454.70 & 2418.62 & 34.59 & -1.2151 & 1854.50 & 3011.52 & 47.58 & -5.5959 \\ AVG\_WINDOW & 1190.22 & 1943.52 & 26.61 & 0.1444 & 1582.72 & 2578.18 & 38.55 & -1.6106 & 1988.98 & 3196.71 & 52.41 & -6.9129 \\ \hline LINE REG & 1073.60 & 1882.31 & 20.53 & 0.0937 & 1393.40 & 2469.55 & 28.95 & -2.1887 & 1668.01 & 2917.30 & 36.16 & -7.5977 \\ GP\_REG & 1004.46 & 1740.18 & 19.67 & 0.1889 & 1357.33 & 2433.31 & 28.26 & -2.1518 & 1629.51 & 2837.48 & 35.48 & -7.5078 \\ RAND\_FOREST & 2800.75 & 6404.53 & 44.94 & -8.4948 & 5310.35 & 11285.29 & 96.23 & -61.9137 & 6253.21 & 1161.095 & 123.79 & -119.0820 \\ XGBOOST & 2905.03 & 6253.72 & 51.53 & -9.3011 & [6129.52 & 12290.41 & 11.19 & -70.4298 & 6426.62 & 1194.64 & 1239.65 & -119.2901 \\ \hline ARIMA & 2492.92 & 2303.26 & 81.65 & -4.2380 & 2660.01 & 3076.61 & 96.55 & -9.7618 & 307.48 & 4167.76 & 111.46 & -23.8195 \\ PROPHET & 1417.82 & 2917.99 & 41.58 & -2.0017 & 1081.00 & 2227.87 & 39.39 & -2.4306 & 848.07 & 1726.27 & 37.21 & -3.4281 \\ LSTM & 962.75 & 1464.46 & 21.91 & 0.3687 & 855.98 & 1324.40 & 20.05 & -0.2023 & 981.55 & 1557.89 & 22.00 & -1.2480 \\ SE\_transformer_ & 1520.80 & 2248.76 & 23.58 & -0
public sentiments, behaviors, and trends related to the pandemic.
Table 6 illustrates the ablation study on different number of nodes for the input social media graph of our COVID-19 prediction model for New York dataset. The results are consistent with experiment on California dataset, where a degradation in performance with a decrease in the number of nodes for both short-term and long-term forecasting can be seen.
In order to assess the usefulness and effectiveness of our proposed methods in different stages of the pandemic, we perform an additional experiment by collecting data for another 150 days, thereby increasing the total amount of data collected. We then train and evaluate the same models and baselines on this new dataset, the results of which are presented in Table 7 and visualized in Fig. 2. It is worth noting that this new test set for California state exhibits a higher variance compared to the previous forecasting range, where statistics gradually decrease with sharp changes. Nevertheless, our models outperform the baselines and achieve the best performance on both test sets. These results indicate the robustness and generalizability of our approaches in combating the COVID-19 pandemic in different stages of its evolution.
\begin{table}
\begin{tabular}{l|c c c c|c c c c} \hline \hline Number of days ahead & \multicolumn{4}{c|}{7} & \multicolumn{4}{c}{14} \\ \cline{2-10} & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\uparrow\) \\ \hline LSTM - without social network info. & 366.56 & 593.31 & 12.51 & -0.8850 & 270.61 & 434.96 & 11.08 & -0.6286 \\ \hline MGL4MEP\({}_{SE}\) - 500 nodes & 371.54 & 596.19 & 12.40 & -1.3730 & 240.97 & 395.27 & **9.49** & -0.3487 \\ \hline MGL4MEP\({}_{SE}\) - 1000 nodes & 346.40 & 570.16 & 12.65 & -1.5580 & 249.31 & 405.01 & 9.68 & -0.3383 \\ \hline MGL4MEP\({}_{SE}\) - 1500 nodes & **336.77** & **549.74** & **11.40** & **-0.7996** & **238.29** & **382.46** & **9.47** & **-0.2762** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on sufficient amount of social media users for modeling society interaction and factor. We train our MGL4MEP\({}_{SE}\) model with different number of nodes (users) for social network graph and evaluate each model’s performances on New York data.
\begin{table}
\begin{tabular}{l|c c c|c c|c c c} \hline \hline Number of days ahead & \multicolumn{4}{c|}{7} & \multicolumn{4}{c}{14} \\ \cline{2-10} & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\uparrow\) \\ \hline LSTM - without social network info. & 366.56 & 593.31 & 12.51 & -0.8850 & 270.61 & 434.96 & 11.08 & -0.6286 \\ \hline MGL4MEP\({}_{SE}\) - 500 nodes & 371.54 & 596.19 & 12.40 & -1.3730 & 240.97 & 395.27 & **9.49** & -0.3487 \\ \hline MGL4MEP\({}_{SE}\) - 1000 nodes & 346.40 & 570.16 & 12.65 & -1.5580 & 249.31 & 405.01 & 9.68 & -0.3383 \\ \hline MGL4MEP\({}_{SE}\) - 1500 nodes & **336.77** & **549.74** & **11.40** & **-0.7996** & **238.29** & **382.46** & **9.47** & **-0.2762** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on sufficient amount of social media users for modeling society interaction and factor. We train our MGL4MEP\({}_{SE}\) model with different number of nodes (users) for social network graph and evaluate each model’s performances on New York data.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline Number of days ahead & \multicolumn{4}{c|}{7} & \multicolumn{4}{c}{14} \\ \cline{2-10} & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & MAPE\(\downarrow\) & R\({}^{2}\uparrow\) \\ \hline LSTM - without social network info. & 831.11 & 1468.21 & 13.76 & 0.6564 & 962.75 & 1464.46 & 21.91 & 0.3687 \\ \hline MGL4MEP\({}_{SE}\) - 500 nodes & 687.43 & 1139.88 & 13.74 & 0.7783 & 630.78 & 1049.50 & 13.75 & 0.6631 \\ \hline MGL4MEP\({}_{SE}\) - 1000 nodes & 626.24 & 1047.17 & 12.74 & 0.7918 & 595.77 & 1051.18 & 12.74 & 0.6636 \\ \hline MGL4MEP\({}_{SE}\) - 1500 nodes & **577.07** & **1005.08** & **11.07** & **0.8160** & **583.17** & **1035.97** & **12.10** & **0.6798** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study on sufficient amount of social media users for modeling society interaction and factor. We train MGL4MEP\({}_{SE}\) with different number of nodes (users) for social network data and evaluate each model’s performances on California data.
## Discussion
Forecasting results.In the real world, COVID-19 is a complex pandemic, and many factors that cannot be seen by previous statistics can lead to different future scenarios. For example, if a government implements a strict lockdown, the number of new cases will likely decrease. However, if a government lifts all restrictions, the number of new cases will likely increase. This is why it is important to consider multiple information sources when forecasting the spread of COVID-19. Our proposed method, MGL4MEP and its variants incorporate multiple information sources effectively, leading to better performances, lower errors, and sustained accuracy, particularly in long-term predictions, compared to other popular forecasting models. MGL4MEP enjoys these benefits due to it being able to learn the dynamic relationships between various factors that affect the spread of COVID-19, such as official government policy, and social stances against the pandemic or situation. Moreover, our ablation results clearly demonstrate that the availability of more information significantly enhances the reliability of our forecasting models. Comparisons between our methods and baselines that do not utilize the graph structure of social media data also highlight the efficacy of our graph-structure generation process and temporal graph learning framework. The results not only underline the effectiveness in learning and extracting information of the models, but also emphasize on the usefulness of input multimodal data.
Additionally, our experiment results show that MGL4MEP is adaptable and robust to different situations and history. This means that it can be used to forecast the spread of COVID-19 in different countries and regions, even if the pandemic is evolving rapidly. The predictions made by MGL4MEP can be leverage by different beneficial group such as the authorities to develop appropriate strategies in order to deal with the spread of this pandemic. An example is that MGL4MEP can be used to forecast the impact of different government policies on the spread of the virus.
Finally, it's important to highlight the automated nature of the forecasting process with MGL4MEP. The entire process can be automated and seamlessly updated whenever new information becomes available. This automation is possible due to our framework's reliance on openly accessible data from the Internet, which can be efficiently gathered through automated web crawling.
Model limitations.Although some of the proposed models performed well according to certain metrics, we found several shortcomings in the models that we tested. One limitation is that MGL4MEP takes time to realize the trend depending on the dynamic at the considered area, as information from different sources, such as the effects of government policies takes time to be reflected in the data. For example, in the case of New York state, it takes effect immediately while in the case of California, it takes about 14 days. Additionally, the underlying factors that affect the infection of COVID-19 are diverse, and it can be difficult to capture all of them through the multiple data sources used by MGL4MEP. Another limitation of MGL4MEP is that similar to other deep learning methods, it is a black-box model. This means that we cannot easily understand how it makes its predictions, and can make it difficult to trust or explain the model's predictions.
Future research directions.There are several ways in which MGL4MEP can be improved in the future. An interesting future research direction is to enrich our framework with more information regarding the pandemic situation, such as regional age population, mobility, or virus variants. Another direction can be interested in explainability methods, such as identifying important nodes or features through temporal graph learning, or understanding the most valuable factors that affect the
Figure 2: Forecasting results on test set for the newly collected period of California state dataset (a) Infectious cases. (b) Hospitalized cases.
forecasting results. This would make us more confident in the predictions of the model and would help us to better understand the dynamics of the pandemic.
## Related Work
### Pandemic forecasting
Traditional approaches leverage statistical models have been widely used to forecast COVID-19. These approaches involve analyzing past epidemic data using statistical and time-series methodologies to identify patterns and trends, which can then be used to forecast future outbreaks. Methods like autoregressive integrated moving average (ARIMA)[4] and Prophet[5] are effective at identifying trends in stationary time-series data and handling periodic patterns, respectively. One alternative to forecast pandemics, such as COVID-19 pandemic, is through the use of compartmental models[20, 21, 22]. These models divide a population into compartments, such as susceptible, exposed, and infected individuals (SIR model), and use mathematical equations to describe the dynamic and transitions between each group. However, it is important to acknowledge that leveraging statistical models for pandemic forecasting has its limitations, as they presumptively assume a linear relationship between past and future time-series. Such methods rely on certain assumptions and may lack the data necessary to accurately address all the relevant issues.
Deep learning have been applied to make predictions about the spread of COVID-19 pandemic and achieve high performances[7, 23]. With the enormous dataset of records such as infected and hospitalized cases collected on a daily basis, deep learning is considered a suitable approach, as neural networks can learn and update from data effectively. Sequential models such as Recurrent Neural Network (RNN) and Long Short Term Memory (LSTM)[24] have been applied and seen high performances for forecasting COVID-19 pandemic, both at world-wide or country level[25, 26], and more fine-grained levels including state and county levels[27, 28, 29, 30]. Unlike conventional approaches, deep learning can incorporate external knowledge and adapt to changing circumstances, improving their predictive capabilities. These approaches, or fusion of multiple neural network models, can incorporate a wide range of data sources, including social media, to provide a more comprehensive view of the pandemic and its potential impacts.
To our best knowledge, previous research has only attempted to integrate basic indicators or indices, overlooking the dynamic, intricate information contained in user-generated content[8, 12, 31]. Incorporating these valuable signals into neural networks remains a challenge but has the potential to provide a more comprehensive view of the pandemic and its potential impacts[1, 9, 32].
### Leveraging external resources for time-series forecasting
Previous studies have explored the connection between social media interests and pandemic trends. In[9], the authors highlight a strong correlation between peak of search volume on COVID-19 pandemic and the development of the pandemic, upto 20 days earlier than the issuance of official warnings. The authors of[1] also discovered a close connection between the evolution of the COVID-19 crisis and social media user's sentiments toward different phases of the pandemic. Another work[33] makes use of the social impact of media coverage to support the compartment model for pandemic prediction. Post-processed indicators such as internal movement index and economic response have been incorporated as additional input features to sequence models for forecasting future statistics[34, 35]. Differing from them, our method considers every aspect of user response through social media and government regulations against the pandemic. To achieve greater accuracy in pandemic forecasting, we analyze individual tweets and search for relevant social events.
There has been a significant amount of interest in effectively leveraging social media as an external knowledge source for more accurate pandemic forecasting. In[31], the authors used tweet count (the amount of tweets related to COVID-19) per day as an additional input to an LSTM model and achieve better results than using statistics only. Taking a step further, in[12], the collected tweets are then further extracted into two main features, representing user sentiment and topic of interest. These features are used as additional input features to an ARIMAX model, which is an extension of ARIMA. Furthermore, in[36], important keywords are extracted and curated into a keyword cloud to present the most important information for each day and input to a MLP module for pandemic prediction. Perhaps the most relevant works to this paper are[16, 17] where the authors extract the most popular keywords per day and view them as a graph structure and employ graph algorithms to learn on those representations.
In this study, in contrast to prior works, we incorporate data from multiple different sources, with social media as an important knowledge source where we build graph structure with each user as node, or an indicator on the current status of the epidemic, and dynamically represent the interaction between them through temporal graph neural networks[37, 38, 13]. Our approach comprehensively considers various aspects of user responses on social media and government regulations pertaining to the pandemic.
### Temporal graph neural networks forecasting models
Graph neural networks (GNNs) have gained significant attention in various learning tasks, such as image recognition [39, 40], estimating quantum chemical computation [41, 42, 43], predicting protein interfaces [44], etc. GNNs generalize the concept of convolution neural networks to non-Euclidean domains, allowing for local operations on the nodes and edges of a graph [45, 13]. The most popular GNNs is Message Passing Neural Networks (MPNNs) [42] in which the graph convolution is defined via the message passing scheme that propagates and then aggregates the vectorized information between each node and its local neighborhood.
To handle evolving features and connectivity over time, temporal graph neural networks have been introduced. Unlike static graphs, temporal graphs are usually represented by a sequence of node interactions over continuous time instead of an adjacency matrix. Temporal GNNs aim to capture both the temporal and structural information of the temporal graphs by introducing a node memory that represents the state of the node at a given time, acting as a compressed representation of the node's past interactions. Temporal GNNs combine graph encoding techniques with time-series encoding architectures such as LSTM and Transformers, forming a powerful deep learning framework. They find applications in various domains, such as traffic prediction, where they outperform traditional methods by incorporating spatial relationships of road networks and temporal dynamics of traffic conditions [46, 47, 48]. In the analysis of brain networks, temporal GNNs utilize invasive techniques like electrocorticography (ECoG) to uncover temporal patterns and gain insights into brain network dynamics [48].
In our apporach, by leveraging the temporal and structural aspects of graph representation in social media data, temporal GNNs enhance modeling capabilities for understanding evolving complex systems and forecasting pandemic statistics.
## Preliminaries
### Time-series forecasting
Originally proposed in [24], Long short-term memory (LSTM) has been the dominant recurrent network architecture for learning from sequences of data. Unlike standard feedforward neural networks, LSTM can process and retain the temporal correlations between adjacent time steps, due to its feedback connections. For a historical time step \(t\), the output \(y_{t}\) will not only depend on \(x_{t}\) but also from previous iterations through hidden state \(h_{t-1}\) and memory variable \(c_{t-1}\):
\[\Gamma_{u} =\sigma(\mathbf{W}_{hu}h_{t-1}+\mathbf{W}_{xu}x_{t}+b_{u}) \tag{5a}\] \[\Gamma_{f} =\sigma(\mathbf{W}_{hf}h_{t-1}+\mathbf{W}_{xf}x_{t}+b_{f})\] (5b) \[\tilde{c}_{t} =\tanh(\mathbf{W}_{hc}h_{t-1}+\mathbf{W}_{xc}x_{t}+b_{c})\] (5c) \[c_{t} =\Gamma_{u}\odot\tilde{c}_{t}+\Gamma_{f}\odot c_{t-1}\] (5d) \[\Gamma_{o} =\sigma(\mathbf{W}_{ho}h_{t-1}+\mathbf{W}_{ho}x_{t}+b_{o})\] (5e) \[h_{t} =\Gamma_{o}\odot\tanh(c_{t}) \tag{5f}\]
where \(\Gamma_{u}\) and \(\Gamma_{f}\) are "update gate" and "output gate", calculated through a sigmoid (\(\sigma\)) activation function to determine the percentage of new memory \(\tilde{c}_{t}\) to keep and the percentage of old memory \(c_{t-1}\) to forget, respectively. The "output gate" \(\Gamma_{o}\) allows information to be revealed appropriately due to the sigmoid function then the weights are updated by the element-wise multiplication of \(\Gamma_{o}\) and memory cell \(c_{t}\) activated by the non-linear tanh function.
A simpler, more intuitive version of LSTM called Gated-Recurrent Unit (GRU) [49], combined the cell memory and the hidden state variable into \(h_{t}\) to transfer information. Therefore, a GRU only has two gates, a "reset gate" and an "update gate".
\[\Gamma_{u} =\sigma(\mathbf{W}_{hu}h_{t-1}+\mathbf{W}_{xu}x_{t}+b_{u}) \tag{6a}\] \[\Gamma_{f} =1-\Gamma_{u}\] (6b) \[\tilde{h}_{t} =\tanh(\mathbf{W}_{hu}h_{t-1}+\mathbf{W}_{xb}x_{t}+b_{h})\] (6c) \[h_{t} =\Gamma_{u}\odot\tilde{h}_{t}+\Gamma_{f}\odot h_{t-1} \tag{6d}\]
Finally, the last hidden state variable \(h_{t}\) can be used to predict the corresponding output value \(\hat{y}_{t}\) through a fully connected layer with \(softmax\) activation function:
\[\hat{y}_{t}=softmax(\mathbf{W}_{ho}h_{t}) \tag{7}\]
### Temporal graph learning algorithms
Graph neural networks (GNNs) are a class of neural networks that operate on graph-structured data. Graphs are a powerful method to represent many types of data, such as social networks, biological networks, and traffic flows. GNNs are capable of learning the relationships between nodes in a graph. They generalize the concept of convolutional neural networks to
non-Euclidean domains by defining local operations on the nodes and edges of a graph. A typical GNN layer operate on input graph \(\mathcal{G}=(\mathbf{X},\mathbf{E},\mathbf{A})\) can be formulate as in Equation 8.
\[\mathbf{Y}=g_{\mathbf{W}}\star\mathbf{X}\approx(\mathbf{I_{N}}+\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D} ^{-\frac{1}{2}})\mathbf{X}\mathbf{W} \tag{8}\]
where \(\mathbf{X}\in\mathbb{R}^{N\times D_{X}}\) represents node matrix, each of the \(N\) nodes has \(D_{X}\) features, and \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is a weighted adjacency matrix encoding set of edges \(\mathbf{E}\). The graph convolution operator \(\star\) can be approximated by first-order Chebyshev polynomial expansion and generalized to high-dimensional [45, 50] with learnable parameter \(\mathbf{W}\in\mathbb{R}^{D_{X}\times D_{Y}}\).
Temporal Graph Neural Networks are an extension of GNNs that can handle temporal graphs, i.e., graphs that change over time. Unlike static graphs, temporal graphs are usually represented by a sequence of node interactions over continuous time instead of an adjacency matrix. Temporal GNNs aim to capture both the temporal and structural information of the temporal graphs by introducing a node memory that represents the state of the node at a given time, acting as a compressed representation of the node's past interactions. In this work, we follow the framework of recent studies, including GCRN [37], AGCRN [47], and MPNN LSTM [15], that utilize recurrent neural network on top of graph convolution operators. We leverage a simplified approach and use GRU as the recursive network architecture.
\[\mathbf{\Gamma}_{U} =\sigma(g_{\mathbf{W}_{H}\nu}\star\mathbf{H}_{t-1}+g_{\mathbf{W}_{X}\nu}\star \mathbf{X}_{t}+\mathbf{b}_{U}) \tag{9a}\] \[\mathbf{\Gamma}_{F} =1-\mathbf{\Gamma}_{U}\] (9b) \[\tilde{\mathbf{H}}_{t} =\tanh(g_{\mathbf{W}_{H}\nu}\star\mathbf{H}_{t-1}+g_{\mathbf{W}_{X}\mu}\star \mathbf{X}_{t}+\mathbf{b_{H}})\] (9c) \[\mathbf{H}_{t} =\mathbf{\Gamma}_{U}\odot\tilde{\mathbf{H}}_{t}+\mathbf{\Gamma}_{F}\odot\mathbf{ H}_{t-1} \tag{9d}\]
This framework allows MGL4MEP to learn the dynamic interactions between entities, act as nodes, or indicators to the current status of the pandemic and between different time stamps throughout the evolution of the pandemic.
### Pre-trained Language models
Since we are dealing with free-text data to capture population's reactions to the pandemic, more specifically, user-generated content sourced from social media, it is crucial to extract meaningful information before constructing the graph-structured representation of the data. In recent years, large pre-trained language models have revolutionized the field of natural language processing such as Bidirectional Encoder Representations from Transformers (BERT) [51]. These models have demonstrated exceptional capabilities in understanding and generating human language. BERT's underlying architecture, based on Transformer [52], employs self-attention mechanisms to capture dependencies between words or tokens in a sentence. This enables BERT to comprehend the contextual information of a word based on its surrounding words, leading to more accurate language understanding and representation. By pre-training BERT on massive amounts of textual data and a wide variety of tasks, such as masked language modeling and next sentence prediction, the model learns a rich language representation that can be fine-tuned for specific downstream tasks.
Building upon recent advancements in applying large pre-trained models to domain-specific data [53, 54], we leverage BertTweet [55], a variant of BERT as our main feature extractor for text embeddings. The model has been trained on a large amount of Twitter data, especially including a sub-set of COVID-19 related data, and its effectiveness in capturing the nuanced meanings and signals conveyed in text data has been well-established. By leveraging BertTweet, we can obtain high-quality features that accurately represent the semantic content of social media posts surrounding the COVID-19 pandemic, and enables us to discover valuable patterns and trends that contribute to a comprehensive understanding of the social media landscape during the global health crisis.
## Methodology
### Mgl4mep
In this section, we present the multi-modality framework and techniques employed in our study to effectively extract and model multi-modal data for COVID-19 forecasting. Our approach aims to harness the power of social media data, specifically user-generated content, and government stringency index, to gain valuable insights to the evolution of the pandemic. We describe the key components of our framework, including data pre-processing, feature extraction, graph-based representation with temporal graph learning, and multi-modal learning. The main components of our proposed framework are depicted in Figure 3.
One goal of our multi-modality framework is to effectively incorporate signals from human-generated text data through social media platform, offering valuable reflections of the population's response to the pandemic, as exampled in Figure 1. This importance can also be underscored by the sheer amount of Covid-19 discussion over time, strongly correlating with
pandemic statistics, as shown in Figure 4. Details about our social media data collection process is provided in the next Section. Extracting meaningful features from text data is crucial for constructing a comprehensive understanding of the information shared on social media platforms. While other works proposed another direction of extracting indices like sentiment, this might discard necessary information, such as sentiment about COVID-19, but not about current government response to the pandemic. As discussed in previous sections, by employing pre-trained language model, specifically BertTweet, as text feature extractor, we ensure to capture rich insights contained within user-generated content, especially in relation to COVID-19 pandemic. Our framework enables integrating various modalities to capture the complex and temporal dynamics of emerging pandemics. We obtain temporal embedding for each user by utilizing BertTweet on their text data as follows:
\[x_{t,user_{i}}=\sum_{j\in\{t,user_{i}\}}\text{BertTweet}(post_{j}), \tag{10}\]
where \(t\) denotes the timestamp, \(user_{i}\) denotes the i-th user in our social media data, and \(post_{j}\), \(j\in(t,user_{i})\) denotes the text
Figure 4: Comparison between amount of tweets posted and number of new COVID-19 cases per day. (a) in California state. (b) in New York state.
Figure 3: Overall architecture of MGL4MEP - a multi-modal framework for enhanced pandemic forecasting with external resources. MGL4MEP incorporates both pandemic related metrics and population’s reactions on social media into the forecasting to better capture the dynamic properties of emerging pandemics.
obtained from the tweets of the i-th user at time \(t\). The inclusion of user interactions and shared information across users is crucial for further analysis. In order to capture the correlations and dependencies between users, it is imperative to construct a graph-structured representation of the pre-processed data. This graph-based approach allows us to model the interactions and information flow through graph neural networks, capturing the dynamics and valuable insights related to the ongoing pandemic from social media signals. Hence, we introduce an end-to-end learning algorithm to discover the underlying graph structure that captures the correlation among time-series in a data-driven manner. More specifically, define node embeddings extracted from pre-trained language model as \(\mathbf{X}_{t}^{\mathcal{G}}:=[x_{t,user_{1}},x_{t,user_{2}},\ldots,x_{t,user_{N}}]^ {T}\in\mathbb{R}^{N\times D_{X}}\), and the continuous adjacency matrix can be calculated as the dot-product similarity matrix of the node embeddings: \(\mathbf{A}_{t}^{\mathcal{G}}=\mathbf{X}_{t}^{\mathcal{G}}\cdot\mathbf{X}_{t}^{\mathcal{G}} \in\mathbb{R}^{N\times N}\). However, to enable effective learning with temporal graph learning algorithms, there are two downsides with this approach: first, large embedding dimension lead to incorrect adjacency matrix calculation. Second, may include information not directly related to our downstream task, and take up resources in training and evaluating. Inspired from AGCRN [47], we employed a node-specific learnable embeddings that allows us to map input dimension to a lower intermediate embedding dimension:
\[g_{\mathbf{W},\mathbf{E}}\star\mathbf{X}=(\mathbf{I}_{\mathbf{N}}+\text{softmax}(\text{ReLU}(\mathbf{E }\cdot\mathbf{E}^{T}))\mathbf{X}\mathbf{E}\mathbf{W}, \tag{11}\]
where \(g\) denotes the filter parameterized by W and E, while \(\star\) denotes the graph convolution operator. \(\mathbf{E}\) is a learnable intermediate node embedding matrix, \(\mathbf{E}\in\mathbb{R}^{N\times D_{emb}}\). The input node matrix is multiplied with the node embedding \(\mathbf{E}\), resulting in an updated representation, where \(\mathbf{E}\) is learnable, meanings that the representation is specific for each node and its pattern. Then, the integrated node embeddings are further multiplied by the weight matrix \(\mathbf{W}\) to incorporate the influence of the node-specific features. Moreover, we replace the normalized graph Laplacian matrix [50] by computing the inner product of the intermediate node embedding matrix \(\mathbf{E}\) with its transpose \(\mathbf{E}^{T}\). This operation captures the pairwise relationships between node embeddings and produces a matrix of shape \(N\times N\). We apply the rectified linear unit (ReLU) activation function to introduce non-linearity and ensure positive values in the resulting matrix. The softmax function is then applied to normalize the values across matrix row, ensuring that the row sums to 1. This step allows us to obtain a valid probability distribution representing the importance or relevance of each node, or each user, with respect to others.
This graph convolution operation is plugged into the framework in Equation 9, and the final temporal graph learning algorithm is shown in Equation 12:
\[\mathbf{\Gamma}_{U}^{\mathcal{G}} =\sigma(g_{\mathbf{W}_{H\mathcal{U}},\mathbf{E}_{H}}\star\mathbf{H}_{t-1}^{ \mathcal{G}}+g_{\mathbf{W}_{X\mathcal{U}},\mathbf{E}_{X}}\star\mathbf{X}_{t}^{\mathcal{G} }+\mathbf{b}_{U}) \tag{12a}\] \[\mathbf{\Gamma}_{F}^{\mathcal{G}} =1-\mathbf{\Gamma}_{\mathbf{U}}^{\mathcal{G}}\] (12b) \[\mathbf{\hat{H}}_{t}^{\mathcal{G}} =\tanh(g_{\mathbf{W}_{H\mathcal{H}}},\mathbf{E}_{H}+\mathbf{H}_{t-1}^{ \mathcal{G}}+g_{\mathbf{W}_{H\mathcal{H}}},\mathbf{E}_{X}\star\mathbf{X}_{t}^{\mathcal{G} }+\mathbf{b}_{H})\] (12c) \[\mathbf{H}_{t}^{\mathcal{G}} =\mathbf{\Gamma}_{U}^{\mathcal{G}}\odot\mathbf{\hat{H}}_{t}^{\mathcal{G}} +\mathbf{\Gamma}_{F}^{\mathcal{G}}\odot\mathbf{H}_{t-1}^{\mathcal{G}} \tag{12d}\]
where \(g\) denotes learnable weights with respect to different embeddings. To complement the multi-modal nature of our framework, we incorporate government stringency features that provide valuable insights into the pandemic response at a regional level. Government stringency features capture the level of restrictions, policies, and interventions implemented by authorities to mitigate the spread of COVID-19. These features serve as an important contextual signal to enhance the understanding of the evolving dynamics in our model.
Specifically, we utilize the raw data and formula proposed in [56] to compute an indicator on the level of government stringency. However, recognizing the complexity of this domain, we compare and analyze each individual indicator, as well as the averaged general stringency index, to identify the most suitable indicator for the current pandemic situation. In Figure 5, we present the correlation levels between index and the number of new COVID cases for two indicators, with different time lags. The results suggest a strong relationship between the restriction on internal movement to the status of the pandemic. Hence, in the refined version of our framework, we leverage this specific indicator as a measure of government stringency.
Since this indicator can be represented as a vector for each day, similar to the statistical metrics of the pandemic, we can employ a sophisticated recurrent neural network (i.e., Equation 5 and Equation 6) to learn solely on this feature. Alternatively, we have the option to combine, or concatenate it with the pandemic statistics and learn through a unified recurrent network. Through extensive experimentation, we have found that the latter approach yields superior performance, and thus, it is our final choice for incorporating the government stringency indicator into our framework.
Finally, in order make accurate predictions, it is crucial to integrate the information from multiple modalities in our framework. We achieve this by fusing the embeddings obtained from different modalities, namely statistical features, government stringency features, and social-media graph-based features. The fusion process is performed using the equation:
\[\hat{y}_{t+T}=\text{softmax}(\mathbf{W}(\mathbf{H}_{t+T}^{(stat,reg)}\oplus\mathbf{H}_{t+T} ^{\mathcal{G}})) \tag{13}\]
where \(\mathbf{H}_{t+T}^{(stat,\,reg)}\) is the learned embeddings of recurrent neural network for statistical metrics, \(\hat{y}_{t+T}\) represents the predicted value for the time step \(t+T\), where \(T\) is the forecasting horizon. Embeddings from various domain, capturing the relevant information for each modality, are fused using the concatenation operator \(\oplus\) to create a unified feature representation.
Using the aforementioned equation to integrate the embeddings from multi-modality, our system successfully combines a variety of information sources while utilizing the complementary nature of different modalities for enhanced forecasting performance. This comprehensive approach enables us to capture the intricate dynamics and interdependencies within the data, leading to more accurate and reliable forecasts for the future evolution of the pandemic.
### Multimodal data collection process
In this study, as shown in Table 8, we utilized three different types of data sources to gain insights into the COVID-19 pandemic and its development.
* **COVID-19 Statistic Data.** We leverage the statistic dataset from Johns Hopkins University [57] with 450 data points from August 1, 2020, to November 30, 2021. Each data point is represented as the number of confirmed COVID-19 infections or serious, hospitalized cases in a given area per day. Our final task is time-series forecasting on this multi-variate statistics with different horizons to predict the trajectory of the pandemic. Then, trained models can be a valuable tool in responding to the pandemic, as it can support policymakers give better decisions about how to allocate resources, implement public health measures, and prepare for the future.
* **COVID-19 Government Responses and Regulations Data.** The stringency index data [56] is a valuable resource in understanding the level of government response to the COVID-19 pandemic. The index is represented as a numeric value between 0 to 100 and includes nine different indicators, such as the closure of schools and workplaces, cancellation of public events, restrictions on gatherings, and orders to shelter in place. Fig. 5 displays the correlation values between the stringency index and record restrictions on internal movement between regions and the daily statistics of new infected cases. Interestingly, both time-lag horizons exhibit a clear trend of correlation values peaking at around 30 days. Moreover, the correlation of record restrictions on internal movement with new infected cases is consistently higher than that of the stringency index. This is also observed when considering new hospitalized cases. The found correlation trends imply that the current government response can act as a valuable indication to forecast how the epidemic will develop in the future.
* **Social Media Data.** We crawl a total amount of more than 74 million tweets using Twitter API and tweets ids of all tweets related to COVID-19 released by Banda et al. [58]. The original authors leveraged Twitter Stream for collecting all tweets in the category of COVID-19 chatter, with over 4 million tweets a day. We filtered out tweets with geo-location tags in either California state or New York state in this exploratory study. Moreover, we filtered out all tweets that are not in English. We randomly keep all tweets from 1,500 different users for each location. The distributions of tweets over time with respect to statistics of newly confirmed cases are illustrated in Fig. 4. A strong correlation between the two time-series can be recognized, although there is a noisy period at the start. This is likely due to the initial confusion and fear surrounding the appearance of COVID-19, which led to a high volume of discussions about the virus worldwide.
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Data Source** & **Features** \\ \hline JHU CSSE COVID-19 Data [57] & Daily COVID-19 Statistics \\ \hline Oxford Covid-19 Government Response & Government Stringency Index \\ Tracker [56] & \\ & Rate of Change of Stringency Index over a time period \\ & Restrictions on Internal Movement Indicator \\ & Rate of Change of Restrictions on Internal Movement Indicator over a time period \\ \hline Twitter [58] & Daily user-generated contents from users with topics-of-interest related to COVID-19 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Input features used for proposed approaches
As the situation became more stable and people gained a better understanding of the pandemic affection on their own regions, the amount of tweets posted became more relevant and had a higher correlation with users' areas of residence. To account for this, we excluded data from the initial few months of the pandemic and only collected data starting from August 1, 2020.
## Conclusion
In this work, we present a novel framework named MGL4MEP that combines temporal graph neural networks and multi-modal data for accurate pandemic forecasting. By integrating various big data sources, including social media content, we effectively capture the complex dynamics of emerging pandemics. Our framework outperforms traditional approaches by leveraging the potential of pre-trained language models and generating graph-structured data. Extensive experiments conducted with multiple variants of our proposed method demonstrate the effectiveness of our framework in providing timely and comprehensive insights into the pandemic landscape. The fusion of temporal graph learning and multi-modal data enables a deeper understanding of the evolving patterns and indicators, leading to more informed public health management and decision-making. Our approach offers a promising direction for leveraging big data in pandemic research and provides a foundation for future advancements in the field.
|
2303.14471 | HQ3DAvatar: High Quality Controllable 3D Head Avatar | Multi-view volumetric rendering techniques have recently shown great
potential in modeling and synthesizing high-quality head avatars. A common
approach to capture full head dynamic performances is to track the underlying
geometry using a mesh-based template or 3D cube-based graphics primitives.
While these model-based approaches achieve promising results, they often fail
to learn complex geometric details such as the mouth interior, hair, and
topological changes over time. This paper presents a novel approach to building
highly photorealistic digital head avatars. Our method learns a canonical space
via an implicit function parameterized by a neural network. It leverages
multiresolution hash encoding in the learned feature space, allowing for
high-quality, faster training and high-resolution rendering. At test time, our
method is driven by a monocular RGB video. Here, an image encoder extracts
face-specific features that also condition the learnable canonical space. This
encourages deformation-dependent texture variations during training. We also
propose a novel optical flow based loss that ensures correspondences in the
learned canonical space, thus encouraging artifact-free and temporally
consistent renderings. We show results on challenging facial expressions and
show free-viewpoint renderings at interactive real-time rates for medium image
resolutions. Our method outperforms all existing approaches, both visually and
numerically. We will release our multiple-identity dataset to encourage further
research. Our Project page is available at:
https://vcai.mpi-inf.mpg.de/projects/HQ3DAvatar/ | Kartik Teotia, Mallikarjun B R, Xingang Pan, Hyeongwoo Kim, Pablo Garrido, Mohamed Elgharib, Christian Theobalt | 2023-03-25T13:56:33Z | http://arxiv.org/abs/2303.14471v1 | # HQ3DAvatar: High Quality Controllable 3D Head Avatar
###### Abstract.
Multi-view volumetric rendering techniques have recently shown great potential in modeling and synthesizing high-quality head avatars. A common approach to capture full head dynamic performances is to track the underlying geometry using a mesh-based template or 3D cube-based graphics primitives. While these model-based approaches achieve promising results, they often fail to learn complex geometric details such as the mouth interior, hair, and topological changes over time. This paper presents a novel approach to building highly photorealistic digital head avatars. Our method learns a canonical space via an implicit function parameterized by a neural network. It leverages multiresolution hash encoding in the learned feature space, allowing for high-quality, faster training and high-resolution rendering. At test time, our method is driven by a monocular RGB video. Here, an image encoder extracts face-specific features that also condition the learnable canonical space. This encourages deformation-dependent texture variations during training. We also propose a novel optical flow based loss that ensures correspondences in the learned canonical space, thus encouraging artifact-free and temporally consistent renderings. We show results on challenging facial expressions and show free-viewpoint renderings at interactive real-time rates for medium image resolutions. Our method outperforms all existing approaches, both visually and numerically. We will release our multiple-identity dataset to encourage further research.
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
+
Footnote †: leftmargin=*]
images and produce multi-view consistent renderings. These features make implicit representations suitable for the general task of 3D scene reconstruction and rendering, including human face digitization.
Neural implicit representations (Mildenhall et al., 2022; Park et al., 2019) and, in particular, NeRF have been used for face digitization due to its high level of photorealism (Athar et al., 2022; Gafni et al., 2021; Zheng et al., 2022). Here, one of the main challenges is how to model complex facial motions. Faces are dynamic objects and are often influenced by the activation of facial expressions and head poses. An early adaptation of NeRFs, applied to the human face, represents such motion by simply conditioning the implicit function, represented as an MLP, on 3DMM parameters (Gafni et al., 2021). While this produces interesting results, it has a few limitations, primarily the inability of such 3DMMs to reconstruct high-frequency skin deformations and model the mouth interior. In follow-up methods, a common approach is to model motion by learning a canonical space via template-based deformation supervision (Athar et al., 2022; Zheng et al., 2022). However, this kind of supervision limits the ability of these methods to accurately model regions not represented by the underlying parametric model e.g. the mouth interior.
Mixture of Volumetric Primitives (MVPs) (Lombardi et al., 2021) combines the advantage of mesh-based approaches with a voxel-based volumetric representation that allows for efficient rendering. Specifically, it utilizes a template-based mesh tracker to initialize voxels and prune empty spaces. Here, a primitive motion decoder modifies the initialized positions of the primitives. This method produces state-of-the-art results with the highest level of photorealism, mainly due to its hybrid voxel-NeRF representation as well as its capability to train on multi-view video data. However, finding the optimal orientation of the primitives solely based on a photometric reconstruction loss is highly challenging. As a result, this method produces inaccurate reconstructions and artifacts in regions exhibiting fine-scale details such as the hair. It is also expensive to train, requiring around 2.5 days when trained on an NVIDIA A40 GPU.
In this paper, we present a novel approach for producing high-quality facial avatars at state-of-the-art level of photorealism. Our approach uses a voxelized feature grid and leverages multiresolution hash encoding. It is trained using a multi-view video camera setup and, at test time, drives the avatar via a monocular RGB camera. Unlike related methods (Gao et al., 2022; Lombardi et al., 2021), our approach does not require a template to aid in modeling scene dynamics or pruning of empty space. Instead, we learn a fully implicit canonical space that is conditioned on features extracted from the driving monocular video. We regularize the canonical space using a novel optical flow based loss that encourages artifact-free reconstructions. Our model can be rendered under novel camera viewpoints and facial expressions during inference (see Fig. 1, left). It produces highly photorealistic results and outperforms state-of-the-art approaches (Gao et al., 2022; Lombardi et al., 2021; Park et al., 2021), even on challenging regions such as the scalp hair. Our contributions are summarized as follows:
* We present a method that leverages a multiresolution hash table to generate volumetric head avatars with state-of-the-art photorealism. The avatar is trained using multi-view data and is driven by a monocular video sequence at test time. The core of our method is a canonical space conditioned on features extracted from the driving video.
* We propose a novel optical flow based loss to enforce temporal coherent correspondences in the learnable canonical space, thus encouraging artifact-free reconstructions.
* Our model training time is 4-5 times faster than the state of the art (Lombardi et al., 2021). We show a result with 2K resolution for the first time in literature. We also show a setting for rendering our results in real time (see Fig. 1, bottom right).
* We have collected a novel dataset of 16 identities performing a variety of expressions. The identities are captured using a multi-view video camera setup with 24 cameras. Our multi-view video face dataset is the first captured at 4K resolution, and we will release it to encourage further research.
* We show that the high level of photorealism of our model can even generate synthetic training data at high fidelity, opening the door to generalizing the image encoder to arbitrary input views for driving the avatar.
We evaluate our approach visually and numerically against ground truth data. Here, we ablate our method with different design choices to illustrate their importance in the overall performance. Our approach outperforms existing methods (Gao et al., 2022; Lombardi et al., 2021; Park et al., 2021) visually and numerically, including a multi-view implementation of (Gao et al., 2022; Park et al., 2021).
## 2. Related Work
This section reviews prior work on photorealistic human head avatar generation, including approaches using monocular or multi-view RGB data. Early methods are based on explicit 3D scene representations, while recent ones leverage implicit representations.
### Monocular Head Avatar Generation
Several monocular avatar generation methods rely on explicit 3D models to estimate or regress a 3D face (Gecer et al., 2019; Lattas et al., 2022; Lin et al., 2020; Ren et al., 2022; Shamai et al., 2019; Tewari et al., 2018; Thies et al., 2019; Tran et al., 2019; Yamaguchi et al., 2018) or a 3D head containing the face, ears, neck, and hair (Cao et al., 2016; Ichim et al., 2015; Nagano et al., 2018) with photorealistic appearance from 2D images. These methods employ a statistical deformable shape model (a.k.a. 3DMM) of human faces (Cao et al., 2014; Gerig et al., 2018; Li et al., 2017), which provides parametric information to represent the global shape and the dynamics of the face. However, explicit model-based approaches often generate avatars with coarse expressions or facial dynamics and usually lack a detailed representation of the scalp hair, eyes, and/or mouth interior e.g. tongue. Other approaches attempt to synthesize dynamic full head avatars in a video via generative 2D neural rendering, driven via sparse keypoints (Meshry et al., 2021; Wang et al., 2021) or dense parametric mesh priors (Chandran et al., 2021; Kim et al., 2018; Tewari et al., 2020; Thies et al., 2019). These methods usually utilize GANs to translate parametric models into photorealistic 2D face portraits with pose-dependent appearance. Still, these methods
struggle with fine-scale facial details, and they fail to generate 3D-consistent views.
Recent advances in neural implicit models for personalized head avatar creation from monocular video data have shown great promise. Most approaches learn deformation fields in a canonical space using dense mesh priors (Athar et al., 2022; Gao et al., 2022; Grassal et al., 2022; Zheng et al., 2022). Here, [14] leverages multi-level hash tables to encode expression-specific voxel fields efficiently. However, it still needs to regress to an intermediate expression space defined via 3DMM. While the above methods generate photo-realistic 3D heads with full parametric control, reconstructions can lack dynamics and fine-scale geometrical details, and they cannot handle extreme expressions. On the other hand, our approach is not 3DMM based and thus can model complex geometry and appearance under novel views. This is attributed to our learnable fully implicit canonical space conditioned on the driving video, as well as a novel scene flow constraint.
### Multi-view Head Avatar Reconstruction
A number of approaches leverage multi-view video data to create view-consistent and photorealistic human head avatars with a high level of fidelity. In the literature, we identify approaches that can reconstruct avatars from sparse views (\(<=10\) high-resolution cameras) or require dense multi-camera systems with dozens of high-resolution views to achieve high-quality results. Due to the large volume of high-resolution video data, recent approaches have also focused on reducing computational and memory costs. Strategies such as efficient sampling (Wang et al., 2021) and empty space pruning (Lombardi et al., 2021) have been proposed. We also adopt these strategies for efficient and highly detailed rendering at high resolutions.
Sparse multi-view methodsA line of research investigates lightweight volumetric approaches that aim at reducing the number of input views while attempting to preserve the reconstruction fidelity of dense camera approaches. Sparse methods often resort to a canonical space representation (Park et al., 2021), which serves as a scene template for learning complex non-linear deformations. Pixel aligned volumetric avatars (PAVA) (Raj et al., 2021) is a multi-identity avatar model that employs local, pixel-aligned neural feature maps extracted from 3D scene locations. KeypointNeRF (Mihajlovic et al., 2022) is another generalized volumetric avatar morphable model that encodes relative spatial 3D information via sparse 3D keypoints. At inference, both PAVA and KeypointNeRF can robustly reconstruct unseen identities performing new expressions from 2 or 3 input views. TAVA (Li et al., 2022) encodes non-linear deformations around a canonical pose using a linear blend skinning formulation. TAVA requires 4-10 input views to train a personalized model. While these approaches can generate photorealistic avatars with plausible dynamic deformations from sparse input views, they cannot generate fine-scale details and are sensitive to occlusions, producing renderings artifacts. We demonstrate that regions that undergo sparse sampling can still be reconstructed at high fidelity by imposing temporal coherency via optical flow.
Dense multi-view methodsEarly work with dense setups, called Deep Appearance Models (DAM) learn vertex locations and view-specific textures of personalized face models via Variational Autoencoders (Lombardi et al., 2018). Pixel Code Avatars (PiCA) (Ma et al., 2021) improve upon DAM by decoding per-pixel renderings of the face model via an implicit neural function (SIREN) with learnable facial expression and surface positional encodings. Most recent dense approaches adopt volumetric representations, such as discrete voxel grids (Lombardi et al., 2019), hybrid volumetric models (Lombardi et al., 2021; Wang et al., 2021), or NeRFs (Wang et al., 2022). Here, hybrid approaches combine coarse 3D structure-aware grids and implicit radiance functions, locally conditioned on voxel grids (Wang et al., 2021) or template-based head tracking with differentiable volumetric ravamarching (Lombardi et al., 2021). In (Wang et al., 2022), a morphable radiance fields framework for 3D head modeling, called MoRF, is proposed. This framework learns statistical face shape and appearance variations from a small-scale database, though it demonstrates good generalization capabilities. While dense methods produce photo-realistic avatars, renderings tend to exhibit inaccuracies and blur artifacts, especially for complex structures and in infrequently observed areas, such as the scalp hair and mouth interior. Besides, most dense approaches rely on head priors, either mesh tracking or coarse voxel grids, and thus, they are prone to reconstruction errors and have limited representation power, e.g., handling details, mouth interior, and hair. Our approach overcomes existing limitations by solely relying on a well-constrained canonical representation that preserves expression semantics and scene flow correspondences.
### Generalized 3D Consistent Neural Representations
Modeling 3D-aware scenes with implicit models has been active research in recent years. Popular methods are NeRFs (Mildenhall et al., 2022) and neural Signed Distance Functions (SDFs) (Park et al., 2019), both parameterize the 3D space using multi-layer perceptrons (MLPs). Since such methods are often computationally expensive, efficient feature and/or scene space encodings, such as hash grids (Fridovich-Keil et al., 2022; Muller et al., 2022) or trees (Takikawa et al., 2021; Yu et al., 2021), have been proposed to boost performance. In the literature, generalized implicit models for head avatar reconstruction are learned from a large corpus of 2D face images with varying pose and facial shape using neural SDFs (Or-El et al., 2022; Ramon et al., 2021), GAN-based NeRFs (Chan et al., 2021; Deng et al., 2022; Gu et al., 2022) or hybrid volumetric approaches with tensor representations (Chan et al., 2022; Wang et al., 2021). Generalized models often lack personalized details. However, they have proven themselves to be robust priors for downstream tasks, such as landmark detection (Zhang et al., 2022), personalized face reenactment (Bai et al., 2022) and 3D face modeling (Abdal et al., 2023).
We remark that NeRFs have stood out as superior implicit representations for head avatar creation as they excel at reconstructing complex scene structures. Some recent prior-free NeRF-based methods focus on generating detailed avatars from very sparse 2D imagery, e.g., using local pixel-aligned encodings (Mihajlovic et al., 2022; Raj et al., 2021), while others model dynamic deformations
when working with unstructured 2D videos by warping observed points into a canonical frame configuration [14, 15] or modeling time-dependent latent codes [16, 17]. We remark that dynamic approaches, while achieving impressive results, are designed to memorize the scene representations and cannot control the model beyond interpolations. In addition, some approaches build upon dynamic NeRF approaches by incorporating parametric models, e.g., 3DMMs [14, 15], as input priors to enable full facial control [16, 17].
## 3. Method
Let \(\{I_{j}^{t}\}\) (\(j=1\dots N,i=1\dots M\)) be multi-view frames of a person's head performing diverse expressions, where \(N\) is the number of frames and \(M\) is the total number of cameras. Our goal is to create a high-quality volumetric avatar of the person's head, which can be built in a reasonable time and rendered under novel views and expressions at unprecedented photorealism and accuracy. Humans are capable of performing extremely diverse and extreme expressions. Our model should be able to capture these in a multi-view consistent manner with a high degree of photorealism. As shown in Fig. 2 (a), we have 4 components. Our model drives the avatar from a monocular image encoded via a CNN-based image network \(E_{Y}\). We then have an MLP-based deformation network \(A_{\theta}\), which can map a point in the world coordinate system to a canonical space conditioned on the image encoding. We learn features in the canonical space using a multiresolution hash grid \(A_{\alpha}\). The features in the grid are interpreted to infer color and density values using an MLP-based network \(A_{\beta}\). Given any camera parameters, we use volumetric integration to render the avatar. In the following, we provide details about the capture setup and data pre-processing step (Sec. 3.1), describe the scene representation of our model (Sec. 3.2), and formulate various objective functions used for model training (Sec. 3.4).
### Data Capture Capture
Our approach is trained using multi-view images captured from a 360-degree camera rig. The rig is equipped with 24 Sony RXO II cameras, which are hardware-synced and record 4K resolution videos at \(25\) frames per second. The cameras are positioned in such a way that they capture the entire human head, including the scalp hair. The rig is covered by LED strips to ensure uniform illumination. In our setup, we recorded a total of \(16\) identities performing a wide variety of facial expressions and head movements. Please see Fig. 3 for a sample identity captured from multiple viewpoints. For a more detailed description of our dataset, please refer to Sec. 4.1.
_Preprocessing_. Cameras are calibrated using a static structure with a large number of distinctive features. Here, we use Metashape [12] to estimate the extrinsic and intrinsic parameters. We also perform background subtraction using the matting approach of Lin _et al._[16] to remove any static elements from the scene, e.g., wires, cameras, etc. To simplify background subtraction, a diffused white sheet was placed inside the rig, with holes for each of the camera lenses.
### Scene Representation
We parameterize our model using Neural Radiance Fields inspired by the state-of-the-art novel view synthesis method NeRF [18]. Since the original method is slow to train and render, we utilize a multiresolution hash grid-based representation to make our model efficient, akin to instant NGP [18]. As both original NeRF and instant NGP were proposed for static scene reconstruction, we seek to model the dynamic performance of the head, including facial expressions. To this end, we represent our
Figure 2. Left: To extract a robust encoding that parameterizes the dynamics of the head, we pass a driving image through a CNN encoder to obtain a low dimensional vector \(e\). A deformation network \(A_{\theta}\) conditioned on \(e\) deforms the input coordinates \(y(x)\), where \(y(.)\) denotes positional encoding. We then use multiresolution hash encoder \(A_{\alpha}\) to encode the deformed points in the canonical space, and feed the features from the hash grid, and encoding \(e\) as input to a radiance field network \(A_{\beta}\), which outputs density and color values. By combining these values through volume rendering, we are able to render the avatar under unseen input and camera viewpoints. Right: We impose a novel scene flow based constraint by utilizing the optical flow at frame \(t\) and \(t+1\) (see Eq. 5). Such constraints enforce good correspondences in the canonical space, thus reducing rendering artifacts.
model, \(A\) as
\[A:(x,v,e)\rightarrow(c,\sigma)\, \tag{1}\]
where \(x\in\mathbb{R}^{3}\) is a point in \(3D\), \(v\in\mathbb{S}^{2}\) is the viewing direction, \(e\in\mathbb{R}^{256}\) represents the latent vector obtained from the image encoding network \(E_{y}\). This latent vector parameterizes deformations due to expressions and head movements. Furthermore, \(c\) and \(\sigma\) are the color and density values, respectively. Mathematically, instant NGP parameterizes \(A\) with two modules. The first module is based on a multiresolution hash grid, denoted \(A_{\sigma}\), and the second module is parameterized by an MLP, denoted \(A_{\beta}\). The latter takes features looked up from \(A_{\sigma}\) and decodes a given point \(x\) and view direction \(v\) into \(c\) and \(\sigma\). To model dynamic variations of the input driving performance, we introduce another module, denoted \(A_{\theta}\), which takes as input a point in world space and expression latent vector, and regresses a deformation field that converts the world point \(x\) to a canonical space, as follows:
\[x_{o}=A_{\theta}(x,e)+x. \tag{2}\]
We learn the radiance field in this canonical space using \(A_{\sigma}\) and \(A_{\beta}\), and parameterize the operator \(A_{\theta}\) using a linear MLP. One could also naively provide the driving image latent code directly to \(A_{\beta}\) instead of modeling a deformation field to canonical space. However, we show in our experiments (see Sec. 4.4) that such a naive parameterization creates artifacts. Thus, learning a deformation field is critical in reducing the artifacts.
Once we have the radiance field representation of the scene, we use standard volumetric integration to synthesize color \(C\) for each ray \(r(t)=o+td\), with near and far bounds \(t_{n}\) and \(t_{f}\), as follows:
\[C(r) =\int_{t_{n}}^{t_{f}}T(t)\sigma(r(t))c(r(t))dt\,\] \[\text{where}\quad T(t) =\exp(-\int_{t_{n}}^{t}\sigma(r(s))ds). \tag{3}\]
_Efficient ray marching._ As in instant NGP, we improve efficiency by skipping regions that do not contribute to the final color based on the coarse occupancy grid. The occupancy grid typically spans \(64^{3}\) resolution, with each cell represented by a single bit. The occupancy grid is updated at regular intervals by evaluating the density of the model in the corresponding region in space. The high in each bit represents the corresponding \(3D\) region that has density above a certain threshold. Note that only these regions contribute to the final rendering. As our scene is dynamic, we make certain changes to suit this setting. We initialize \(G\) separated occupancy grids corresponding to \(G\) uniformly sampled frames. We update each of these grids independently for \(200,000\) iterations. Then, we take the union of all the grids to create a single occupancy grid that we utilize for the rest of the training and novel view synthesis.
Figure 3: An example of our camera rig capturing the same expression from 16 different viewpoints.
### Encoder
Our model is conditioned on a latent vector \(e\) to drive the avatar. In the literature, some methods use expression parameters obtained from face tracking using an existing morphable model (Ahtar et al., 2022; Gafni et al., 2021). Other methods parameterize the latent vector obtained from an image encoder (Raj et al., 2021). Using an image encoder is advantageous since it can capture diverse expressions as opposed to expression parameters obtained from a 3DMM. Typically, tracking pipelines utilize linear morphable models that have limited expressivity and are prone to tracking errors (B.R. et al., 2021). In this paper, we rely on image encoder \(E_{Y}\) to parameterize the dynamics of the human head because it allows us to capture diverse and extreme expressions faithfully, which is the main focus of our paper. We parameterize \(E_{Y}\) using a CNN-based network, which receives as input an image I and outputs the encoding vector \(e\). Specifically, we adopt a pre-trained VGG-Face model (Parkhi et al., 2015) as our encoder and add a custom linear layer at the end. During training, we finetune all the VGG layers as well as the custom layer.
### Objective Function
Given the above representation of our model, we learn the parameters of \(E_{Y},A_{\theta},A_{\alpha}\), and \(A_{\beta}\) modules in a supervised manner using multi-view image and perceptual constraints as well as dense temporal correspondences:
\[\mathcal{L}=\mathcal{L}_{L2}+\lambda_{perc}\mathcal{L}_{perc}+\lambda_{of} \mathcal{L}_{of}. \tag{4}\]
Reconstruction LossesGiven camera extrinsic and model representation, we render images and employ image reconstruction loss, \(\mathcal{L}_{L2}\) using L2 loss between ground truth and rendered images. This term introduces multi-view constraints to train our model. However, L2 loss alone could result in missing some high-frequency details, which are perceptually very important. As a result, we introduce a widely used patch-based perceptual loss \(\mathcal{L}_{perc}\), based on a pre-trained VGG Face network (Parkhi et al., 2015). We use the output of the first \(6\) layers obtained from an input patch size of \(64\times 64\) to compute this loss term.
Optical flow based LossAs our dataset consists of sparse views and hash grid-based representation has localized features, a model trained only with \(\mathcal{L}_{L2}\) and \(\mathcal{L}_{perc}\) losses tend to overfit training views, resulting in artifacts when rendering novel views. To mitigate it, we propose a novel loss term \(\mathcal{L}_{of}\) based on pre-computed \(2D\) optical flow between concurrent frames. The motivation behind this loss term is to propagate pixel correspondences to the 3D canonical space with the aim to regularize the dynamic scene and mitigate the model's artifacts when trained with sparser views. We achieve this by enforcing the canonical points of neighboring temporal frames to be close to each other for the points near the surface of the avatar.
Mathematically, let \(p^{t},p^{t+1}\) be the corresponding pixels between consecutive frames obtained using \(2D\) optical flow. For these pixels, we first obtain their corresponding expected depth values through volume rendering. The corresponding \(3D\) points \(x^{t},x^{t+1}\) associated with expected depth can be considered to be close to the surface. We find the corresponding points in the canonical space using \(A_{\theta}\), as defined in Eq. 2. Let \(x^{t}_{o}\) and \(x^{t+1}_{o}\) be the corresponding points in the canonical space. We enforce all such points to be close between them by employing an L1 loss, similar to (Kasten et al., 2021):
\[\mathcal{L}_{of}=\|x^{t}_{o}-x^{t+1}_{o}\|_{1}. \tag{5}\]
Please refer to Fig. 2 (b) for an illustration of the proposed loss term.
### Implementation Details
We use \(5\) layered MLP as our deformation network \(A_{\theta}\). We provide hash encoding parameters and their ranges used in our experiments in Tab. 1. Our radiance field network \(A_{\beta}\) is parameterized by a \(5\) layer-deep MLP. We set \(\lambda_{perc}=0.1\) and \(\lambda_{of}=0.2\) in our experiments. We also follow a PyTorch implementation (Tang, 2022) of instant NGP (Muller et al., 2022) to employ error map-based pixel sampling while training, for better convergence. Specifically, we maintain a \(128\)x\(128\) resolution error map for each training image, which is updated in every iteration to reflect the pixel-wise \(L_{2}\) error. This is then used to sample rays where errors are the highest at each iteration. Finally, we update our encoder \(E_{Y}\), deformation network \(A_{\theta}\), hash grid \(A_{\alpha}\) and radiance field \(A_{\beta}\) with learning rates \(1e-5\), \(1e-3\), \(1e-2\) and \(1e-3\), respectively. Our model is trained for \(500,000\) iterations. We have observed that model convergence is faster than in MVP (Lombardi et al., 2021). It takes about \(12\) hours to converge, as opposed to the \(50\) hours required by MVP with the same GPU resources.
## 4. Experiments
In this section, we show the effectiveness of our high-quality volumetric head avatar reconstruction method in synthesizing novel dynamic expressions and views at high fidelity and resolution. We show two main applications our approach enables, namely dynamic free-view synthesis from arbitrary monocular viewpoints as well as renderings at different image resolutions, including FHD. We also perform a thorough analysis of our modeling choices and conduct quantitative and qualitative evaluations with state-of-the-art baselines. We refer the reader to the supplemental for video results.
### Datasets
Our multi-view video dataset consists of \(16\) subjects, including \(14\) males and \(2\) females, and most of them are in their \(208\) or \(30\)s. The subjects have short to long-length hairstyles. Male subjects either are shaved or have stubble or hairy beads. A collage of the recorded subjects is shown in Fig. 4, top. To build our dynamic dataset, we instructed subjects to perform random expressive faces during \(2\) minutes and/or recite \(47\) phonetically balanced sentences. Among the \(16\) subjects, \(4\) have only performed expressions, \(1\) has only
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Values \\ \hline Number of levels & \(16\) \\ Max. entries per level (hash table size) & \(2^{14}\) \\ Number of feature dimensions per entry & \(2\) \\ Coarest resolution & \(16\) \\ Finest resolution & \(2048\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Different parameters used for defining the hash grid.
performed reciting, while 11 have performed both. We will release our full multi-view video dataset to foster future research on head avatar generation. For all of our experiments reported next, we utilize \(18\) views, each containing \(1500\) frames at 960x540 resolution, to train our personalized models and generate results, unless stated otherwise. We processed 6 subjects covering a wide variety of our dataset e.g. gender, expressions, facial hair, movements, scalp hair, ethnicity, etc.
### Qualitative and Quantitative Results
Fig. 5 shows dynamic expression synthesis of \(4\) personalized avatar models on test sequences, while Fig. 6 illustrates free viewpoint synthesis of \(5\) personalized models. Note that the generated views represent interpolations from training views. In both figures, the avatars are driven by a frontal-looking monocular RGB video. Our approach achieves high-quality renderings of head avatars under novel camera viewpoints and for challenging novel expressions. Tab. 2 shows that our approach on average obtains high PSNR (over 31 dB) and low reconstruction errors on test sequences based
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Metrics & Without & Without & Without & Ours \\ & canonical & image feature & optical flow & \\ & space & conditioning & based loss & \\ \hline PSNR \(\uparrow\) & 29.24 & 29.64 & 29.38 & **31.23** \\ \hline L1 \(\downarrow\) & 3.61 & 3.64 & 3.32 & **2.79** \\ \hline SSIM \(\uparrow\) & 0.8698 & 0.8744 & 0.8517 & **0.8837** \\ \hline LPIPS \(\downarrow\) & 0.1408 & 0.1191 & 0.1200 & **0.1130** \\ \hline \end{tabular}
\end{table}
Table 2. Ablation study: Image quality and perceptual metrics for different design choices. L1 measures the absolute error of unnormalized RGB images. Our full method produces the best results (see bold text).
Figure 4. Top: Visualization of all identities captured in our multi-view camera setup. Our dataset captures a variety of facial hair, scalp hair, expressions, and ethnicities, among others. Bottom: Example of meta data released with our dataset.
on different image quality and perceptual metrics. Please see the supplemental for video results.
### Applications
_Avatar Synthesis from an Arbitrary Monocular Viewpoint_. In previous experiments, we have shown that we can drive our head avatar using a monocular video captured from a frontal view. Here we further show an application where we can drive our head avatar from an arbitrary viewpoint. To achieve this, we define a fine-tuning scheme described as follows: First, we synthesize a training dataset from a novel viewpoint, say \(\hat{v}\), with the personalized avatar model described in Sec.3. This dataset contains the same dynamic expressions used for training. Then, we finetune the image encoder with this synthetic video stream for 100k iterations. Note that the deformation and radiance field networks as well as the multiresolution hash encoding remain unchanged. Once the image encoder has been fine-tuned, we can drive the personalized avatar model with the real data stream coming from the viewpoint \(\hat{v}\). In our experiments, \(\hat{v}\) is a held-out viewpoint not used when training the avatar model.
Fig. 7 compares frontal renderings of Subject 3's avatar model, driven from two video streams with unseen expressions: One driven from a frontal view camera and another driven from a held-out bottom view. Our method produces high-fidelity renderings regardless of the driving video viewpoint, and the rendered expressions faithfully reproduce those shown in the driving video. This demonstrates that our personalized avatar model can generate photo-realistic renderings from arbitrary viewpoints at high fidelity. These renderings can be used as a good approximation of real images to fine-tune the image encoder from arbitrary driving viewpoints. Note that this experiment paves the way for learning high-fidelity personalized avatars that can be driven from video captured in the wild.
_FHD Image Synthesis._ Our multiresolution hash grid encoding allows for training a personalized avatar model at full HD resolutions, which surpasses the capabilities of state-of-the-art approaches. Our method can render HD images (960x540) at about 10 fps and FHD (1920x1080) images a bit below 3 fps. Fig. 8 compares renderings of personalized models trained at HD and FHD resolutions. Both models generate visually similar facial features and details, though the FHD model produces criger results, as expected. Overall our approach scales well, and the decrease in runtime is near linear. Fig. 9 shows that our approach can also run on a resolution of 480x270 in real time (\(25\) fps) while still maintaining high fidelity in the reconstructions. Note that the reported runtimes are based on a single NVIDIA A100 GPU. Please see the supplemental video for more results.
### Ablative Analysis
We demonstrate our main contributions and the influence of design choices via a number of ablation studies. Specifically, we study our novel optical flow based loss, learned image based feature conditioning of the canonical radiance field network, and canonical space representation. We also analyze the influence of perceptual loss and error map based pixel sampling in the reconstruction quality. Note that for these experiments, we train our personalized avatar
Figure 5. Qualitative results: Dynamic expression changes. _Bottom to top. Subject \(1\)_, \(2\), \(3\), and \(4\).
models on \(18\) views, while we keep out \(2\) views for our quantitative evaluations.
Fig. 10 shows the reconstruction quality of our method and different modeling choices for a fixed unseen expression and a novel camera viewpoint rendering (a held-out view). Here, the error map (bottom row) represents a pixel-wise mean square error (MSE) of head renderings in RGB color space. Fig. 11 further compares our approach with the same design choices, for a fixed expression but under dynamic novel viewpoint synthesis. Note that dynamic viewpoints are interpolated from different camera viewpoints. From these results, we can observe that without conditioning the canonical space on the driving image features the reconstruction has blurry artifacts all over the mouth. Without the optical flow based loss, blocky artifacts and/or inconsistent fine-scale details appear in sparsely sampled regions, such as hair, eyelids, and teeth. Note that a canonical space representation is required for proper encoding of facial dynamics; otherwise, artifacts emerge. The error heatmap visualization in Fig. 10 (bottom row) provides a quantitative measurement of the error distribution, showing that our approach with all design choices achieves the best rendering quality. Tab. 2 shows the average reconstruction error over the entire test set (200 frames) for different well-established image-based quality metrics. We adopt similar metrics to that of MVP [11]. We measure the Manhattan distance \(L1\) in the RGB color space, PSNR, SSIM [23], and LPIPS [10]. Overall our approach attains the best numerical results. This study confirms that our key modeling choices optimize the rendering quality. We also show in Fig. 12 that the perceptual loss and error map based sampling improve the rendering results. While we have noticed that these components help in improving rendering quality, we do not emphasize them as a contribution.
Fig. 6: Qualitative results: Dynamic novel view synthesis for different subjects. _Bottom to top: Subject \(5\)_, \(2\), \(3\), \(6\), and \(4\).
### Comparisons with the State of the Art
In this section, we compare our approach with a recent multi-view state-of-the-art method, called MVP (Lombardi et al., 2021), which
Fig. 8. Avatar synthesis at different resolutions. _Left to right:_ Model trained at HD and FHD resolutions, respectively.
Fig. 7. Avatar synthesis from different driving viewpoints. _Top:_ Frontal view driving video and frontal rendering. _Bottom:_ Bottom view driving video and frontal rendering.
Fig. 9. Real-time rendering (\(25\) fps) at 480x270 resolution
produces detailed avatars with high fidelity under a similar setup to ours. We disregard direct comparisons with state-of-the-art sparse multi-view approaches since they tend to lack fine-scale details or are prone to artifacts for novel viewpoint synthesis (see Sec. 2). In addition, we provide baseline comparisons with an adaptation of a template-free dynamic representation, called HyperNeRF (Park et al., 2021), and a multi-level hash table-based approach for expression encoding, called NeRFBlendShape (Gao et al., 2022). We will call our multi-view and image-driven adaptation of these approaches HyperNeRF++ and NeRFBlendShape++.
To train NeRFBlendShape++, we pass each entry of the expression latent vector to a learnable multi-level hash table. We linearly combine the output of these hash tables and condition the NeRF network on it. To train HyperNeRF++, we feed the neural features passed on by the image encoder to an ambient and deformation network and then as appearance conditioning to the NeRF network. To run MVP, we use 4k primitives. We employ an in-house FLAME-based tracking to obtain a non-detailed dense reconstruction of the subject's head to guide the initialization of the primitives at each frame.
Fig. 13 shows the reconstruction quality of our method and baseline approaches for a fixed unseen expression and a novel camera viewpoint rendering (a held-out view), while Fig. 14 compares them in a free-viewpoint synthesis setup. HyperNeRF++ over smooths regions. Both NeRFBlendShape++ and HyperNeRF++ exhibit artifacts in regions that undergo recurrent topological changes, e.g., the mouth interior, or that have complex structures, e.g., scalp hair. The latter not only produces stronger artifacts in the form of grid patterns but also removes facial details. Overall these methods generalize poorly due to over-parameterized representations. MVP can sometimes produce wrong facial expressions in extreme cases or even sometimes show unusual block artifacts for the same regions mentioned above. One of the main reasons is that MVP relies on very dense multi-view imagery to supervise volume rendering. However, in a sparser camera setup undersampled areas, especially those undergoing disocclusions, become ambiguous without explicit dense volume deformation constraints. The error heatmap visualization of Fig. 13 (last row), shows that our method reduces reconstruction errors. Overall our approach produces sharper, more accurate, and more photorealistic rendering results. Please refer to the supplementary video for further comparisons in dynamic viewpoint synthesis.
We perform quantitative evaluations on the 2 held-out views, with 200 frames each. Quantitative comparisons are reported in Tab. 3. Our approach clearly outperforms other baseline approaches, especially when comparing perceptual metrics, such as SSIM and LPIPS. L1 reconstruction error is also significantly reduced. We remark that our approach attains sharper reconstructions with faster convergence and efficiency, the latter thanks to hash-encoding and empty-space pruning techniques.
## 5. Limitations and Future Work
Our method produces highly photorealistic renderings with novel viewpoints and expressions. However, it suffers from a number of limitations. First, we noticed that it can generate artifacts in motions undergoing strong disocclusions (uncovering occlusions). For instance, in the case of the tongue, artifacts could occur around
Figure 10. Ablation study: Fixed view image synthesis for different design choices. _Left to right:_ Without canonical space, without feature conditioning, without optical flow based loss, and ours. The top row shows a rendering of Subject 3 (and ground truth), while the bottom row shows the error map. The error is computed as the per-pixel mean squared error (MSE), encoded in RGB color space. Here, blue denotes 0 MSE, yellow is 60 MSE, and reddish colors mean over 100 MSE. Our full method achieves the best results.
the mouth boundaries as the tongue starts to come out (see Fig. 15, Frame 1, blue region). The rendering quality, however, stabilizes with good quality as soon as the tongue becomes fully visible (see Fig. 15, Frame 2). Future work could address this limitation e.g. by including occlusion-aware priors. Second, our solution is currently person-specific. Future work could examine building a model that generalizes to unseen identities. For this, our dataset of 16 identities is a good starting point, though it might require more identities. Here, we could also investigate refining the model using in-the-wild data. Third, while we have shown real-time renderings at a resolution of \(480\times 270\), future avenues could enable real-time rendering at higher resolutions e.g. FHD synthesis. Here, we could investigate for instance super-resolution techniques, akin to (Chan et al., 2022; Xiang et al., 2022). Finally, we have shown results driven by monocular RGB videos so far. Theoretically, our image encoder could be replaced with other pre-trained encoders of different input modalities, such as audio signals. This would increase the spectrum of applications of our work.
## 6. Conclusion
We presented a novel approach for building high-quality digital head avatars using multiresolution hash encoding. Our approach models
Figure 11. Ablation study: Novel view synthesis quality. _Left to right:_ Ours, without optical flow based loss, without image feature conditioning, and without canonical space. Our full method achieves the best results.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Metrics & HyperNeRF++ & MVP & NeRFBlendshape++ & Ours \\ \hline PSNR \(\uparrow\) & 26.42 & 28.72 & 29.66 & **31.23** \\ \hline L1 \(\downarrow\) & 5.61 & 3.64 & 3.23 & **2.79** \\ \hline SSIM \(\uparrow\) & 0.8509 & 0.8283 & 0.8745 & **0.8837** \\ \hline LPIPS \(\downarrow\) & 0.1721 & 0.1432 & 0.1326 & **0.1130** \\ \hline \end{tabular}
\end{table}
Table 3. Quantitative comparison with state-of-the-art approaches. L1 measures the absolute error of unnormalized RGB images. Our approach outperforms related methods (see bold text).
Figure 12: Ablation study: Structural consistency and detail quality. _Left to right_: No perceptual loss, no error map sampling, ours, and ground truth.
Figure 13: Quantitative comparison with the state of the art: _Left to right_: Results of HyperNeRF++ [12], MVP [13], NeRFBlendShape++ [14], ours and ground truth. The top row shows visual results, while error maps are shown in the bottom row. The error is computed as the per-pixel mean squared error (MSE), encoded in RGB color space. Here, blue denotes 0 MSE, yellow is 60 MSE, and reddish colors mean over 100 MSE. Our method clearly outperforms the state of the art.
a full head avatar as a deformation of a canonical space conditioned on the input image. Our approach utilizes a novel optical flow based loss that enforces correspondences in the learnable canonical space. This encourages artifact-free and temporally smooth results. Our technique is trained in a supervised manner using multi-view RGB data and at inference is driven using monocular input. We have shown results rendered with novel camera viewpoints and expressions. We have also shown different applications including driving the model from novel viewpoints. Our approach also shows the first 2K renderings in literature and can run in real-time at a 480x270 resolution. Overall our approach outperforms all existing methods, both visually and numerically. We will release a novel dataset of 16 identities captured by 24 camera viewpoints and performing a variety of expressions. We hope our work brings human digitization closer to reality so that we all can stay in touch with our friends, family, and loved ones, over a distance.
|
2305.10599 | Odyssey: An Interactive Workbench for Expert-Driven Floating-Point
Expression Rewriting | In recent years, researchers have proposed a number of automated tools to
identify and improve floating-point rounding error in mathematical expressions.
However, users struggle to effectively apply these tools. In this paper, we
work with novices, experts, and tool developers to investigate user needs
during the expression rewriting process. We find that users follow an iterative
design process. They want to compare expressions on multiple input ranges,
integrate and guide various rewriting tools and understand where errors come
from. We organize this investigation's results into a three-stage workflow and
implement that workflow in a new, extensible workbench dubbed Odyssey. Odyssey
enables users to: (1) diagnose problems in an expression, (2) generate
solutions automatically or by hand, and (3) tune their results. Odyssey tracks
a working set of expressions and turns a state-of-the-art automated tool
"inside out," giving the user access to internal heuristics, algorithms, and
functionality. In a user study, Odyssey enabled five expert numerical analysts
to solve challenging rewriting problems where state-of-the-art automated tools
fail. In particular, the experts unanimously praised Odyssey's novel support
for interactive range modification and local error visualization. | Edward Misback, Caleb C. Chan, Brett Saiki, Eunice Jun, Zachary Tatlock, Pavel Panchekha | 2023-05-17T22:43:29Z | http://arxiv.org/abs/2305.10599v2 | # Odyssey: An Interactive Workbench for Expert-Driven Floating-Point Expression Rewriting
###### Abstract.
In recent years, researchers have proposed a number of automated tools to identify and improve floating-point rounding error in mathematical expressions. However, users struggle to effectively apply these tools. In this paper, we describe an iterative design process, working with novices, experts, and tool developers, to investigate user needs during the expression rewriting process. We find that users want to compare expressions on multiple input ranges, integrate and guide various rewriting tools, and understand where errors come from. We organize this investigation's results into a three-stage workflow and implement that workflow in a new, extensible workbench dubbed Odyssey. Odyssey enables users to: (1) _diagnose_ problems in an expression, (2) _generate solutions_ automatically or by hand, and (3) _tune_ their results. Odyssey tracks a working set of expressions and turns a state-of-the-art automated tool "inside out," giving the user access to internal heuristics, algorithms, and functionality. In a user study, Odyssey enabled five expert numerical analysts to solve challenging rewriting problems where state-of-the-art automated tools fail. In particular, the experts unanimously praised Odyssey's novel support for interactive range modification and local error visualization.
Floating Point; Expert Programming; Debugging; Developer Tools; Term Rewriting, Dynamic Analysis +
Footnote †: journal: Computer Science
automated test generation tools (Herbie, 2017), error analysis tools (Herbie, 2017; Herbie, 2017; Herbie, 2017; Herbie, 2017; Herbie, 2017), and repair tools (Herbie, 2017; Herbie, 2017). For example, the open-source, state-of-the-art Herbie tool (Herbie, 2017) takes as input a floating-point expression and uses algebraic and analytic identities to rewrite the expression via a complex search process. Despite wide adoption of Herbie in industrial and national labs, many users still struggle to apply Herbie, reporting that it does not work for their use case, that its results are too complicated to trust, or that it misses rewritings they see as obvious.
To support developers working with floating-point arithmetic, we engaged in an iterative design process to understand and improve usability for numerical tools. We investigated both novice and expert use of Herbie by reviewing user-submitted bug reports, interviewing the Herbie developers, testing prototypes with the broader floating-point research community, and observing users in an in-lab design study. We find that users struggle with specifying their objectives and interpreting Herbie's results, facing issues of tool/user objective mismatch, lack of trust in the automated tool, and a need for independent exploration.
As a result of our investigation, we identified a three-stage floating-point rewriting workflow: (1) _diagnosing problems_, in which users identify the problematic operations within expressions; (2) _generating solutions_, in which users gather potential expression rewritings from automated tools, references, or their own creativity; and (3) _tuning_, where users test, tweak, and compare different rewritings to optimize the resulting expression for their own accuracy, performance, and maintainability needs. This workflow is not well-addressed by existing tools. For example, end-to-end tools like Herbie can take minutes to return analysis results, and there is no tool support for comparing and improving rewritings drawn from multiple sources.
To support this workflow, we designed and implemented Odyssey, an interactive workbench that allows users to identify problem areas in floating-point expressions using error visualizations, collect and manage expression rewrites using an interactive table, and combine rewrites to minimize rounding error. Odyssey leverages Herbie as an analysis and rewriting engine, but retains context about the user's objectives, allowing it to return common analyses in less than a second.
To evaluate the effectiveness of Odyssey, we conducted a study with five experts in numerical computing and floating-point arithmetic. On average, the experts successfully completed five out of seven challenging tasks drawn from real-world numerical problems within one hour. The interactive nature of Odyssey enabled experts to concentrate on high-level problem-solving and facilitated the swift evaluation and comparison of expression rewritings. By combining the power of automated systems with a dynamic, human-driven workflow, we see a workbench like Odyssey as a significant step toward enabling users to work efficiently alongside automated tools on complex domains beyond floating-point rounding error.
This paper makes three contributions:
1. An investigation of the needs of novices and experts, summarized in a three-stage workflow for floating-point expression rewriting--diagnosis, solution generation, and tuning--combining both automated tools and human rewritings.
2. An iteratively-developed workbench, Odyssey, that supports this workflow.
3. A study of Odyssey's effectiveness based on feedback from expert users who completed a set of challenging tasks drawn from real-world numerical problems.
## 2. Background and Related Work
Odyssey draws on techniques from the developer tool literature on program visualization and program history to addresses key challenges developers face in the domain of floating-point arithmetic.
### Program Visualization for Debugging
Floating-point error analysis and repair involves a mix of debugging and performance optimization work. Odyssey is thus inspired by work aimed at program visualization for debugging. Systems such as Whyline (Whyline, 2017), Timelapse (Herbie, 2017), and FireCrystal (Whyline, 2017), which connect code with runtime behavior by visualizing execution traces, inspire several of Odyssey's interactions, including the interactive "local error" heatmap visualizing per-operation floating-point error for a particular input. Moreover, a series of papers on integrating visualizations with code, such as Theseus (Whyline, 2017), which provides always-on visualizations of runtime state; Projection Boxes (Whyline, 2017), which gives programmers more control over which runtime values
Figure 1. Floating-point error is pernicious; even familiar, simple expressions can yield meaningless results.
are visualized; and Hoffswell et al. (2016), which provides recommendations for embedding visualizations in code, are reflected in our design of Odyssey's error graph, which allows programmers to visualize floating-point error and control which input values and rewritings are visualized. Odyssey sees similar benefits from these designs as prior work: opening up space for programmer exploration and observation, and thereby giving programmers a fuller understanding of the problem space and a richer set of interactions for comparison and repair.
That said, floating-point rounding error is a continuous, numeric quality of a program, and the "tuning" stage of numerical work therefore has a lot of analogs to performance optimization. Beck et al. (2010) and PerformanceHat (Beck et al., 2011), for example, visualize the proportion of runtime spent at each each line of code in the program. These approaches inspire our "heatmap" design for local error information, coloring each floating-point operation in the program based on the amount of floating-point rounding error it contributes to the result. The Roly-poly (Roly, 2012) project is also quite similar to Odyssey, aiding developers in exploring and selecting performance optimizations for image processing code. Odyssey explores a similar system-aided optimization workflow, but for accuracy instead of performance optimization.
### Maintaining and Reviewing Code Versions
To understand, experiment with, and collaborate on code, developers author and compare multiple program alternatives and listones (Heck et al., 2016). Tools such as Azurite (Azurite, 2017), Verdant (Verdant, 2017), and Variolite (Vavilotte, 2017) provide explicit support for multiple program versions. For example, Verdant helps data scientists compare, replay, and simplify histories for code in computational notebooks (Verdant, 2017). Also, Head et al. (Head et al., 2017) introduce "code gathering" techniques that find the minimal code slices in a program that produce a selected set of results. Comparing and combining multiple alternative rewritings is a also key part of floating-point error repair.
Odyssey maintains a history of rewritings both to provide a history of how a rewriting was developed and also allow developers to visualize, compare, and combine multiple alternatives, providing explicit internal support to what would otherwise be internal operations, thereby reducing cognitive load and allowing developers to focus on the higher-level problem-solving aspects.
### Floating-Point Arithmetic and Numerical Analysis
Floating-point arithmetic, defined by the IEEE 754 standard (Hoffswell et al., 2016), and variations of this standard form the standard number representation in most programming languages (Hoffswell et al., 2016). However, floating-point arithmetic is subject to rounding error, and even elementary computations often permit significant error (Hoffswell et al., 2016). Numerical analysis provides a set of mathematical tools to analyze, bound, and reduce this error (Hoffswell et al., 2016). However, many programmers are unfamiliar with numerical analysis techniques, and even fewer have a thorough understanding of how to apply these tools.
Researchers have thus developed a vast menagerie of tools automating specific numerical analysis techniques, including Rosa (Rosa, 2017) for affine arithmetic, FPTaylor (Odyssey, 2017) for error Taylor series, and Ariadne (Ariadne, 2017) for root finding. Other tools repurpose static analysis techniques to find floating-point rounding errors; such tools include Fluctuat (FPT, 2017), which uses abstract interpretation; FPDebug (Hoffswell et al., 2016), which uses a dynamic execution with shadow variables; and CGRS (Hoffswell et al., 2016), which uses evolutionary search. These tools can find inputs with high rounding error or, in some cases, certify the absence of such errors. Programmers can then use the error found to attempt to understand the source of the rounding error, and ultimately fix it. One popular tool combining these steps is Herbie (Herbie, 2017). Herbie uses sampling techniques to identify floating-point error; constructs candidate rewrites using algebraic and analytic identities, and tests those rewrites against higher-precision executions to identify the rewrite with the lowest floating-point error. In recent releases, Herbie can output multiple suggestions with different performance and accuracy characteristics (Hoffswell et al., 2016).
Unfortunately, all of these tools, Herbie included, are difficult for developers to use and integrate into their workflows. Users are typically expected to identify the expression and inputs of interest up front; compare them to other sources or the user's own ideas; and make trade-offs between accuracy and other goals (e.g., maintainability), all without tool support. Users are often recommended to switch between their code editor, version control system, a mathematical visualization tool, and multiple Herbie instances in order to solve a single problem (Hoffswell et al., 2016). VSCode-PRECiSA (Beck et al., 2011), a VSCode interface for the PRECiSA command-line tool (Beck et al., 2011) designed to support the process of analyzing a single program in several ways, is somewhat of an exception; however, it does not address the problem of tool interoperation. We developed Odyssey to address these limitations by providing a single integrated workbench for the full floating-point rounding error workflow. To lower the barriers to adoption, Odyssey uses Herbie, a widely used and open source tool (Hoffswell et al., 2016; Odyssey, 2017), under the hood.
## 3. Usage Scenario
Alex, a numerical analysis expert, would like to develop an accurate implementation of the asinh function. The asinh function is defined, for positive \(x\), as \(\operatorname{asinh}(x)=\log(x+\sqrt{x^{2}+1})\). However, a naive implementation exhibits very poor numerical behavior on
Figure 2. Users enter a new expression.
many inputs. This example is based on a real-world problem that a numerics expert found and addressed for the Rust standard library using Herbie (Herbie, 2017). Their solution is merged in mainline Rust, benefitting all Rust users. Odyssey would have made this problem easier to catch and fix. Odyssey provides an integrated workbench for the user to diagnose the sources of error in this expression, collect candidate solutions from a variety of sources, and produce an implementation that fits the user's goals.
### First Stage: Diagnosing Problems
Using Odyssey, Alex begins by typing the mathematical definition, log(x + sqrt(x * x + 1)), into Odyssey's expression entry box (see Figure 2), along with the range of possible x values. In this case, the definition is only valid for positive \(x\), so Alex enters \(0\) as the lower bound. Since this is a library function that can be executed on any input, Alex leaves the default upper bound of \(10^{308}\) in place. The expression and initial input range are used to initialize Odyssey's main screen (Figure 3) and appear in the top left corner of the screen (Figure 3A). If the user needs to launch multiple Odyssey sessions, this part of the screen will help them differentiate them.
Beneath the initial expression, Alex sees Odyssey's rewritings table (Figure 3B). The rewritings table allows the user to collect multiple versions (or "rewritings") of the expression and compare them for accuracy. Each rewriting in the table shows its average accuracy, and rewritings can also be selected or hidden to control the display of other information in Odyssey. Initially, the rewritings table contains a single rewriting, the direct implementation of their expression. In this example, the initial rewriting has quite high error (45.29 bits out of 64) indicating that there is quite some work left to do to produce an accurate implementation.
To better understand the source of this error, Alex refers to the error plot (Figure 3C). This plot shows the error of every rewriting in the table. The horizontal axis shows different input values \(x\) spanning hundreds of orders of magnitude; the vertical axis shows error, with higher values being worse. In this example, three regions are clearly visible: inputs \(x<1\), with high error; inputs \(1<x<10^{150}\), with low error; and inputs \(10^{150}<x\), with high error again. Distinct regions like these often have distinct causes of error and are a starting point for exploring more deeply.
To begin investigating, Alex clicks on one of the points in the error plot; this updates Herbie's "local error heatmap" display (Figure 3D). Local error is an internal heuristic in Herbie that identifies which operations in a rewriting cause rounding error at a given point. By clicking on one point with \(10^{150}<x\), and another point with \(x<1\), Alex confirms that this expression has two distinct sources of error: for large inputs \(x\), the source of error is the sqrt and * operations, while for small inputs \(x\), the source of error is the log operation. After diagnosing the operations with error and the affected inputs, Alex begins generating solutions to these floating-point rounding error problems.
Figure 4. The Expression Details view shows a LaTeX rendering and plain text to help users understand and work with the selected expression.
Figure 3. Diagnosis. The specification (A) shows the expression the user is trying to implement. The rewritings table (B) shows the expressions the user has tried. The error plot (C) shows the error of the current expression. The local error heatmap graph (D) shows the error breakdown of the currently selected point.
### Second Stage: Generating Solutions
To start generating solutions quickly, Alex queries an automated tool using the "Get expressions with Herbie" button (Figure 5A). This automatically translates the expression into Herbie's input format; invokes Herbie; evaluates the error of each of Herbie's suggestions; and translates each one back to a human-readable format.
In this case, invoking Herbie adds five suggestions to the rewritings table and to the error plot (Figure 5C). Since each rewriting in the table lists its error, Alex sees immediately that Herbie's suggestions reduce the original 45.28 bits of error to as low as 0.02 bits of error. Moreover, each rewriting's error is also graphed on the error plot, with different rewritings shown in different colors. Users can highlight the plot for an expression by clicking on its row in the table. For example, by clicking Herbie's fifth suggestion, log(x + hypot(1, x)), Alex sees that this expression avoids error for \(10^{150}<x\) but still has error for smaller values of \(x<1\). Multiple suggestions will probably need to be consulted, compared, and combined to achieve Alex's accuracy, performance, and maintainability goals.
Herbie is not the only source of rewritings in Odyssey. In fact, human creativity is often needed to overcome roadblocks for automated tools, and rewritings may also be sourced from other tools, from papers, or from online references. Therefore, Odyssey allows Alex to add rewritings directly to the rewritings table using the edit box (Figure 5B). As they type, their expression is automatically rendered and an error estimate is provided, to help avoid typos and other low-level mistakes. As Alex works on this expression, the table of rewritings will grow to contain all of the various rewritings or ideas they have considered. By leaving this basic organizational task to Odyssey, Alex is able to focus on high-level reasoning.
### Third Stage: Tuning
After generating solutions to the various floating-point issues in this expression, Alex wants to understand how these rewritings can be combined to produce a single implementation of the expression that satisfies their accuracy, performance, and maintainability goals (Figure 6).
Since, in this case, many of the rewritings are generated by Herbie, they start by understanding those rewritings in greater depth. To do so, Alex clicks on one of these rewritings and looks at the derivation provided for it (Figure 6A). The derivation of a Herbie-generated rewriting shows the sequence of steps Herbie
Figure 5. Solution generation. User can request rewritings from Herbie by pressing a button (A) or enter their own using the expression edit box (B), which provides live feedback and estimates the expression error on the current sample. The rewritings table and the error plot (C) are updated every time a rewriting is added, allowing the user to compare the quality of different rewritings.
used to produce it. Alex scans one derivation that has caught their attention both for ideas that can be lifted and combined with a different rewriting, as well as for potentially dangerous steps. In this case, they spot that Herbie used a Taylor series expansion to derive one of the rewritings. Taylor series expansions are dangerous, because they are often valid only for inputs in a certain range, and can lead to high error if used outside of that range. In this case, Herbie guarded the Taylor series with the conditional \(x\leq 1\); however, it may be possible to tune the condition further.
To begin tuning this piece, Alex uses Odyssey's range adjustment control (Figure 6C). Since the conditional has a threshold at \(1\), Alex enters a range of inputs near \(1\): \(10^{-52}\leq x\leq 10^{12}\). When Alex updates the range, Odyssey samples a new set of inputs all chosen from the selected range, and the plot updates to show only the new set of inputs. Because these inputs are all clustered near \(1\), Alex can now examine error in this range at much higher resolution. Here, the higher resolution reveals what inputs around \(1\) have a spike in error.
To fix this new-found problem, Alex continues to test new rewritings using the expression edit box. Since, at this point, Alex has already found many quite-accurate rewritings, they choose to modify an existing rewriting using the copy-to-clipboard button (Figure 6B). This allows Alex to easily make small adjustments, such as raising or lowering the threshold by rewriting the branch condition, and see how that affects the inputs they have focused on. Alex may not always tune expressions for accuracy; they might instead simplify rewritings to make them run faster, or make modifications to improve readability and maintainability. In those cases, the error graph allows Alex to validate that error has not increased unacceptably. Finally, Alex has tuned the expression to their liking, so they use the copy-to-clipboard button to copy the final expression and insert it into their program.
Reviewing these steps, Alex used a three-step floating-point error improvement workflow: diagnosing the sources of floating-point error; generating candidate solutions to these source of error; and then tuning and validating the resulting solution until it met their accuracy, performance, and maintainability goals. The entire process was orchestrated through Odyssey's table of rewritings and error plot, which track the various rewritings Alex already considered and allow Alex to easily compare rewritings over the input range. Odyssey additionally provided convenient ways to leverage the automated error-improvement tool Herbie, including invoking Herbie, visualizing internal heuristics, and presenting derivations. Combined, these features allow Alex to focus on higher-level concerns such as accuracy-improving rewrites and acceptable trade-offs between their goals.
## 4. Iterative design process
To understand how to meet the needs of Herbie's users, over a period of roughly 12 months, we (1) reviewed user-submitted bug reports, (2) tested changes to the Herbie user interface, (3) presented a mock-up for a totally new interface to experts, and (4) conducted an iterative user design study to build the new interface. Each phase generated observations and hypotheses about Herbie users' needs and workflows, which ultimately lead to the design of Odyssey.
### Phase 1: Analyzing Herbie's Usage and Issues
We began by reviewing 97 bug reports submitted by Herbie users,2 and the developers' responses. After filtering out bug reports about crashes or installation difficulties, we were left with 24 bug reports where Herbie was unable to meet the user's floating-point error-improvement needs. We then grouped these issues into three clusters of usability issues with some reports belonging to multiple clusters.
Footnote 2: Publicly available at [https://github.com/herbie-fp/herbie/issues](https://github.com/herbie-fp/herbie/issues).
First, in nine bug reports (e.g., issues #247, #333), users are surprised that Herbie's output gives a less accurate answer on specific inputs, even though Herbie reports an overall improvement. In their response to these issues, Herbie developers explain that Herbie improves average accuracy over a range of input values, which might involve decreasing the accuracy for specific inputs. For this cluster, we hypothesized that users would have benefitted from a clearly indicated "input range" parameter for Herbie.
Second, in eight bug reports (e.g., issues #437 and #391) users asked why Herbie made certain complicated, often meaningless, modifications to their program. In many of these bug reports, the Herbie developers respond that Herbie makes many decisions "at random," and that users could use Herbie's derivations feature to confirm whether a certain change improved accuracy or not. We hypothesized that making Herbie's derivation feature more prominent would help users look at intermediate steps to understand Herbie's decisions and pull out "main ideas" they can understand and maintain.
Third, in 13 bug reports (e.g., issues #247 and #261) users asked why Herbie failed to find a specific modification users had in mind. In most of these bug reports, the Herbie developers respond that Herbie is missing certain "rewrite rules" or needs an internal parameter called "num-enodes" increased, which would allow it to find more rewritings. We hypothesized that users needed more intuitive ways to guide Herbie's search in order to find sensible rewritings for their applications.
Reviewing the bug reports led us to observe the following:
* Users want to optimize solutions for limited input ranges relevant to their own needs.
* Users want to understand why automated tools choose the rewritings they do, especially when the rewritings conflict with their own goals for an expression.
* Users want to communicate their own rewriting ideas to automated tools to help shape the search process.
The Herbie developers confirmed that they were aware of the clusters of bugs we identified and had similar hypotheses. For example, the Herbie team had recently added an option for user-specified input ranges, though it was hidden behind an "advanced options" panel. Given our hypothesis about the importance of input ranges, we designed and contributed an interface that allowed users to more easily provide input ranges. This was merged into Herbie and released, along with the Herbie developers' other improvements, as Herbie 1.6.
### Phase 2: Testing the User Interface Changes
After the release, we continued monitoring the Herbie issue tracker. To our surprise, the changes made in Herbie 1.6 did not seem to significantly change issues users faced.
While users _were_ explicitly specifying input ranges, the user-specified input ranges were usually extremely wide, often as wide as Herbie's default input ranges. Most users clearly did not have a small, domain-specific input range in mind. Instead, users had both a large range of possible inputs they wanted to visualize and several critical ranges where they additionally checked for accuracy.
Meanwhile, the Herbie developers experimented with additional rewrites rules and larger "num-enodes" values (published in [38] and [48]) to try to find the expressions users were suggesting. However, these changes did not solve Herbie's problems with finding user-suggested rewrites: user rewrites might not be more accurate, and Herbie found it difficult to optimize for "simplicity" or "maintainability" automatically. We hypothesized that if users could enter variations of Herbie's outputs themselves and have Herbie _validate_ those programs, they might feel more comfortable using and modifying Herbie's output, even if Herbie's initial results did not meet their needs. Significantly, this would require extending the user interface to allow the user to input and compare multiple programs.
In other words, we made the following key observations:
* Users want to check the performance of solutions both overall and on a variety of _critical ranges_.
* Despite improvements in automated methods, automated search will still sometimes fail to find the results users want. Users should have ways to submit and validate their own attempts at a solution.
* In order to see the effects of their changes, users need ways to track and compare multiple rewritings.
To more rapidly iterate on user interface designs, we began designing Odyssey as a new interface separate from Herbie's default user interface.3
Footnote 3: Coincidentally, around this time, the developers informed us of a new mode for Herbie [42] designed to suggest multiple possible rewritings to the user. Since Odyssey already needed to support tracking and comparing multiple rewrites, we designed Odyssey to use the new mode as its default.
### Phase 3: Presenting a Mock-up to FP Experts
To get early feedback on Odyssey, we demonstrated a mock-up at the FPBench Community Meeting [2]. FPBench is a community of floating-point researchers, including both the Herbie developers and the developers of other floating-point analysis tools. The Community Meeting is a monthly Zoom call where community members share progress, discuss applications, and brainstorm new research directions. In our presentation, we demonstrated how Odyssey allowed users to see, track, and compare suggestions from Herbie; enter their own programs; and check the accuracy of a solution for different input ranges. FPBench community members were supportive. Several community members wanted to use Odyssey with their own floating-point systems as backends.
Numerical methods experts, who would be potential users of Odyssey, critiqued our assumption that users would be able to find their critical ranges based on a visualization. The experts pointed out that the critical ranges they were typically interested in were too small to see in the chart. Based on this early feedback, we added support for coordinated zooming, where users can type in a critical ranges, and Odyssey would simultaneously modify its sampled inputs, visualizations, and accurate evaluations to draw from the new range.
Engaging with the FPBench Community led to the following observations:
* Users need to use a variety of visualization and generation tools that do not currently interoperate.
Figure 6. Tuning. The user can use derivations (A) to help them understand Herbie-generated rewritings. Each expression can be copied using the copy button (B) for easy editing of existing rewritings. The user can use the input range editor (C) to “zoom in” on critical ranges—i.e., resample and reanalyze all expressions on a new range. Above, the user has tried rounding some of an expression’s constants after zooming.
Based on the positive feedback from the FPBench community, we moved on to conduct a user design study with FPBench community members.
### Phase 4: User Design Study with Prototype
Our user design study consisted of nine interviews with participants ranging from floating-point novices to experts. Most participants were graduate students working on floating-point-related research with at least two years of experience. We spaced these interviews out and iteratively added features to Odyssey, responding to user concerns after each interview. This phase reinforced some of our previous observations and led to new observations.
Reinforcing our first observation from Phase 2-Testing UI Changes, we found that users would frequently zoom the chart to specific critical ranges and would interactively shift between the full input ranges and critical subranges. (For an example, see Figure 6C.) One particularly interesting use of this feature we observed was for adjusting branch conditions. For example, if Herbie suggested an "if \(x<0.1\)" condition, users zoomed to the region between 0.01 and 1 and checked whether 0.1 was the best threshold. Users then modified the constant and compared the accuracy of the modified program thanks to Odyssey's support for comparing multiple user-submitted programs, as our other observations from Phase 2-Testing UI Changes suggested.
We also made the following new observations. First, we found that more experienced users iteratively submitted many handwritten programs to Odyssey. In some cases, users modified a Herbie result, used Odyssey's reported error to confirm that the change didn't harm accuracy, and then used the modified program as a base for further modifications. In other cases, users modified a Herbie result and re-ran Herbie on the modified expression, helping Herbie around a road-block of some kind and achieving a lower error as a result. We also saw users combining pieces of different programs into a single final program. In some cases, users described implicit trade-offs, for example noting that Herbie's result was very complex, and that deleting certain terms from Herbie's result was less accurate but easier to read.
Second, we identified and corrected usability issues introduced by Odyssey. For example, when the user input a new rewriting, we initially only provided feedback on whether the user's expression parsed successfully. However, users weren't always sure if they had entered the expressions correctly, and we even found ourselves making mistakes. To address this, Odyssey converts a user's input to LaTeX as they type, and renders it using KaTeX (Brock et al., 2018). This lets users validate their expectations about parser behavior.
Third, we noticed that many participants, including both novices and experts, struggled to explain _why_ there was error in an expression, even when they could see the error in Odyssey's error plot. For example, in the program \(\log(x+\sqrt{x^{2}+1})\), most users could guess that the error for large \(x\) values was caused by overflow, but far fewer participants could identify that error for small \(x\) was caused by the \(\log()\) operation.
In a follow-up conversation with the Herbie developers, we learned that Herbie used a metric called "local error" to identify which operations were likely sources of error. We decided that exposing this metric to the user as a local error "heatmap" (see Figure 3D) could help users better understand floating-point error. At first, our heatmap showed average local error, in keeping with Herbie's internals. However, that left users to guess which input values contributed to average local error of a given operator. Therefore, we extended our visualization to show local error for a specific input. This per-point visualization ended up being much easier for users to understand, since for any input there is typically only one operation with significant local error. Participants frequently used per-point local error to explain why error occurred for specific programs. In the process, we discovered that Herbie's local error implementation had a subtle bug on specific, rare inputs. The Herbie developers were able to patch the bug, and we incorporated the patch into Odyssey.
Finally, after initially removing derivations (see Figure 6A), we realized that they were an important foundation for users' trust in Herbie's results. For example, one participant was surprised when Herbie recommended the expression "1.0" as an "improved" version of some much more complex expression and became skeptical of all of Herbie's other outputs, manually performing derivations to check that those expression had been computed correctly. Adding back support for derivations gave users more trust in Herbie's suggestions.
Through the user design study, we observed the following:
* Experienced users follow an iterative process when rewriting expressions.
* Rapid feedback during expression input helps users catch low-level mistakes.
* Users need help understanding what part of the expression is causing error.
* Users want justification and explanation for the steps of automated tools.
## 5. Expression Rewriting Workflow and Design Objectives
Our design study with users of different skill levels (Phase 4-User Design Study ) led us to model floating-point error improvement as a well-defined workflow consisting of three main stages: diagnosis, solution generation, and tuning. We thus focused Odyssey on features that directly support this workflow.
The three-stage workflow we identified consists of:
First Stage: Diagnosing ProblemsIn this stage, users identify problematic operations within expressions, determine which problems are relevant to their objectives, and finding starting points for further analysis. For instance, in an expression like \(\log(x+\sqrt{x^{2}+1})\), users must determine that the \(x^{2}\) operation overflows for large values of \(x\), while the logarithm is inaccurate for small values of \(x\). The user then decides whether large values of \(x\) are relevant in their environment. If so, they focus on avoiding the overflow in \(x^{2}\).
We developed two principles to support diagnosis. First, as we saw in Phase 1-Bug Reports and Phase 4-User Design Study, users need ways to focus analysis on the parts of the input range and expression they care about investigating--without losing track of the broader analysis. Second, even experts (as in Phase 3-Mock-up and Phase 4-User Design Study ) need tools to help determine
which operations cause error without relying on their expertise or resorting to trial-and-error operation replacement.
Second Stage: Generating SolutionsIn the second stage, users gather potential rewritings from a variety of sources. The objective is to create a pool of rewritings that the use can evaluate and combine to address the problems identified in the first stage. While existing tools, like Herbie, are a valuable source of ideas and potential rewritings, the user must still track and organize the outputs. Moreover, rewriting ideas can come from many other places: other automated tools, papers, online references, and even the user's own creativity. Users need to collect the available rewritings, keep track of their origin, and organize them for easy evaluation.
We developed three principles to support solution generation. First, there must be a central repository of rewritings drawn from multiple sources. The repository must also store source-specific details, such as Herbie's derivations. Second, since users themselves are a major source of ideas, manual input of rewritings must be supported, with instantaneous feedback to provide low-level error checking. This supports a tight feedback loop and eases iterative exploration. Third, where possible, it should be possible to use user inputs as starting points for additional automated exploration, allowing users to overcome roadblocks faced by automated tools.
Third Stage: TuningIn the third stage, users test, compare, and tweak rewritings to optimize for their accuracy, performance, and maintainability goals. Often the diagnosis and solution generation phases help users identify multiple independent problems and multiple independent rewritings that address them. Users must combine these rewritings to address error. This combination process is itself iterative. Users needing to validate that the combination did not introduce its own error. Moreover, the combination process might itself need tuning. Users may want to adjust the threshold at which they switch from one rewriting to another. Overall, this stage involves iterative refinement and experimentation until the user is satisfied with the result.
We developed two principles to support tuning. First, the user needs ways to compare rewritings for accuracy and get instantaneous feedback as they work. Second, users need explicit support for combining rewritings, whether directly using "if" conditions or indirectly by allowing the user to see multiple rewritings at once.
This schematic workflow can be iterated and branched by the user. For example, the user might identify multiple problems and work on solutions to each one independently, and then attempt to combine the results. Alternatively, users may receive a promising result from Herbie and then attempt to understand its floating-point issues and iterate manually to address them. Yet another possibility, the user may fix error at one operation only to cause error at a different operation, requiring them to iterate to resolve this new problem.
Our observations throughout the iterative design process suggest that the principles address explicit user needs during floating-point error improvement. Moreover, these principles allow users to harness both their own expertise and automated tool capabilities to achieve optimal results.
## 6. Implementation
Odyssey is implemented in two pieces: a "backend" that uses Herbie to dispatch numerical tasks, and a "frontend" implemented using web technologies to present an interactive workbench UI to the user. Odyssey can be used via a web browser or embedded into tools like Visual Studio Code.
Figure 7. The general workflow supported by Odyssey. Odyssey starts with a real-number specification, analyzes sources of error, creates different solutions based on the analysis, and tunes solutions based on user’s needs.
### "Database Workbench" Architecture
The key to supporting our design principles is Odyssey's "database workbench" architecture. In this architecture, Odyssey stores a list of rewritings that the user is exploring and manages the invocation of analysis, visualization, and generation tools via its backend. This architecture stores all of the state on the frontend, allowing direct manipulation by the user. The automated analysis, visualization, and generation tools, meanwhile, are stateless, being invoked by Odyssey on whatever rewritings the user is currently considering. This architecture puts the user at the center, giving the user control over their workflow and invoking automated tools only when requested by the user.
This architecture also leads to a natural separation of concerns between the frontend and backend. The Odyssey frontend implements all interactions, graphics, and manipulation actions. However, all numerical tasks (sampling, evaluating error, and generating expressions) are the responsibility of the backend. This is particularly important to Odyssey, because TypeScript, which the frontend is written in, does not provide low-level operations like enumerating floating-point numbers, and the TypeScript environment's implementations of functions like log and sort may differ from the user's target environment. Moreover, while Odyssey currently only invokes Herbie subsystems, the backend could be extended to invoke other tools as well. Here, the backend would be tasked with ensuring that all of the tools invoked by the user interoperate, a responsibility that currently falls to the user.
### The Odyssey Frontend
The Odyssey frontend provides a rewritings table and error plot to help users diagnose problems, generate solutions, and tune the results.
The main state is stored in the rewritings table, shown in Figure 5. All rewritings the user is considering--including both those generated by Herbie and those entered by the user, are stored here. Each rewriting also shows its average error, for easy comparison. A checkbox allows the user to hide expressions from the error plot and other parts of the UI, which functions as a kind of "archiving" operation so that users can ignore sub-par rewritings without an irreversible deletion operation. Additionally, a clipboard button allows users to copy rewritings, which is essential to users modifying or combining rewritings. None of these interactions involve the backend, and are thus instantly responsive to user action.
The input box allows adding rewritings to the table using a natural mathematical syntax backed by a parser from the mathjs library (Herbie, 2017). Odyssey then converts that input both to an instantly-updating LaTeX render (to help users catch mistakes and typos) and to the standard FPCore input format, which Herbie uses to represent rewritings. Herbie is then invoked to analyze the error of the new rewriting, which is then added to the plot. Additionally, rewritings can be added to the table by invoking Herbie to generate suggested rewritings; any rewritings suggested by Herbie are also converted from FPCore back to LaTeX and mathematical syntax so that the user does not have to understand FPCore in order to use Odyssey.
The main visualization is a large error plot. This plot shows the error on all of the sampled inputs, for each of the rewritings in the rewritings table, with colors helping users match each rewriting to its error plot. Because rewritings often have identical error over some range the user can click on a rewriting in the table to highlight it in the error plot; users can also use checkboxes in the table to hide expressions from the error plot. By hovering over each point in the error plot, the user can see the exact sampled input, and by clicking on a point, they can update parts of the UI (such as the local error heatmap) to focus on that specific input. The user can also adjust the input domain using an input range selector below the plot. Changing the input domain causes Odyssey to resample inputs, evaluate each rewriting on the new inputs, and redraw the error plot using the newly-evaluated errors. Once again, besides adjusting the input range, all operations are instantaneous and do not invoke the backend.
On its own, Odyssey does not provide any additional features. However, Odyssey is extensible, and tools invoked by the backend can offer additional visualizations. To see these additional visualizations, the user selects a specific rewrite, and the visualizations are shown beneath the main UI. Selecting the specific rewritings means that different rewritings, which might come from different sources, can provide different kinds of justifications or explanations. Our Herbie backend provides two such visualizations: the local error heatmap and derivations. When Odyssey is extended to support additional backend tools, we expect each tool to provide its own additional visualizations. In lieu of additional visualizations, manually-entered expressions can have a "note" explaining the user's reasoning or referencing some external source.
### The Herbie Backend
Odyssey's Herbie backend is used to sample inputs, evaluate the error of rewritings, and suggest new rewritings to the user. Herbie was originally designed as a batch-mode tool, so part of our work involved adding an HTTP API to expose various internal analysis functions so that they can be invoked by Odyssey. Luckily, the Herbie features that we wanted to expose, including input sampling and error evaluation, were already independently-invocable functions in Herbie. Odyssey therefore does not require many changes to Herbie's internals, and tools besides Herbie could potentially be used in the backend.
A key challenge in the backend is dealing with latency. Herbie operations typically take several seconds, and suggesting alternatives for complex expressions can take minutes. We used the Typescript SolidJS library (Becker et al., 2017) to ensure that all invocations of the backend were asynchronous, allowing the user to continue working while Herbie processes their requests. Because Odyssey's core interactions (such as selecting individual rewritings and examining input points) do not require interaction with the backend, Herbie's latency could be hidden to a significant extent. Another source of latency was Herbie's initial design as a batch-mode tool. This means that Herbie typically samples inputs, evaluates error, and suggests rewritings every time it is invoked, even though some of those steps (like sampling inputs) are slow while others (like evaluating error) are fast. Odyssey's Herbie backend thus independently caches the outputs of each step (like the sampled inputs). This way, evaluating the error of an expression is done on cached sampled inputs and takes
milliseconds instead of resampling the inputs, which would take seconds.
This approach was also used when extending Odyssey with two additional visualizations from Herbie: the local error heatmap and derivations. The local error heatmap uses the cached sampled inputs to evaluate an internal Herbie heuristic, local error, which estimates the error introduced at each input and by each operation in some rewriting. Odyssey then visualizes the results by coloring the high-error operations of the rewriting in red. While local error was originally just an internal Herbie heuristic, users found it helpful in diagnosing floating-point problems. Additionally, derivations use the cached suggested rewritings produced by Herbie to show the sequence of steps Herbie took to produce its output. Originally meant as a debugging tool for the Herbie developers, we found that derivations increased user trust in Herbie's results. Importantly, while local error and derivations are specific to Herbie, the general principle of exposing internal heuristics could be applied to other tools. Odyssey's "database workbench" architecture then provides a modular set of additional visualizations where these tools' internal heuristics can be presented to the user.
## 7. Expert Evaluation
The goal of the expert evaluation was to assess the effectiveness of Odyssey in supporting the three-stage workflow we identified: diagnosing problems, generating solutions, and tuning expressions.
### Protocol
We conducted an interview study with five experts from the floating-point community to evaluate the effectiveness of Odyssey in supporting the three-stage workflow. We recruited the experts via email through professional networks and communities (e.g., FPBench). Each expert had different levels of experience in academia and industry, ranging from 3 years to over 45 years, and their backgrounds covered various aspects of floating-point systems, including hardware design, verification, and optimization.
Each interview session was conducted over Zoom, with experts operating the tool via remote control to avoid early issues we experienced with participants on networks with special configurations. Interviews lasted between 60 and 90 minutes and consisted of three parts:
* _Introduction and tutorial._ The first author briefly introduced Odyssey and the problems it is designed to address. Then, each expert followed a hands-on tutorial demonstrating the usage of Odyssey on a simple example.
* _Seven tasks._ Each expert completed seven tasks, each designed for one of the three workflow stages and aimed at eliciting the experts' reactions to different parts of Odyssey's interface (Table 1). If the experts encountered difficulties, the first author provided guidance or reminded them of relevant interface features from the tutorial.
* _Exit survey and discussion._ To conclude, each expert completed a survey (Table 2) and participated in a semi-structured interview with the first author, where the experts reflected on their experience with Odyssey and provided feedback on potential improvements and extensions. The first author specifically asked for experts' opinions on the legitimacy of the workflow we aim to support, its relevance to their work, and the extent to which they felt Odyssey supported each part of the workflow.
Throughout all three parts, the experts' screens and audio were recorded. The first author also took note of the experts' comments, insights, and responses to the tasks and survey questions. All study materials are provided as supplemental material.
### Analysis and Results
We conducted an iterative, thematic analysis of expert solutions and the first author's notes for each stage of the workflow. Below, we discuss the experts' responses to the relevant tasks and survey items for each stage. Through this analysis, we aim to provide a qualitative evaluation of Odyssey's effectiveness in supporting each part of the workflow.
First Stage: Diagnosing Problems.: Task 1 required experts to analyze the error in an inverse hyperbolic sine implementation and identify the parts of the expression causing errors, then decide which operation or operations needed to be rewritten in order for the rewriting to correctly handle large inputs. Among the five experts, four successfully completed this task, relying on Odyssey's error visualizations (see Figure 3).
P2 explored multiple input ranges in order to identify the two problematic operations:
* _"This is across the entire sample... so I wonder if it's doing something different on this side [clicking a point with a small x value and looking at the local error graph] So there it's all the log, and over there... [clicks a large x value] it's all the square root. So that's interesting, it's actually coming from different operations."_
Here, the error plot effectively surfaced the two areas of high error (small and large x values), giving the expert clear places to look for troublesome operations. Then, by switching between inputs in different regions, the expert was able to see that the problematic operation was different between these regions.
In the survey (see item 3 in Table 2), all five experts rated the interface's ability to help identify or confirm specific problems with expressions at a 7 out of 7. We attribute this success mainly to the error plot and local error heatmap, which implemented the second principle we identified for a good diagnosis tool. They supported the user in assigning responsibility for error without relying on expertise or resorting to trial and error. As P2 concluded,
* _"Having the graph and being able to click on the different places where error is high is definitely nicer than just looking at output in a text file."_
Second Stage: Generating Solutions.: We designed several tasks to evaluate Odyssey's support for collecting and evaluating new expressions that address the identified problems in floating-point expressions. Close to all experts who attempted each task succeeded (see Tasks 2 and 5 in Table 1).
Task 2 required experts to analyze a troublesome subexpression from Task 1 and find a better rewriting for it. Then, experts needed to bring the solution back to the original analysis and decide if they were happy with it. Four of the five experts who attempted
Task 2 successfully completed it, showing that the interface facilitated the collection of solutions and their integration into existing expressions. Of those four, two experts found their own unique approaches to solving the problem identified in Task 1 rather than relying on an automated solution. One expert pulled a factor of \(x\) out of the square root, and another expert created a branch that switched to an approximation for large values of \(x\). Both of these approaches showed low error on the error plot, though the experts noted there could be issues with these choices (for example, branching impacts performance, and dividing by \(x\) is risky when \(x\) could be 0). This showcases the flexibility of Odyssey in allowing users to explore alternative solutions and evaluate their impact on the error plot. (The expert who did not complete Task 2 was our first participant, with whom we lost much of the interview time due to the networking issues mentioned earlier.)
Similarly, Task 5 asked experts to find a more accurate rewriting for a subexpression applicable to small values of \(x\). Three out of the four experts successfully completed this task, further supporting the effectiveness of Odyssey in assisting experts in gathering and evaluating potential solutions.
P3 had the following to say about working through the process up to Task 5:
* _It feels like quite a natural way you might approach this problem as a human. You're burrowing down into it more precisely and pushing your error around a little bit. I thought the transition of 'we've moved the error from the log into the subtract [using log1p], now I know how to deal with the error in a subtract as well'felt natural,... since... once we figure out it was going to be the subtract that was giving us trouble, then [we can use Herbie to rewrite successfully]. It gets there much faster, but it's cool that I also feel that I would have thought about going in a similar direction._
In the survey, experts rated the interface's ability to generate ideas for solving specific problems (item 4) with scores ranging from 5 to 7, with an average of 5.8. The interface's effectiveness in evaluating the quality of ideas quickly (item 5) was rated between 5 and 7, with an average of 6.4. These relatively high ratings indicate that the experts found Odyssey helpful in generating and evaluating ideas for improving floating-point expressions.
Users were able to use Odyssey to successfully generate a variety of valid nontrivial new expressions for analysis, both using an automated tool (e.g. the way we expected users to solve Task 2) and by themselves (P5 and P4). This was significantly different from our experience in the earliest parts of Phase 4-User Design Study of the design process. The ability to send rewrites back to Herbie was a vital part of the solution generation process for the three experts who were able to complete Task 5.
_Third Stage: Tuning._ The third stage of our proposed workflow involves tuning expressions to further optimize their accuracy and performance. To assess Odyssey's support for this stage, we evaluated Task 7, as well as survey items 6 and 7, which inquired about the interface's support for comparing and mixing different expressions.
Task 7 challenged experts to create a more accurate expression than Herbie's best alternative for a given expression by combining different solutions and fine-tuning the branch point. The task demonstrated that a human can use Odyssey to outperform Herbie's
\begin{table}
\begin{tabular}{|c|p{142.3pt}|p{142.3pt}|} \hline Task & Description & Targeted part of workflow & Success rate \\ \hline
1 & \(log(x+\sqrt{x\cdot x+1})\) is an expression for the inverse hyperbolic sine. Identify the parts of the expression causing errors for large/small \(x\). & Diagnose troublesome subexpressions and problematic ranges. & 4/5 \\ \hline
2 & Use Odyssey to find a solution for the troublesome square root subexpression from task 1. & Generate solutions for the subexpression and use these to optimize original expression. & 4/4 \\ \hline
3 & Is your solution to task 2 good enough? & Use visualizations to form evaluation criteria for ending analysis. & 4/4 \\ \hline
4 & Identify problems with branch expressions in fully automated solutions for task 1. & Explain important features of expressions and diagnose issues. & 3/4 \\ \hline
5 & Use Odyssey to find and recommend \(log1p\) to solve small \(x\). & Nudge an automated tool past roadblocks to generate better solutions. & 2/3 \\ \hline
6 & Evaluate whether the full solution for the expression after tasks 1-5 is trustworthy. & Use Odyssey’s feedback on expressions and information about expression soundness to evaluate expressions’ trustworthiness and fitness based on personal standards. & 2/2 \\ \hline
7 & Use branch conditions to outperform a fully automated rewriting for the expression \((exp(x)-2)+exp(-x)\). & Mix solutions from different sources and tune branch conditions to create stronger solutions. & 3/4 \\ \hline \end{tabular}
\end{table}
Table 1. Experts worked through up to seven tasks to exercise the features of Odyssey before a survey-based discussion. Due to time constraints, not all experts completed all tasks.
internal heuristics when unique requirements call for a tailored approach. After using the range zoom feature and noticing Herbie's solution was still outperforming their solution on a small region, P2 remarked, "_So in this view, we can see that we don't have quite the right number [for the branch point]._" The expert then adjusted the branch point based on the visual feedback.
In the survey, experts rated the interface's capacity to help them mix expressions from different sources (item 7) with scores ranging from 4 to 7, with an average of 5.4. The interface's support for comparing different expressions (item 6) was rated even more highly, at an average of 6.4 (range from 6 to 7).
As we can see in the example above, the especially high rating for comparison was likely a result of combining the ability to plot the error for different expressions together with zooming to focus on getting feedback on specific regions. A couple experts (P4, P5) mentioned wanting more support for combining expressions, especially around conditional branches. P4 explained that an automated tool might be able to add guard conditions where appropriate.
Finally, the experts appreciated the potential power of mixing human and automated solutions, with P3 commenting that suggesting log1p and hypot to Herbie felt similar to proof assistant tools where "_if you just add in an additional step on the way or an additional lemma... then it can actually mudge it over that threshold._"
In summary, the results from Task 7, along with the survey responses for items 6 and 7, provide evidence that Odyssey effectively supports tuning expressions for optimal accuracy and performance. The interface enables users to mix expressions and adjust coefficients while offering real-time feedback, streamlining the tuning process and enhancing the overall quality of floating-point expressions.
## 8. Discussion
We see our results as very promising, especially for early work on supporting a unified workflow. While we only included one source of automatic rewritings and one source of analyses in Odyssey (both provided by our Herbie API), even this was enough for users to understand error and solve real-world problems like the Rust asinh bug we've discussed. Furthermore, the tool-independence of Odyssey's database of expressions and analyses means more tools can be easily added in the future.
Floating-point experts were very appreciative of our work, and saw a variety of ways it could be extended to further support their particular areas of expertise. These included ideas like adding support for multi-precision rewritings, incorporating operation cost analyses from Herbie and other tools, adding ways of helping human users simultaneously optimize at least 3 variables, and increasing support for splitting expressions into subregions and subexpressions based on domain-specific heuristics.
The signs we saw and feedback we received pointed to this being an area that can use much more interface-level support, especially from systems that increase the interoperability and user-friendliness of analysis tools. Our experience was that even members of our team who originally had little floating-point background were able to develop features users in our studies ultimately found very helpful, simply by pushing for increased user access to tool-internal data. A broad lesson we have learned is that "black box" tools may use many internal measures and heuristics that are ripe for being turned into an inside-out "white box" toolset.
We would like to highlight again that the aspect of our system most frequently praised by experts, the local error heatmap, was added mainly to help in our preliminary work with less experienced users. We believe it is easy to underestimate the cognitive burden that experts face when working on the details of floating-point error correction, and we encourage designers to relieve that burden wherever possible. Just because an expert should be able to figure out with a little effort where an expression's error is coming from does not mean they should have to expend that effort when a computer can do it for them. Relieving this burden led the experts in our evaluation to praise Odyssey for allowing them to think about the problems they faced at a high level.
More broadly, tool designers should take into account the complete workflow of users. We hope developers can take our work here as an example, and with our positive results, we plan to continue refining the support Odyssey provides.
\begin{table}
\begin{tabular}{||c|l|c|c||} \hline \# & Survey Questions: & Results: & Average: \\ \hline
1 & “The workflow made sense to me and I was able to follow it.* & 5, 5, 6, 7, 7 & 6/7 \\ \hline
2 & “This workflow matches my experience approaching real numerical analysis problems.* & 4, 6, 6, 6, 6 & 5.6/7 \\ \hline
3 & “The interface helped me identify or confirm specific problems with expressions.* & 7, 7, 7, 7, 7 & 7/7 \\ \hline
4 & “The interface allowed me to generate ideas for solving a specific problem.* & 5, 5, 6, 6, 7 & 5.8/7 \\ \hline
5 & “The interface let me evaluate the quality of ideas for rewritings quickly.* & 5, 6, 7, 7, 7 & 6.4/7 \\ \hline
6 & “It was easy to compare expressions in the interface.* & 6, 6, 6, 7, 7 & 6.4/7 \\ \hline
7 & “It was easy to mix together expressions from different sources in the interface.* & 4, 5, 5, 6, 7 & 5.4/7 \\ \hline
8 & “The interface let me focus on thinking about the problem at a high level.* & 5, 6, 7, 7, 7 & 6.4/7 \\ \hline
9 & “I can think of ways to extend this workflow + interface to address numerical analysis problems & 5, 6, 7, 7, 7 & 6.4/7 \\ that I have worked on.* & & & \\ \hline \end{tabular}
\end{table}
Table 2. After completing the seven tasks, experts were asked to evaluate different aspects of the tool on a scale of 1 to 7.
### Limitations and Future Work
A major limitation of our design process was the tight design loop we had to maintain during development. While this was necessary to ensure we were building a system that would be useful to users, this meant we had to compromise on the polish of some features and altogether avoid others which would take too long to implement or require disturbing many parts of the interface. With more time, we plan to further improve the interface's layout and provide more structured expression editing support.
Of course, the main future work we have planned is to extend Odyssey to incorporate more analyses and sources of rewritings, including ideas like operation cost analyses and hardware-specific rewrites that were mentioned by the experts in our study. Tools like PRECISA [45] that already have an HTML-based analysis interface may be a good starting point for testing these integrations.
There are many precedents for applying human-in-the-loop methods to the solution of formal problems, as P3's striking analogy between Odyssey and proof assistants like Coq pointed out. Proof assistants are a well-explored domain with a clear interactive workflow, and we think considering this analogy further could be also fruitful.
Odyssey also has clear potential application in floating-point education. Several of our tasks asked users to explain to the interviewer potential problems with an expression using the interface, and both the experts and the novices in our formative study were able to point out areas of high error, select points, and zoom in to get a better look at problem regions to support diagnostic claims. Odyssey has the potential to thrive in a classroom setting; it could be used by an instructor to show off how expression rewriting makes expressions more accurate or by students to explore and diagnose error sources an expression and try fixing them. We plan to try applying Odyssey in an undergraduate class covering floating-point representations soon.
We are also excited by the explanatory potential offered by the incorporation of large language models (LLM) like GPT. We have found that available language models can, in fact, offer rewritings and generate plausible explanations for users, but they are prone to "hallucinating" and incorporating nonsensical logic, so their output must be validated before it is used. Odyssey is the perfect platform for investigating the rewriting attempts of LLMs, and may be able to incorporate them in other ways, including by sending a record of the user's work on the analysis to the language model as context for further generation.
Finally, a major possible extension was brought up independently by two different participants, who commented that they would be very interested in plugging in additional visualizations showing actual output effects of errors for each expression. For example, one participant has worked with expressions representing ellipses, and wanted to see how different kinds of error could lead to distortion of the ellipses. Allowing for additional visualizations would be a major possible improvement, since it will help users understand whether the error they see on the error plot matters when code is compiled and run in practice. If (as with ellipses) the output space can be mapped back to specific input values, combining output visualization with the error graph heatmap will let experts relate points with noticeable error in the actual output to the particular mathematical operation causing that error. Thanks to the reactive database model of Odyssey's codebase, it will be easy to plug in this kind of output visualization and have it communicate with the rest of the interface about selected points.
Overall, we are excited to see what floating-point experts and novices end up doing with Odyssey and look forward to improving our support for their work in the future.
|
2307.07844 | Studying QGP transport properties in a concurrent minijet+hydro
framework | Minijets are ubiquitous in heavy-ion collision experiments. However, they are
often excluded from the hydrodynamic simulations of QGP as they do not
thermalize at short time scales and are not treated as part of the collective
medium. Using a concurrent jet+hydro framework, we show that the minijets could
account for a significant portion of particle multiplicity. Therefore, the
energy deposition from minijet-medium interactions can substantially modify the
QGP transport properties inferred from model-to-data comparisons. | Charles Gale, Sangyong Jeon, Daniel Pablos, Mayank Singh | 2023-07-15T16:33:36Z | http://arxiv.org/abs/2307.07844v1 | # Studying QGP transport properties in a concurrent minijet+hydro framework
###### Abstract:
Minijets are ubiquitous in heavy-ion collision experiments. However, they are often excluded from the hydrodynamic simulations of QGP as they do not thermalize at short time scales and are not treated as part of the collective medium. Using a concurrent jet+hydro framework, we show that the minijets could account for a significant portion of particle multiplicity. Therefore, the energy deposition from minijet-medium interactions can substantially modify the QGP transport properties inferred from model-to-data comparisons.
## 1 Introduction
There is a wide consensus that a deconfined state of nuclear matter, the quark-gluon plasma (QGP), is created in ultrarelativistic heavy-ion collisions [1]. Relativistic dissipative hydrodynamics has been very successful in describing the low-momentum collective motion of the QGP [2]. Heavy-ion collisions produce numerous moderate-energy jets (minijets) by hard scatterings at early times [3]. These minijets do not thermalize and cannot be effectively treated as part of the fluid. They traverse the medium and deposit energy and momentum through interactions with the QGP. Minijets can account for a significant portion of total multiplicities and act as significant sources of local energy-momentum fluctuations [4].
Minijet evolution can be affected by other minijets whose wake lies in its path. They can give rise to Mach wakes in the fluid and significantly change the evolutionary history of the medium. Consequently, a consistent treatment of minijet dynamics requires a concurrent jet+hydro framework where the minijets and the bulk medium inform each other's evolution.
Here, we study the effect of minijets on the extraction of QGP transport properties. We describe our framework in section 2, look at the modification to hydro evolution in section 3 and discuss our findings in section 4.
## 2 Our framework
The bulk QGP is initialized using the IP-Glasma model [5]. This describes physics below the saturation scale \(Q_{s}\). The color fields in IP-Glasma are evolved for 0.4 fm/c after the collision. The minjets are initialized by employing the hard processes in the PYTHIA8 framework [6]. The space-time positions of the binary nucleon-nucleon collisions used for IP-Glasma and PYTHIA are the same.
The bulk medium is evolved using \(3+1\) D viscous hydrodynamic solver MUSIC [7]. The minijet energy loss is fed into the bulk evolution via a source term [4]. The energy-momentum tensor \(T_{\rm hydro}^{\mu\nu}\) evolves as
\[\partial_{\mu}T_{\rm hydro}^{\mu\nu}=J^{\nu}, \tag{1}\]
where the source term \(J^{\nu}\) is the momentum deposition convoluted with a Gaussian with width \(\sigma_{x}\) in \(x\) and \(y\) directions, and width \(\sigma_{\eta}\) in the rapidity direction,
\[J^{\nu}=\sum_{i}\frac{\Delta P_{i}^{\nu}}{\Delta\tau(2\pi)^{3/2}\sigma_{x}^{2 }\sigma_{\eta}\tau}e^{-\frac{\Delta x_{i}^{2}+\Delta y_{i}^{2}}{2\sigma_{x}^{ 2}}}e^{-\frac{\Delta y_{i}^{2}}{2\sigma_{\eta}^{2}}}. \tag{2}\]
Here, \(\tau\) is proper time, \(\Delta\tau\) is the size of evolution time-step, and \((\Delta x_{i},\Delta y_{i},\Delta\eta_{i})\) is the spatial distance to the \(i^{\rm th}\) momentum deposition \(\Delta P_{i}^{\nu}\).
The energetic partons travel a finite distance in the QGP before stopping. The minijet energy loss is treated within the hybrid strong-weak coupling model [8]. The parton splittings are governed by the weakly coupled perturbative interactions while the minijet-QGP interaction is governed by the strongly coupled energy loss formula [9]. The energy lost per unit length is given as
\[\left.\frac{dE}{dx}\right|_{\rm strongly\ coupled}=-\frac{4}{\pi}E_{\rm in} \frac{x^{2}}{x_{\rm stop}^{2}}\frac{1}{\sqrt{x_{\rm stop}^{2}-x^{2}}}\,. \tag{3}\]
Here, \(E_{\rm in}\) is the initial parton energy and \(x_{\rm stop}\) is the stopping distance. In the strongly coupled limit, the stopping distance can be obtained from holographic calculations [10, 11] as
\[x_{\rm stop}^{\rm AdS/CFT}=\frac{1}{\kappa_{i}T}\left(\frac{E}{T}\right)^{1/3}, \tag{4}\]
where \(\kappa_{i}\) is a species dependent parameter and \(T\) is temperature.
The bulk medium is hadronized using the Cooper-Frye formalism [12] and the surviving minijets are hadronized using the Lund string model in PYTHIA. The partons that never crossed the freezeout hypersurface and were never quenched are hadronized using a corona color neutralisation model (CCN) where the original parton color is preserved, as in vacuum. The quenched partons have their colors randomized and are hadronized along with the medium in the local thermal color neutralisation model (LTCN). All the hadrons undergo cascading via UrQMD [13].
## 3 Modification to hydro
The orientation of minijets is not correlated with the event geometry. They will also enhance energy deposition and entropy production. Consequently, hydrodynamic parameters need to be rescaled to match the data. In this study, we modify the overall normalization (\(s_{\rm factor}\)) of the energy density after the IP-Glasma evolution and the constant shear viscosity to entropy density ratio (\(\eta/s\)) to asses the effects of minijets on medium evolution. The values of these parameters depend on the minimum possible transverse momentum of each parton in a back-to-back parton pair produced in a hard scattering (\(p_{\rm min}^{\rm 1}\)), which is chosen to be above saturation scale. The optimum values of these parameters for three different choices of \(p_{\rm min}^{\rm 1}\) are fixed to reproduce charged hadron multiplicity and \(v_{2}\) (see Fig. 1). The obtained parameters are compared to the case without minijets in Table 1.
For \(p_{\rm min}^{\rm 1}=4\) GeV, the value of \(\eta/s\) and \(s_{\rm factor}\) need to be adjusted by about 85% and 50%, respectively. This is because a sizeable portion of the energy in the bulk medium is being contributed by minijet sources, as seen in Fig. 2. On top of this, numerous un-thermalized minijets, whose \(x_{\rm stop}\) is longer than their path-length in QGP, also contribute to multiplicity. This ratio can be seen
Figure 1: The charged hadron spectra (left) and \(v_{2}\) (right) as a function of centrality for different choices of \(p_{\rm min}^{\rm 1}\), with suitably adjusted \(s_{\rm factor}\) and \(\eta/s\).
in Table 2. The adjustments to these parameters for higher \(p_{\rm min}^{\rm J}\) is less dramatic as the energy contribution from minijets decreases.
The flow profile is significantly different, even after accounting for changes in normalization and shear viscosity. Figure 3 shows the average transverse velocity for different choices of \(p_{\rm min}^{\rm J}\). Flow develops much faster as more and more minijets deposit their momentum and enhance pressure gradients.
Minijets also modify the cooling of QGP fireball. They carry energy in their wake leaving cooler temperatures behind which break the constant temperature isotherms. This results in a larger portion of fireball freezing out earlier as evident in Figure 4.
\begin{table}
\begin{tabular}{c|c|c} \(p_{\rm min}^{\rm J}\) & \(s_{\rm factor}\) & \(\eta/s\) \\ \hline
4 GeV & 0.45 & 0.02 \\
7 GeV & 0.82 & 0.1 \\
10 GeV & 0.9 & 0.125 \\ No Jets & 0.915 & 0.13 \\ \end{tabular}
\end{table}
Table 1: Optimum values of initial state normalization and the shear viscosity to entropy density ratio for different \(p_{\rm min}^{\rm J}\).
\begin{table}
\begin{tabular}{c|c|c} \(p_{\rm min}^{\rm J}\) & \(\langle N_{\rm frag.}/N_{\rm total}\rangle_{0-5\%}\) & \(\langle N_{\rm frag.}/N_{\rm total}\rangle_{40-50\%}\) \\ \hline
4 GeV & 0.077(1) & 0.252(3) \\
7 GeV & 0.0125(5) & 0.033(2) \\
10 GeV & 0.0042(3) & 0.014(2) \\ \end{tabular}
\end{table}
Table 2: The average ratio of the number of hadrons coming from the fragmentation of un-thermalized partons to the total number of hadrons.
Figure 2: Ratio of the energy injected by minijets in the medium to the total energy of the medium as a function of proper time.
## 4 Discussion
The simulations with minijets, with suitable adjustments to \(s_{\rm factor}\) and \(\eta/s\), reproduce the data well. The \(p_{T}\)-integrated spectra and \(v_{2}\) are shown in Figure 1. The differential spectra and \(v_{n}\) can also be reasonable reproduced for different choices of \(p_{\rm min}^{\rm J}\)[4].
The enhanced entropy from the minijets need a downward adjustment of the overall normalization factor of the initial energy density in the hydrodynamic medium. This is to ensure that the
Figure 4: Fraction of energy being frozen out of a constant temperature hypersurface in 30-40% centrality collisions, as a function of proper time.
Figure 3: Averaged transverse fluid velocity as a function of time for different centralities.
correct multiplicities are reproduced. The enhanced fluctuations also require re-tuning of the shear viscosity to entropy density ratio. This has important implications for extraction of QGP properties from model-to-data comparisons in the heavy-ion collision program.
The size of these effects depend on the soft-hard separation scale \(Q_{s}\). This was an open parameter in our study. In principle, this could be optimized from a systematic Bayesian study of this model and experimental data. However, it is more likely that the soft and hard modes do not cleanly separate at a particular scale, and there is an overlap between the two. This warrants an energy loss model with a gradual separation between the soft and hard modes. These aspects are left for future studies.
## Acknowledgements
This work was funded in part by the Natural Sciences and Engineering Research Council of Canada (C. G., S. J.) and in part by the U.S. DOE under Grant No. DE-FG02-87ER40328 (M. S.) Computations were made on the Beluga supercomputer at McGill University, managed by Calcul Quebec and by the Digital Research Alliance of Canada. D.P. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 754496.
|
2301.05305 | Reinforcement Learning-based Joint Handover and Beam Tracking in
Millimeter-wave Networks | In this paper, we develop an algorithm for joint handover and beam tracking
in millimeter-wave (mmWave) networks. The aim is to provide a reliable
connection in terms of the achieved throughput along the trajectory of the
mobile user while preventing frequent handovers. We model the association
problem as an optimization problem and propose a reinforcement learning-based
solution. Our approach learns whether and when beam tracking and handover
should be performed and chooses the target base stations. In the case of beam
tracking, we propose a tracking algorithm based on measuring a small spatial
neighbourhood of the optimal beams in the previous time slot. Simulation
results in an outdoor environment show the superior performance of our proposed
solution in achievable throughput and the number of handovers needed in
comparison to a multi-connectivity baseline and a learning-based handover
baseline. | Sara Khosravi, Hossein S. Ghadikolaei, Jens Zander, Marina Petrova | 2023-01-12T21:36:05Z | http://arxiv.org/abs/2301.05305v1 | # Reinforcement Learning-based Joint Handover and Beam Tracking in Millimeter-wave Networks
###### Abstract
In this paper, we develop an algorithm for joint handover and beam tracking in millimeter-wave (mmWave) networks. The aim is to provide a reliable connection in terms of the achieved throughput along the trajectory of the mobile user while preventing frequent handovers. We model the association problem as an optimization problem and propose a reinforcement learning-based solution. Our approach learns whether and when beam tracking and handover should be performed and chooses the target base stations. In the case of beam tracking, we propose a tracking algorithm based on measuring a small spatial neighbourhood of the optimal beams in the previous time slot. Simulation results in an outdoor environment show the superior performance of our proposed solution in achievable throughput and the number of handovers needed in comparison to a multi-connectivity baseline and a learning-based handover baseline.
Millimeter-wave, user association, beam tracking, handover, reinforcement learning.
## I Introduction
Millimeter-wave (mmWave) is a key radio access technology for beyond 5G communication systems, offering ultra-high data rates due to a large amount of free spectrum [1]. However, due to the fewer scattering paths and significant penetration loss, mmWave links are vulnerable to static or dynamic obstacles. To overcome such severe loss, both base station (BS) and user equipment (UE) may need directional communication using a large number of antennas, which may result in frequent misalignment of beams due to mobility and blockage. Hence, finding and maintaining the optimal beam directions (beam alignment) is necessary. The lengthy period to achieve the beam alignment (hundreds of milliseconds to seconds [2]) results in a high cell search time or BS discovery time in mmWave systems. As reported in [3], the BS discovery time which is the time required to search the target BS when the handover command is received by the UE is about \(200\) ms. Moreover, to improve the capacity and coverage the density of the BSs is usually high in mmWave systems [1]. Hence, conventional handover methods based on instantaneous received signal power can cause unnecessarily frequent handovers and a ping-pong effect. This leads to a severe drop in service reliability. Therefore, fast BS discovery (finding target BS in the handover process), and efficient handover execution techniques, will be required to use the full promise of mmWave cellular networks.
The spatial mmWave channel can be approximated by a few dominant paths, where each path can be defined with its angle of departure (AoD), angle of arrival (AoA) and gain [4]. Hence, one can only estimate these path parameters instead of a large dimensional channel matrix [5, 6]. The process of identifying the dominant paths is called beam training. However, due to the dynamic environment, frequent beam training may cause high overhead1. Temporal correlation of spatial mmWave channel can be employed to accelerate the beam training process by tracking the variation of the dominant path directions [6].
Footnote 1: Overhead depends on the training time compared with the changes in the environment.
### _Related Work_
To address the link failure and throughput degradation in a dynamic environment, the multi-connectivity technique has been vastly analyzed in literature [7, 8]. In this technique, the UE keeps its connection to multiple BSs (either at mmWave band or sub-6 GHz band). However, power consumption, synchronization and the need for frequent tracking are the main challenges. In the 3GPP standard (release 16) two handover techniques are introduced to improve the link robustness during mobility: dual active protocol stack (DAPS), and conditional handover (CHO) [9]. In the DAPS, the connection to the current serving BS is maintained until the connection to the target BS is fully established. In the CHO, the UE is configured with multiple target BSs. During the handover, the UE can select one of the configured BSs as the target BS during the RRC reconfiguration message. Although CHO can decrease the handover failure probability, it may increase the handover latency if the UE asks for multiple handovers during a single RRC reconfiguration [7].
Applying machine learning as the main decision-maker tool to make the optimal handover decision and choose the target BS has been also studied in the literature [10, 11]. The authors in [10] proposed a reinforcement learning (RL) based handover policy to reduce the number of handovers while keeping the quality of service in heterogeneous networks. In [11] an intelligent handover method based on choosing the backup solution for each serving link to maximize the aggregate rate along a trajectory has been proposed.
In terms of beam tracking, authors in [12] applied the correlation of spatial mmWave channel in adjacent locations and proposed the beam steering method based on searching over a small angular space in the vicinity of the previously known valid beams. The authors in [6] applied machine learning to the tracking procedure to extract useful information from the history of AoD tracking.
All the aforementioned works only take handover or beam tracking issues into account. Additionally, they do not study the impact of selecting beam tracking and handover on the achieved throughput of the UE along its trajectory and instead focus on the achieved rate as the primary performance metric.
### _Our Contributions_
In this paper, we develop a novel joint handover and beam tracking algorithm in a mmWave network under mobility. The algorithm aims to associate the UEs to BSs that maximize the sum achieved throughput along the trajectory and ensure the achieved throughput in each location of the trajectory is higher than a pre-defined threshold. The user association process is defined as the process of determining whether a user is associated with a particular BS before data transmissions commence. In the case of handover, the UE is associated with a new BS, whereas in the case of beam tracking, the UE remains associated with the serving BS from the previous time slot. The main contributions of our paper are summarized as below:
* _System Modeling_: We model the user association problem as a non-convex optimization problem. Unlike the existing works in the literature, we consider achieved throughput as the main performance metric to measure the effect of handover or beam tracking on the UEs' quality of service.
* _Learning-based Solution_: The objective function in our proposed user association problem highly depends on the user association mechanism. We utilize the reinforcement learning (RL) algorithm to approximate the solution to this problem. The aim is to decide whether to run a beam tracking algorithm or a handover algorithm.
* _Joint Handover and Beam Tracking Algorithm_: In the case of a handover decision, the target BS will be recognized as the output of the RL algorithm. In the case of beam tracking, the search space will be defined based on our proposed tracking algorithm by searching the directions in the small spatial neighbourhood of the previously selected optimal directions.
* _Empirical Evaluation_: We apply ray tracing with a real building data map as the input. The results show the effectiveness of our proposed method in achieving throughput along trajectories and decreasing the number of handovers.
The rest of the paper is organized as follows. We introduce the system model and problem formulation in Section II. In Section III, we propose our method. We present the numerical results in Section IV and, conclude our work in Section V.
_Notations:_ Throughout the paper, vectors and scalars are shown by bold lower-case (\(\mathbf{x}\)) and non-bold (\(x\)) letters, respectively. The conjugate transpose of a vector \(\mathbf{x}\) is represented by \(\mathbf{x}^{H}\). We define set \([M]:=\{1,2,..,M\}\) for any integer \(M\). The indicator function \(1\{\cdot\}\) equals to one if the constraint inside \(\{\cdot\}\) is satisfied.
## II System Model and Problem Formulation
In this section, first, we introduce the mmWave channel model. Then, we present the user association problem formulation.
We consider a downlink communication with \(|\mathcal{B}|\) mmWave BSs, where each is equipped with \(N_{\mathrm{BS}}\) antennas, communicating with a single antenna mobile UE. We consider analog beamforming with a single RF chain. We assume all BSs allocate equal resources to their serving UEs. The channel between BS \(j\in\mathcal{B}\) and its serving UE during time slot \(i\) is [13]:
\[\mathbf{h}_{j}=\sum_{\ell=1}^{L}h_{\ell}\mathbf{a}^{H}(\phi_{\ell},\theta_{ \ell}), \tag{1}\]
where \(L\) is the number of available paths. Each path \(\ell\) has complex gain \(h_{\ell}\) (include path-loss) and horizontal \(\phi_{\ell}\) and vertical \(\theta_{\ell}\), AoD. Due to the notation simplicity, we drop the index \(j\) and \(i\) from the channel parameters. The array response vector is \(\mathbf{a}(.)\) where its exact expression depends on the array geometry and possible hardware impairments. The signal-to-noise ratio (SNR) in time slot \(i\) is
\[\text{SNR}^{(i)}_{j}=\frac{p|\mathbf{h}_{j}^{H}\mathbf{f}_{j}|^{2}}{\sigma^{2 }}, \tag{2}\]
where \(\sigma^{2}\) is the noise power, \(p\) is the transmit power, \(\mathbf{f}_{j}\in\mathcal{C}^{N_{\mathrm{BS}}}\) is the beamforming vector of BS \(j\).
We define variable \(x^{(i)}_{j}\in\{0,1\}\) for \(j\in\mathcal{B}\) as an association indicator in time slot \(i\), where is equal \(1\) if UE is associated to the BS \(j\) and \(0\) otherwise. Hence, the achieved rate per second per hertz in time slot \(i\) is
\[\mathrm{R}^{(i)}=x^{(i)}_{j_{S}}\log_{2}(1+\text{SNR}^{(i)}_{j_{S}})=\sum_{j \in\mathcal{B}}x^{(i)}_{j}\log_{2}(1+\text{SNR}^{(i)}_{j}),\]
where \(j_{S}\) is the index of the serving BS of the UE during time slot \(i\). Here, we assume each UE is served by only one BS.
We define the achievable throughput per hertz of the UE by multiplying its rate by the data transmission time as
\[\Gamma^{(i)}=(1-\frac{\tau_{b}^{(i)}}{\tau_{c}})\mathrm{R}^{(i)}, \tag{3}\]
where, \(\tau_{b}^{(i)}\) is the beam training duration which may have a different value in each time slot \(i\), and \(\tau_{c}\) is the duration of the time slot that is a fixed value for all time slots, see Fig. 1.
### _Beam Training and Beam Tracking_
As depicted in Fig. (a)a, when the UE is connected to a BS \(j\in\mathcal{B}\), initial beam training is performed by sending pilots over all combination of the beam directions in the codebook during \(\tau_{b}\). Based. on the UE's feedback of the received signal strength (or estimated SNR), the best beam pair directions are selected. Then, the BS and the UE would use this
direction \((\phi_{\ell^{*}},\theta_{\ell^{*}})\) during the data transmission phase. The beamforming vector, \(\mathbf{f}\) is chosen to maximize the achievable rate of the UE. Due to the monotonicity of the logarithm function, this is equivalent to maximising the SNR term in (2). Hence
\[\mathbf{f}_{j}^{*}=\operatorname*{arg\,max}_{\mathbf{f}_{j}\in\mathcal{F}} \quad\left|\mathbf{h}_{j}^{H}\mathbf{f}_{j}\right|^{2} \tag{4}\]
where \(\mathcal{F}\) is the beamforming codebook that contains all the feasible beamforming vectors. The n-th element of the codebook \(\mathcal{F}\) is defined as \(\mathbf{f}(n)=\mathbf{a}(\phi_{n},\theta_{n})\), where \((\phi_{n},\theta_{n})\) are steering angles and \(\mathbf{a}(.)\) is the array response vector.
When the BS continues serving the same UE in a consecutive time slot, only searching the neighbouring beam directions of the main directions can be sufficient to maintain the link quality. This process is called beam tracking. As shown in Fig. 0(b), the duration of \(\tau_{b}\) is much smaller than the initial beam training duration.
### _Problem Formulation_
The UE association depends on the channel quality between the BS and the UE. Due to UE mobility or temporary blockage, the channel quality changes and consequently the UE association. Based on the UEs' velocity, we determine how quickly the channel quality can change and predict the time at which the current UE association needs to be updated. We define \(T_{\mathrm{A}}\) seconds as the frequency of updating the association. Hence, we need to make the decision every \(T_{\mathrm{A}}\) whether to run the handover execution or beam tracking procedure if SNR is lower than the pre-defined SNR threshold (SNR\({}_{\text{thr}}\)). Note that we can have an on-demand reactive handover at any time slot if the link toward the serving BS fails abruptly. However, with a proper choice of \(T_{A}\), the frequency of those reactive events could be very small. We define the duration of the trajectory as \(M\) and consider the discrete time index \(i\) to describe the association update at each interval.
The goal is to maximize the aggregate throughput of the UE along the trajectory while ensuring the achieved throughput in each time slot \(i\) is higher than a predefined threshold. To this end, we define functions \(F_{1}\) and \(F_{2}\) as
* \(F_{1}\) is the averaged throughput along the trajectory as \[F_{1}=\sum_{i=1}^{M}\mathbb{E}\left[\Gamma^{(i)}\right],\] where the expectation is with respect to the randomness of channel fading and the blockage, \(M\) is the duration of the trajectory, and \(\Gamma^{(i)}\) is defined in (3).
* \(F_{2}\) is the expected number of time slots whose throughput is lower than the threshold (\(\Gamma_{\mathrm{thr}}\)).
We formulate the user association at time slot \(i\in[M]\) as an optimization problem which involves finding the \(x_{j}^{(i)}\) corresponding to the association indicator as
\[\max_{\{x_{j}^{(i)}\}_{i,j}} F_{1}-\lambda F_{2} \tag{5a}\] \[\mathrm{s.t.} \sum_{j\in\mathcal{B}}x_{j}^{(i)}=1,\forall,i\in[M]\] (5b) \[x_{j}^{(i)}\in\{0,1\},\quad\forall j\in\mathcal{B},i\in[M] \tag{5c}\]
where \(\lambda\) is a large constant controlling the importance of \(F_{2}\). Constraint (5b) guarantees that each UE is served by one BS.
The optimization problem (5) is nonlinear. Solving this optimization problem requires estimating the expectation value in \(F_{1}\) and \(F_{2}\) which requires running many realizations. Moreover, the impact of choosing the \(x_{j}^{(i)}\) (the target BSs in the handover case or choosing beam tracking procedure) propagates in time and can affect the UEs' performance in the next time slots. Therefore, we need to consider the long-term benefits of selecting association indicators besides their immediate effects on the UEs' performance. Furthermore, In order to select the target BSs, we need to model or predict the UEs' performance in the next time slots, which can add more complexity to the network due to the mobility of the UE and obstacles in mmWave networks. These motivate us to utilize the RL to approximate the solution of (5).
## III Proposed Method
We transform the problem (5) to an RL problem in which the objective function is turned into a reward function, and the constraints are transformed into the feasible state and action spaces. In the following, first, we start with defining the Markov decision process, and then we will describe our joint handover and beam tracking algorithm.
### _Markov Decision Process Formulation_
RL problems are formulated based on the idea of the Markov decision process (MDP), which is the agent's interaction with different states of the environment to maximize the expected long-term reward. The agent is the main decision-maker who can sit on the edge cloud. All BSs are connected to the agent. Now, we define different elements of an MDP.
Fig. 1: \(\tau_{c}\) is the time slot duration. \(\tau_{b}\) is (a) the initial beam training duration when the UE is associated with the new BS (handover case), (b) the beam tacking duration when the serving BS is the same for the consecutive slots.
#### Iii-A1 State Space
The state space describes the environment by which the agent is interacting through different actions. We define the state at time slot \(i\) as \(s^{(i)}=(\ell^{(i)}),j^{(i)}_{S},\text{SNR}^{(i)},I^{(i)})\in\mathcal{S}\), where \(\ell^{(i)}\) is the location index of the UE along the trajectory 2, \(j^{(i)}_{S}\) is the index of the serving BS, \(\text{SNR}^{(i)}\) is the SNR value of the UE with serving BS \(j^{(i)}_{S}\) in time slot \(i\). \(I^{(i)}\in\{0,1\}\) is the beam tracking activation indicator. \(I^{(i)}=1\) means the \(i\)-th time slot is the tracking slot for the UE.
Footnote 2: Note that, we discretize the location of the UE along the trajectory. Hence, every location dimension \((x,y)\) a trajectory with length \(M\) is mapped to a location index \(\ell^{(i)}\in[M]\).
#### Iii-A2 Action Space
The action space includes all possible actions that can be taken by the agent. The action can change the state of the environment from the current state to the target state. In our problem, \(a^{(i)}\in\mathcal{A}=\{0,1,2,...,[|\mathcal{B}|]\}\) is the decision regarding beam tracking (\(a^{(i)}=0\)) or choosing the index of new serving BS in the case of handover decision (\(a^{(i)}\in[|\mathcal{B}|]\)). In other words, if \(a^{(i)}\neq 0\) means the handover decision is made and the value of \(a^{(i)}\) shows the target BS. Hence, the action is to specify a serving BS for the UE along its trajectory.
#### Iii-A3 Policy
A policy \(\pi(.)\) maps the state of the environment to the action of the agent. In our case, \(\pi\) is a function from \(\mathcal{S}\) to \(\mathcal{A}\), i.e., \(\pi:\mathcal{S}\rightarrow\{0,1,...,[|\mathcal{B}|]\}\)
#### Iii-A4 Rewards
The agent obtains the reward after taking an action \(a^{(i)}\) when current state is \(s^{(i)}\) and moves to next state \(s^{(i+1)}\). Here we define reward \(r(s^{(i)},a^{(i)},s^{(i+1)})\) as
\[r(s^{(i)},a^{(i)},s^{(i+1)})=\Gamma^{(i)}-\lambda 1\left\{\Gamma^{(i)} \leq\Gamma_{\rm thr}\right\}, \tag{6}\]
where \(\Gamma^{(i)}\) is defined in (3).
#### Iii-A5 State-action value
The function \(Q_{\pi}(s,a)\) is the long-term reward and is defined as the expected summation of discounted reward in the future for the action \(a\in\mathcal{A}\) that agent takes in state \(s\) under policy \(\pi\). The RL algorithm aims to choose the optimal policy \(\pi^{\star}\) in each state \(s\) that maximizes the \(Q_{\pi}(s,a)\). With discount factor \(\eta\in[0,1]\), we have
\[Q_{\pi}(s,a)=\mathbb{E}\left\{\sum_{i}\eta^{i}r(s^{(i)},s^{(i)},s^{(i+1)}) \right\},\]
where the expectation is over the transition probabilities. In our problem, transition probabilities model the SNR variations due to the randomness of the channel fading and blockage. We assume mobility information including the UEs' current location and its trajectory is known3. Therefore, the transition to the next location is deterministic.
Footnote 3: Note that the location information can be easily fed back through lower-frequency links.
The optimal policy in state \(s\in\mathcal{S}\) is found by
\[\pi^{\star}(s)=\operatorname*{arg\,max}_{a\in\mathcal{A}}Q_{\pi}(s,a). \tag{7}\]
Due to the continuous and large number of state spaces, we apply deep Q-learning (DQL) [14] to solve (7). In DQL, the state-action value function is estimated by the deep neural network function approximators.
### _Joint Handover and Beam Tracking Algorithm_
Algorithm 1 describes our proposed joint handover and beam tracking algorithm along a trajectory with duration \(M\). If the current association cannot offer the required SNR level, the decision regarding handover or beam track is made based on \(a^{(i)}\) as the output of the RL algorithm. In the case of the handover decision, the value of \(a^{(i)}\) represents the target BS.
The beam tracking algorithm based on small spatial measurement in time slot \(i\) is shown in Algorithm 2. In slot \(i\), the algorithm starts by using the main beam of the same serving BS in the previous time slot \(i-1\). If the SNR value is lower than the threshold, then starts a small spatial measurement over the AoD direction of the main beam. To quantify the size of the spatial neighbourhood, we define \(\Delta\phi\) and \(\Delta\theta\) as the maximum absolute horizontal and vertical deviation from the main AoD direction. We define \(\delta\phi\) and \(\delta\theta\) as the measurement resolution in horizontal and vertical, respectively. Inspired by [15], the spatial neighbourhood \(\mathcal{N}\) surrounding the main AoD direction can be expressed using the horizontal neighbourhood \(\mathcal{N}_{\phi}\) and vertical neighbourhood \(\mathcal{N}_{\theta}\) as
\[\mathcal{N}_{\phi}(\Delta\phi,\delta\phi)=\left\{i.\delta\phi:i\in \left[-\left\lfloor\frac{\Delta\phi}{\delta\phi}\right\rfloor,\left\lfloor \frac{\Delta\phi}{\delta\phi}\right\rfloor\right]\right\} \tag{8}\]
\[\mathcal{N}_{\theta}(\Delta\theta,\delta\theta)=\left\{j.\delta \theta:j\in\left[-\left\lfloor\frac{\Delta\theta}{\delta\theta}\right\rfloor, \left\lfloor\frac{\Delta\theta}{\delta\theta}\right\rfloor\right]\right\} \tag{9}\]
where \(\lfloor.\rfloor\) is the floor operation. The complete neighbourhood is the Cartesian product of the horizontal and vertical neighbourhoods as
\[\mathcal{N}(\Delta\phi,\Delta\theta,\delta\phi,\delta\theta)= \mathcal{N}_{\phi}(\Delta\phi,\delta\phi)\times\mathcal{N}_{\theta}(\Delta\theta,\delta\theta)\] \[=\left\{(\phi,\theta):\phi\in\mathcal{N}_{\phi}(\Delta\phi, \delta\phi),\theta\in\mathcal{N}_{\theta}(\Delta\theta,\delta\theta)\right\} \tag{10}\]
The spatial neighborhoods \(\mathcal{T}^{(i)}\) in time slot \(i\) surrounding the main AoD directions \((\phi^{(i-1)}_{\ell^{\star}},\theta^{(i-1)}_{\ell^{\star}})\) in previous time slot is
\[\mathcal{T}^{(i)}=(\phi^{(i-1)}_{\ell^{\star}},\theta^{(i-1)}_{\ell^{\star}},)+ \mathcal{N}(\Delta\phi,\Delta\theta,\delta\phi,\delta\theta). \tag{11}\]
Now given the main AoD direction, we need to find the transmit direction from neighbourhoods \(\mathcal{T}^{(i)}\) that offers the SNR threshold. We represent the sorted direction pairs as \([\mathcal{T}^{(i)}]_{\mathcal{I}}\), where \(\mathcal{I}\) is the sorted indices. It means the directions in \([\mathcal{T}^{(i)}]_{\mathcal{I}}\) increase in distance from the main AoD direction. Starting from the main AoD direction, the SNR of each transmit direction in \([\mathcal{T}^{(i)}]_{\mathcal{I}}\) is measured until a beam pair meets the required SNR level. Afterwards, no further measurements are required. If no direction meets the threshold, the entire \((\Delta\phi,\Delta\theta)\)-neighbourhood is measured to find the beam pairs that offer the SNR threshold.
Note that in the worse scenario, if the selected target BS based on our proposed algorithm cannot offer the required SNR level due to very sudden blockage, the conventional handover methods based on searching over the candidate BSs in UEs vicinity can be applied. However, as shown in the numerical results, such extreme case is rare.
```
0: Trajectory with duration \(M\)
1: Initialization: for \(i=1\) set \(j_{S}^{(1)}\)=1
2:for\(i\in 1,...,M\)do
3:if\(\text{SNR}_{j_{S}}^{(i)}<\text{SNR}_{\text{thr}}\)then
4: Choose the optimal action \(a^{(i)}\) based on current \(s^{(i)}\).
5:if\(a^{(i)}\neq 0\)then.\(\triangleright\) handover execution
6: Set \(j_{S}^{(i)}=a^{(i)}\) and run the initial beam training process and compute the achieved throughput \(\Gamma^{(i)}\) as (3).
7:else
8: Run Algorithm 2 and compute \(\Gamma^{(i)}\).
9:endif
10:endif
11:endfor
12:\(\Gamma^{(i)}\)
```
**Algorithm 1** Joint handover and beam tracking
```
0:\([\mathcal{T}^{(i)}]_{\mathcal{I}}\), \(\text{SNR}_{\text{thr}}\), duration of each beam pair testing (\(\beta\)), \(\text{cnt}^{(i)}=0\).
1:for\((\phi,\theta)\in[\mathcal{T}]_{\mathcal{I}}\)do
2: Set \(\textbf{f}_{j}^{(i)}=\textbf{a}(\phi,\theta)\).
3: Measure \(\text{SNR}_{j}^{(i)}\) as (2).
4: Set \(\text{cnt}^{(i)}=\text{cnt}^{(i)}+1\).\(\triangleright\) number of beam pair testing
5:if\(\text{SNR}_{j}^{(i)}>=\text{SNR}_{\text{thr}}\)then
6:\((\phi_{\epsilon}^{(i)},\theta_{\epsilon}^{(i)})=(\phi^{\text{BS}},\theta^{ \text{BS}})\)
7:\(\tau_{b}^{(i)}=\beta.\text{cnt}^{(i)}\)
8:break;
9:endif
10:endfor
```
**Algorithm 2** Beam tracking in time slot \(i\) at the BS \(j\)
## IV Numerical Results
We evaluate the performance of the proposed method in an urban environment using the ray tracing tool in the MATLAB toolbox. The output of the ray tracing tool is the \(L\) available paths between a BS and a UE in a specific location. The ray tracing maintains the spatial consistency of mmWave channels. As depicted in Fig. 2, we extracted the building map of Kista in Stockholm city, Sweden and used it as the input data for the ray tracing simulation. In our scenario, we assumed the building material is _brick_ and the terrain material is _concrete_. We also add some random obstacles in the street with different heights (\(1\) m and \(3\) m) and widths (\(2\) m and \(4\) m) as the human bodies and various vehicles. These temporary obstacles are distributed randomly in the street with density \(10^{-2}\) per \(m^{2}\). The material loss and the location of the temporary obstacles are chosen randomly in each realization of the channel. The BSs are located on the wall of buildings. The location of the BSs is chosen randomly while covering the entire trajectory. The BSs' height is \(6\) m. We consider a pedestrian mobility model with a speed of \(1\) m/s. We consider the different lengths of the trajectories as \(100T_{\text{A}}\), \(200T_{\text{A}}\), \(300T_{\text{A}}\), \(400T_{\text{A}}\), \(500T_{\text{A}}\). The main simulation parameters are listed in Table I.
In the simulation, we consider the \(\text{SNR}_{\text{thr}}=2\) dB and the throughput threshold \(\Gamma_{\text{thr}}=1\) bit/Hz. The value of \(\tau_{c}\) is \(10\)\(ms\). In the case of handover, we fix the initial beam training duration as \(\tau_{b}=\frac{1}{3}\tau_{c}\). In the case of beam tracking, \(\tau_{b}\) is not fixed and equals the size of measuring neighbourhood multiplied by the duration of each beam pair testing (\(\beta=10\)\(\mu s\)). We compare the performance of our proposed method with two baselines. To have a fair comparison, we choose two baselines in which the target BS for the handover is pre-determined. Hence, we do not take into account the discovery time of finding the target BS in the baselines. Just like in our method, the handover is triggered if \(\text{SNR}<\text{SNR}_{\text{thr}}\).
As **Baseline 1** we consider the multi-connectivity method [8]. We implement a scenario where the UE maintains its connection with a nearby BS as a backup solution while being connected to the serving BS and once it experiences the blockage of the serving link, starts connecting to the backup solution. As **Baseline 2** we select the learning-based handover in [11]. The method shows very good performance in maximizing the achieved rate along the trajectory. In this baseline, the target BS during the handover process is determined by a learning algorithm. Although the target BSs are selected based on the long-term effect on the achieved rate, still can cause frequent handovers and throughput degradation.
First, we fix the number of BSs to \(10\) (see Fig. 2). We consider \(10^{4}\) different channel realization as the input of the RL algorithm. After getting the optimal policy, we test it over real-time measurements and report the average of the performance over \(500\) channel realizations. Fig. 3 shows the average number of locations with unmet throughput thresholds along the trajectory with different lengths and Fig. 4 shows the average number of handovers needed. In comparison to the other two baselines, our method provides better throughput results by selecting to perform either beam tracking or a handover. Furthermore, we note that the two baselines have a higher number of handovers than our method due to only considering the handover solution. Hence, by considering the joint handover and beam tracking problem our method provides better-achieved throughput while decreasing the number of handovers.
Fig. 5 shows the average aggregate achieved
Fig. 2: Simulation area in Kista, Stockholm. The yellow line shows the trajectory. Stars show the location of the BSs.
throughput along the trajectory with length \(300\) m for different numbers of BSs. By increasing the number of BSs the number of the locations satisfying the \(\Gamma_{\text{thr}}\) also increases hence the aggregate throughput along the trajectory increases. Even with a small number of BSs, our method outperforms baselines in aggregate throughput along the trajectory by determining whether to use a handover or beam tracking solution.
We consider 10000 iterations during the training in our method and Baseline 2. With the training machine MacBook Pro 2020 M1 with a memory of 16 GB, each iteration takes about 15 seconds. Note that the absolute value of the training time per iteration depends on the running machine.
## V Conclusions
In this work, we proposed and studied a learning-based joint handover and beam tracking method in a mobile mmWave network. The aim of our algorithm is to maximize the aggregate throughput of the UE along a trajectory and ensure the achieved throughput in each location is higher than the threshold. Our evaluation results showed that by making an optimal decision regarding handover execution or beam tracking, our method provides high achievable throughput and reduces the number of handovers. Considering different mobility models and studying the effect of neighbouring size can be valuable future work.
|
2302.02884 | Intra-operative Brain Tumor Detection with Deep Learning-Optimized
Hyperspectral Imaging | Surgery for gliomas (intrinsic brain tumors), especially when low-grade, is
challenging due to the infiltrative nature of the lesion. Currently, no
real-time, intra-operative, label-free and wide-field tool is available to
assist and guide the surgeon to find the relevant demarcations for these
tumors. While marker-based methods exist for the high-grade glioma case, there
is no convenient solution available for the low-grade case; thus, marker-free
optical techniques represent an attractive option. Although RGB imaging is a
standard tool in surgical microscopes, it does not contain sufficient
information for tissue differentiation. We leverage the richer information from
hyperspectral imaging (HSI), acquired with a snapscan camera in the 468-787 nm
range, coupled to a surgical microscope, to build a deep-learning-based
diagnostic tool for cancer resection with potential for intra-operative
guidance. However, the main limitation of the HSI snapscan camera is the image
acquisition time, limiting its widespread deployment in the operation theater.
Here, we investigate the effect of HSI channel reduction and pre-selection to
scope the design space for the development of cheaper and faster sensors.
Neural networks are used to identify the most important spectral channels for
tumor tissue differentiation, optimizing the trade-off between the number of
channels and precision to enable real-time intra-surgical application. We
evaluate the performance of our method on a clinical dataset that was acquired
during surgery on five patients. By demonstrating the possibility to
efficiently detect low-grade glioma, these results can lead to better cancer
resection demarcations, potentially improving treatment effectiveness and
patient outcome. | Tommaso Giannantonio, Anna Alperovich, Piercosimo Semeraro, Manfredo Atzori, Xiaohan Zhang, Christoph Hauger, Alexander Freytag, Siri Luthman, Roeland Vandebriel, Murali Jayapala, Lien Solie, Steven de Vleeschouwer | 2023-02-06T15:52:03Z | http://arxiv.org/abs/2302.02884v1 | # Intra-operative Brain Tumor Detection with
###### Abstract
Surgery for gliomas (intrinsic brain tumors), especially when low-grade, is challenging due to the infiltrative nature of the lesion. Currently, no real-time, intra-operative, label-free and wide-field tool is available to assist and guide the surgeon to find the relevant demarcations for these tumors. While marker-based methods exist for the high-grade glioma case, there is no convenient solution available for the low-grade case; thus, marker-free optical techniques represent an attractive option. Although RGB imaging is a standard tool in surgical microscopes, it does not contain sufficient information for tissue differentiation. We leverage the richer information from hyperspectral imaging (HSI), acquired with a snapscan camera in the \(468-787\,\mathrm{nm}\) range, coupled to a surgical microscope, to build a deep-learning-based diagnostic tool for cancer resection with potential for intra-operative guidance. However, the main limitation of the HSI snapshot camera is the image acquisition time, limiting its widespread deployment in the operation theater. Here, we investigate the effect of HSI channel reduction and pre-selection to scope the design space for the development of cheaper and faster sensors. Neural networks are used to identify the most important spectral channels for tumor tissue differentiation, optimizing the trade-off between the number of channels and precision to enable real-time intra-surgical application. We evaluate the performance of our method on a clinical dataset that was acquired during surgery on five patients. By demonstrating the possibility to efficiently detect low-grade glioma, these results can lead to better cancer resection demarcations, potentially improving treatment effectiveness and patient outcome.
**Keywords:** Hyperspectral imaging, intra-operative diagnostics, optical biopsy, tumor demarcation, oncology, assisted surgery, surgical microscopes.
Further author information: (Send correspondence to T.G.)
T.G.: E-mail: name dot surname at Zeiss dot com
## 1 Introduction
Brain tumors are among the most frequent tumor types worldwide with a high mortality. The most common type of brain tumors are gliomas, which are classified into high-grade gliomas (HGG, grade III and IV) and low-grade gliomas (LGG, grade I and II)[1]. Among others, neurosurgical resection of gliomas still represents the primary treatment method but remains challenging due to their indefinite tumor margin under white light, as a result of their heterogeneity and infiltrative growth into the surrounding brain tissue[2, 3]. Consequently, tumor tissues can be resected incompletely, leading to a probable recurrence, which is a major cause of mortality[4]. On the contrary, a larger safety margin might result in over-resection, i.e. removal of healthy brain tissue, resulting in permanent brain function damage[5].
Recently, various techniques became available to neurosorgeons for improved intra-operative visualization of gliomas, such as neuronavigation, intra-operative MRI and ultrasound[6, 7]. Apart from these techniques, fluorescence-guided surgery using 5-aminolevulinic acid (5-ALA) induced fluorescence has become one of the most powerful methods for visualizing HGGs[8, 9, 10]. Nevertheless, the efficacy of this method was reported to be still unclear for resections of LGGs due to their limited fluorescence response[11]. To the best of our knowledge, there are no fluorescence tracers available marking LGGs till now. It is thus of our interest to develop a real-time, wide-field technique that can improve visualization of LGG intra-operatively without the use of a tracer.
In this work, we propose a novel approach for intra-operative tissue differentiation in neurosurgery. Our solution utilizes hyperspectral image information in order to classify areas with healthy tissue and LGG, which is more challenging than the case of HGG and no previous attempt of which has been made to the best of our knowledge.
We illustrate that information about the tissue type is available in the spectral curves, and can be successfully extracted with a neural network. Furthermore, we perform model explainability and channel selection to investigate what spectral regions and channels carry the most discriminative power. This is important both for trusting the model outputs as well as to design less expensive and faster data acquisition strategies based on a reduced spectral sampling. Finally, we evaluate the predictions' reliability with an ensemble method[12] by generating a map of reliable predictions to support the doctor during surgery.
The rest of this paper is structured as follows: in Sec. 2 we summarize previous related work; in Sec. 3 we describe the hardware, data acquisition process and resulting dataset. In Sec. 4 we describe our analysis methods, and in Sec. 5 we summarize our results before concluding in Sec. 6.
## 2 Related Work
### Hyperspectral imaging
Hyperspectral imaging (HSI)[13] is a technology allowing the acquisition of images in many (dozens to hundreds) narrow spectral bands. As such it combines the main advantages of traditional imaging (description of morphological features) and spectroscopy (sensitivity to chemical composition). Besides medicine, HSI has been successfully used in multiple fields such as remote sensing for land cover classification[14], food quality control[15], agriculture[16], garbage sorting[17], astronomy[18, 19], art conservation[20], military[21].
### HSI in medicine
Biological tissues have distinct spectral characteristics driven by their chemical composition[22]. In the range \(450-600\) nm blood is the main component, dominated by hemoglobin (Hb). The Hb spectral shape varies depending on the oxygen saturation level, presenting a single absorption peak at \(560\) nm when deoxygenated and a double peak at \(540\) and \(580\) nm when oxygenated[23]. As this and other parameters driving the spectral shape in absorption, scattering, and fluorescence differ between tissues and pathological states, HSI can be used to discriminate between them.
The use of HSI in medicine has been growing in the last decade[24, 25, 26], and it is establishing itself as a non-invasive, non-ionizing, label-free diagnostic tool. As such, it has seen significant applications in many medical domains, including the analysis of multiple types of cancer (_in vivo_ and _ex vivo_)[27], computational pathology[28], gastroenterology[29], dermatology[30], general surgery[31, 32, 33] (all references here are to recent review articles).
One of the most important aspects underlying all the above-mentioned studies is the quest for the optimal method to extract information from biomedical HSI data [26].
The simplest method is based on optical inverse modelling: if a physical model for the tissue spectra exists, observations can be used to directly infer its parameters [34]; e.g., the ratio of the spectral bands corresponding to oxygenated and de-oxygenated Hb can be used for cancer detection [35]. In most cases however, an accurate physical model is missing, and a data-driven machine learning approach is more suitable, either of the classical feature learning (FL) type, or based on deep learning (DL).
Common classical algorithms used for pixel-wise classification in the medical HSI domain include support vector machines (SVM), random forests (RF), and multinomial logistic regression (MLR) [36]. Dimensionality reduction and feature selection methods are often applied to HSI data to discard irrelevant information, reduce computational time, and as an instrument to design custom HSI cameras that only acquire important information.
Beyond pixel-wise analyses, it is possible to consider the whole HS cube and employ spectra-spatial techniques [37] to address the common issues of low inter-class and high intra-class (patient-to-patient) variability. Different strategies for the spectral-spatial integration exist at the pre-/post-processing, or integrated levels.
In recent years, the number of deep learning applications to the medical HSI domain has been increasing [38]. Several dimensionality approaches are possible: pixel-wise 1D convolutional neural networks (CNN); 2D CNNs applied in parallel to all spectral channel and then concatenated; or fully 3D CNNs. The most popular approach has been 2D [39, 40, 41, 42], but some studies achieved improved results with 3D methods [43, 44].
While most studies implemented (macro-)pixel-based classification or regression tasks, full-image segmentation tasks with 2D U-net architectures have also been addressed [45]. Finally, advanced DL methods have been applied to HSI data, such as generative adversarial networks (GANs) to generate HSI from RGB data [46] and recurrent neural networks (RNN) to include the temporal element for video-based real-time inference [47].
The main bottleneck that hampers a more widespread and successful use of DL in this domain is the scarcity and high cost of training data.
### HSI in brain cancer surgery
We focus here on the applications to brain surgery [48]. A first HSI application to this field is to infer the brain tissue metabolic and hemodynamic signals, such as oxyhemoglobin, deoxyhemoglobin, and cytochrome c-oxidase, to study the functionality of the brain, diagnosing diseases, and for surgical assistance [49, 50, 51, 52, 53]. Our main focus here is however the use of HSI for tumor tissue identification. For malignant cases such as gliomas, surgery is often the best treatment option, but the detection of the tumor edges is challenging with the naked eye [54]. Existing intra-operative navigation tools, such as magnetic resonance imaging, ultrasound, or fluorescent markers, have significant limitations, so that HSI-based margin delineation is an attractive option [27].
The state of the art for the _in vivo_ human brain cancer classification application is represented by the European project _HELICoiD_ (HypErspectraL Imaging Cancer Detection) [55, 56, 57, 58, 59, 60, 61, 39]. These authors developed an intra-operative demonstrator to acquire and process HSI in real time in order to support the operating surgeon during resection [59], acquiring data in the VIS and VNIR ranges with high spectral resolution. First tumor classification results from 33 HS cubes and 22 patients obtained specificity and sensitivity \(>96\%\) using classical and deep learning methods [61]. Later work incorporated further data (36 cubes from 22 patients) and introduced a semi-automatic labelling tool to improve annotation quality; this database is publicly available [62]. The results based on this dataset and a combination of classical and deep learning methods achieved sensitivity and specificity \(>98\%\)[56]. The method was later further improved and tested on a fully functional demonstrator [59] that could classify four tissue types. Later work revised the model implementation, to improve its parallelization [63] and achieve real-time processing on multiple GPUs [64, 65]. Subsequent studies employing a deep learning-based pipeline further improved the accuracy by \(\sim 16\%\)[57, 39] with respect to classical methods, while requiring a larger amount of training data. The results were subsequently further improved with more advanced deep learning architectures: Ref. [44] introduced a 3D-2D hybrid CNN, while Ref. [66] employed a multiple model fusion. Most recently Ref. [67] developed the fusion of VIS+NIR data, obtaining a 21% improvement in the classification.
Several efforts were undertaken towards dimensionality reduction and suppression of redundant information, such as Fixed Reference _t_-distributed Stochastic Neighbors (FR-t-SNE) [58], and methods to identify the most
important spectral bands for classification based on classical feature selection methods such as the genetic algorithm, particle swarm and ant colony optimizations [88] and empirical mode decomposition [69].
As a separate part of the _HELICoiD_ project, an _in vitro_ histology dataset was also produced [70] and employed for classification, achieving high-accuracy results with classical and later deep learning methods [40, 71, 72], including using superpixel aggregation [73]. The experience gathered by the _HELICoiD_ group also led to the application of similar methods for the classification of skin cancers [74, 75], Alzheimer's disease [76], gastroenterology [29], thyroid [42] and ENT cancers [43, 77, 41].
An independent brain cancer dataset containing 13 images of 12 patients was collected and analyzed by Ref. [78] using a simpler HSI snapshot mosaic camera with 25 spectral channels only. These authors performed pixel-wise classification using classical and deep learning models, achieving good overall accuracy (95%) when all patients are used in training, but limited generalization to unseen patients.
## 3 Hardware and Data Acquisition
The data acquisition was carried out intra-operatively in UZ Leuven using an IMEC snapscan VNIR (IMEC, Leuven, Belgium) hyperspectral camera, which was coupled to a ZEISS OPMI(r) PENTERO(r) 900 surgical microscope (Carl Zeiss Meditec AG, Oberkochen, Germany) at its documentation optical port through a video adapter with C-mount interface (\(f=105\) mm) (see Fig. 1). For each included patient, altogether 3 study HSI images were acquired, namely a first image of the brain surface after opening the dura sheet, a second image after partial removal of the tumor and a third image of the cavity after resection. To obtain absolute reflectance values, a Zenith Polymer(r) target with 95% reflectance was scanned by the HSI camera prior to the operation. Immediately prior to each HSI image acquisition, an RGB photo was taken using the PENTERO microscope, without varying the light condition nor the region of interest. For both RGB and HSI acquisitions, the internal illumination of a xenon light source (300 W) was used, and the operating room was darkened. For all image acquisitions, a working distance of 250 mm of the PENTERO microscope was used and the light intensity was set to 50%. All raw HSI images were acquired with the IR800 mode enabled and were subsequently normalized and processed using the HSI snapscan software [79, 80]. The IMEC snapscan HSI camera delivers over 150 spectral bands in the range of \(470-900\) nm. Nevertheless, due to the limitation caused by the IR cut filter in the illumination light path of the PENTERO microscope, we modified the calibration file of the HSI camera such that it covers a scan range of \(470-780\) nm, resulting in 104 spectral bands. A detailed description of the setup and data acquisition workflow can be found in a companion work by some of us [81].
The acquired RGB images were annotated by the operating surgeon directly after the respective surgeries into classes that will be further discussed in Sec. 4.1. The annotations were then registered to the HSI images
Figure 1: A photo of the setup in the operating room (OR) at UZ Leuven: IMEC HSI snapscan camera coupled to a ZEISS OPMI® PENTERO® microscope for intra-operative HSI and RGB data acquisition.
accordingly to compensate the slight mismatch between the FOV of the HSI camera and the internal RGB camera of the PENTERO microscope, based on registration parameters determined using a spatial reference target printed on an A4 paper.
## 4 Analysis Methods
We approach the problem of tissue differentiation in several steps. First, we analyze the data and confirm the assumption that for good quality data the healthy tissue has a different spectrum than a tumor. Then we investigate the discriminative capabilities of classical Machine Learning (ML) and Deep Learning (DL) techniques. We perform an explainability analysis to identify the most important spectral channels for the task of tissue differentiation and suggest an uncertainty estimation method to improve the robustness of the neural network predictions provided to the surgeon.
### Data exploration and preparation
We first explore and characterize the properties of the HSI data. Our dataset consists of five patients who have undergone glioma resection; for each patient three to four HSI images were acquired, for a total of 18 images. The spatial resolution of each image is \(1600\times 1600\) pixels, and it contains 104 spectral channels in the range \(468-787\) nm. Due to the implemented video adapter, only a circular central section of the focal plane is illuminated, hence the edges and corners carry no information. Additionally, due to light fall off of the scene illumination and the coupling optics the signal-to-noise ratio decreases towards the edges, with many missing pixels in that region. Finally, we observe areas of saturated pixels due to reflection.
The following classes were annotated by the operating surgeons: (0) Background: area to be ignored; (1) Healthy: healthy tissue; (2) Foreign object: surgical tools, markers etc.; (3) Blood: Area occluded by liquid blood; (4) Coagulation: Area occluded by coagulated blood; (5) HGG: High-grade glioma; (6) LGG: Low-grade glioma; (7) Histo HGG: Histologically confirmed HGG; (8) Histo LGG: Histologically confirmed LGG; (9) Blood vessel: Capillaries; (10) White matter: Typically deeper brain section exposed after surgery; (11) Deep Cortex: Deeper cortex area, also exposed after surgery; (12) Pia: Pia mater, the innermost of the meninges; (13) Histo Healthy: Histologically confirmed healthy tissue. In this study, we limit our classification analysis to the healthy and LGG classes (aggregating both histologically confirmed areas and those that are not).
We show in Fig. 2 three exemplary annotated images from one patient, taken before, during, and after the resection operation.
In order to assess the distinguishing power of our data, we first investigate the pixel-level distribution of the measured spectra, which we show for one image in Fig. 3. Here we see qualitat
Figure 2: Three annotated images from one patient. The grayscale image represents intensity in the 660 nm channel, and the color overlays describe annotations by the surgeon. Areas without colors were not assigned to any class and belong therefore to the background class. The three images were taken before, during, and at the end of the resection process respectively.
most classes have largely overlapping distributions, with the possible exception of class 2 ("Foreign object"). To quantify this, we define the spectral angle mapper (SAM) distance between two spectra \(s_{j},s_{j}\) as [82]
\[\text{SAM}(s_{i},s_{j})=\cos^{-1}\left(\frac{\sum_{l=1}^{N}s_{il}s_{jl}}{\left[ \sum_{l=1}^{N}s_{il}^{2}\right]^{1/2}\left[\sum_{l=1}^{N}s_{jl}^{2}\right]^{1/ 2}}\right)\,, \tag{1}\]
where \(N\) is the number of spectral channels. We then use this distance to calculate the intra- and inter-cluster distances of the spectra, where we assume each cluster to be the set of pixels belonging to a given tissue class. The quantitative analysis confirms the qualitative impression: in all cases the intra-cluster distances are larger than the inter-cluster centroid distances, i.e. the clusters are largely overlapping and not well separated. A classical analysis with a \(\chi^{2}\) test returns a similar result: the null hypothesis that all pixels are drawn from the same spectral distribution is not falsifiable, except for some images, at weak significance, for the "Foreign object" class (the smallest \(p\)-value is found to be \(p=0.1\) between the classes "Foreign object" and "HGG").
In order to improve signal-to-noise and increase the discriminating power of the data, we thus aggregate pixels by defining macro-pixels (tiles) based on the SLIC algorithm [83]. The algorithm creates tiles by aggregating a given number of neighboring pixels in such a way as to minimize a chosen distance between them, in this case set to be the SAM distance. Instead of considering the spectra of individual pixel, we can now compare the distributions of spectra at the tile level. Furthermore, the signal-to-noise can be boosted by selecting a subset of tiles with good uniformity, and excluding outliers in brightness (particularly dark or saturated regions), as shown in Fig. 4.
By using this procedure, we manage to obtain distributions that are statistically separable between the tissue classes, as can be seen in Fig. 5. The class separations can achieve high statistical significance (down to a \(p\)-value of 0) for aggressive tile filterings, at the cost however of retaining a much reduced number of tiles. This result is comparable with the findings by Ref. [39].
The results of the spectral analysis indicate that some differences between the tissue classes are visible in their spectra, albeit at low statistical significance, and are therefore encouraging: next we want to test whether, by using classical machine learning or deep learning techniques, an algorithm can be found that can classify the individual image regions to a sufficient degree of accuracy.
### Classical methods for tissue differentiation
Based on the results of Sec. 4.1, we want to first investigate whether classical machine learning (ML) methods can be used to distinguish cancerous from healthy tissue. Here we focus on the classification of tiles belonging
Figure 3: Normalized spectra (reflective flux) of all pixels belonging to annotated regions of one HSI image (study image 1). Only classes present in this image are shown: (1) Healthy; (2) Foreign object; (3) Blood; (5) HGG; (6) LGG. The central solid lines represent the mean spectra of all pixels belonging to each class, and the shaded areas describe the \(1\sigma\) region.
to our focus classes of "healthy" and "LGG", aggregating in both cases tiles that are histologically confirmed and those that are not.
We pre-process the data by creating SLIC tiles as described in Sec. 4.1 above, with a target size of 200 pixels per tile, considering tiles consisting entirely of "healthy" or "LGG" pixels, and applying the following quality selection criteria to the tiles: i) lowest 50% percentile in average SAM distance among the tile pixels; ii) lowest 50% percentile in average L2 distance among pixels spectra; iii) average intensity \(\bar{I}\) of the tile pixels should be in the percentile range \(10\%<\bar{I}<90\%\). These quality cuts restrict the dataset to particularly homogeneous tiles, and exclude in particular blood vessels, underexposed and oversaturated areas.
From the complete dataset of 5 patients and 18 images we thus obtain 8671 tiles. Given the limited number of patients at this stage, we define training and test sets by randomly splitting all tiles into a training and a test set (6620 training tiles and 2051 test tiles). Each tile is then isolated, spatially padded with zeros to a shape of \(40\times 40\times 104\), and saved to disk.
Figure 4: Creation of SLIC tiles. The left panel shows the obtained boundaries of generated SLIC tiles for one HSI image, where the tiles straddling annotation class boundaries were discarded. The central histograms show the tile filtration strategy: only tiles with high spectral uniformity are kept, following cuts on the SAM and L2 distances distributions, and tiles that are especially dark or bright are discarded, following cuts in the intensity distribution. The right panels show the resulting “good tiles” sets that can be obtained depending on the filtration level.
Figure 5: Normalized spectra (reflective flux) of the “good” tiles belonging to annotated regions of one HSI image. Only classes present in this image are shown. The central solid lines represent the mean spectra of all pixels belonging to each class, and the shaded areas describe the \(1\sigma\) region. We can see that the differences among classes become statistically significant.
We then choose three classical ML methods: a Random Forest (RF) classifier, a Support Vector Machine (SVM), and a Multi-Layer Perceptron (MLP), which we train on the training tiles and evaluate on the testing tiles. In each case, we consider as input features of the models the tile spectral information averaged over all pixels in each tile. Thus, spatial information within the tile is disregarded by the classical methods.
We report the results of this approach in Sec. 5.1 below.
### Deep Learning methods for tissue differentiation
Deep convolutional neural networks (CNNs) are often used for classification tasks in computer vision. CNNs are the perfect tool to extract the most relevant information from the input data and use it to solve the desired problem. The main drawback of this approach is the need for a large and diverse training dataset that represents well the process that the CNN should approximate.
In the medical domain, the dataset acquisition is often a challenging task that requires time and huge amount of human effort. Thus, the majority of the datasets are small, and the deep learning solution should account for the data deficiency. As discussed in Sec. 4.1, to create a suitable dataset for training, we use tiles instead of the full hyperspectral images. Thus, we end up with approximately \(\sim 5000\) training examples instead of only 10-14 images in the dataset.
We use a CNN, see Fig. 6 for the overview of the network architecture, that takes tiles as an input and predicts tissue class as an output. Similarly to the classical methods from Sec. 4.2, we consider only two classes: healthy and LGG. The encoder path consists of several convolutional layers, followed up by pooling operations; at the readout layer network features are compressed to a classification vector with class probability values in the \([0,1]\) range. The highest value indicates the predicted class. Due to the strong correlation between neighboring spectral channels, we aim to reduce the number of spectral channels needed for the prediction. Thus, we design
Figure 6: The proposed deep learning workflow. First, the input hyperspectral image is divided into SLIC tiles, where each tile shares similar spectra and intensities. Every tile is passed through the deep convolutional neural network that assigns healthy or tumor class to it. After all relevant tiles are processed by the neural network, the resulting prediction shows tumor and healthy areas in the image. For visualization, we show only one spectral channel out of 104, thus images appear in grayscale. The network architecture is shown in the bottom part. Here each encoder block consists of a convolutional layer with feature dimensions written at the bottom of the block and kernel size \((x,y)\) shown near the blue rectangle. The spatial dimensions \((x,y)\) of the input tensor are shown on the top and left vertices of each convolutional block. After each block, we also apply MaxPooling and BatchNorm layers.
the neural network architecture such that the first layer reduces the number of features from 104 to 3, 6 or 12 by forming meta-channels, which are learnable linear combinations of the original input channels. Then the architecture follows the standard encoder structure.
The evaluation of the trained model is performed with two strategies: testing and inference. The testing is the classical method to evaluate deep learning solutions, where some portion of the dataset (in our case, SLIC tiles) is left for testing and that data is not seen during training. During inference, the SLIC tiles are instead re-created from the full hyperspectral image, then the CNN is applied to each tile independently. Thus, the prediction for the whole hyperspectral image can be obtained. In this case all tiles are used to create the prediction, even those that are of poor quality. This evaluation method helps to understand how the neural network would perform in practice, when applied during surgery.
### Spectral channels selection
The main limitation in this exploratory feasibility study is the acquisition speed. Spectrally resolving detector arrays, as used in this study, can acquire HSI data at video rate. Here, we however chose to trade acquisition speed for spectral resolution by using a snapscan camera design. In the snapscan camera, spectral filters are deposited row-wise across the image sensor requiring spatial scanning of the scene for HSI data acquisition. However, by limiting the number of spectral channels and instead depositing the spectral filters in a mosaic pattern across the sensor, spatial scanning can be circumvented, and snapshot spectral imaging achieved. With design modifications, the HSI measurement technique can be straightforwardly translated to a real-time setting, achieving frame rates of 15-30 fps as required in the clinical setting. In order to apply our method in the OR, we thus focus our efforts towards selecting the most relevant spectral channels for tissue differentiation. By doing that, we can decrease the acquisition time and create a solution that can be later used in real-time applications.
We thus search for the 3, 6, and 12 most important spectral channels that the neural network needs for the prediction. In order to select those channels, we use the information from an ensemble of trained neural networks. We investigate for each network what channels contributed the most to its prediction, given that the network achieves good results (above 80% accuracy) on the test dataset. Then we accumulate the channel importance scores from the individual networks and select the most important spectral channels for the final CNN input, see Fig. 7 for an illustration of the channel importance. Note, that after the channel selection, the neural network is retrained with only important channels.
To find what channels networks favor the most we apply the state-of-the-art GradientShap explainability method [84]. However, there are other techniques in the machine learning community, such as IntegratedGradients or Occlusion-based Attribution [84]. The output of these methods is an attribution score to each input feature (the spectral channels) concerning the first target output. Positive attribution scores indicate a positive contribution of the input at that specific position to the final prediction, whereas negative scores indicate the opposite. The strength of the contribution is indicated by the attribution score's size.
### Reliability and out-of-distribution predictions using ensemble models
It is well known that a single neural network can be overconfident [12]. The predictions can have very high score and be incorrect. This situation often occurs when input data is no longer statistically similar to the training dataset. Since the neural network was never trained with such an out-of-distribution sample, it still predicts one of the classes that it was trained for.
Figure 7: Explainability analysis of the ensemble of the neural networks. For each spectral channel on the \(x\) axis the channel importance is calculated and averaged though the ensemble. The higher the bar, the more important is the channel for the tissue classification task. The red line indicates the standard deviation of the channel importance over the ensemble.
The state-of-the-art approach to estimate how confident is the network about its prediction is to use an ensemble of neural networks. The final prediction is computed based on the average class probability prediction of multiple neural networks that have similar architecture and are trained on the same dataset. The assumption is that for the out-of-distribution sample networks in the ensemble would not agree on the predicted class, thus the average prediction scores (class probabilities) would be low for this sample. A simple thresholding technique can rule out low score predictions and mark the corresponding input samples as "unknown". The predictions where all networks agree are considered reliable. We illustrate the effect of using the ensemble method to verify the prediction quality and identify the out-of-distribution areas in the input hyperspectral image.
## 5 Results
We performed the data analysis and model training and evaluation on a Linux workstation with 12 dual-core CPUs with a total 64 GB RAM, and an Nvidia GeForce RTX 3090 GPU, with 24 GB RAM.
### Results for classical methods
We first report the results of evaluating the classification models on the test set of unseen tiles. This is the default choice, but it still contains only "good" tiles that belong to the two classes we consider, i.e. it does not include any out-of-distribution tissue, nor any lower-quality tiles. All classical models underwent hyperparameter tuning and the results are referred to a 0.5 operating point.
Our first baseline classical ML model, a Random Forest classifier with 100 trees, achieves an overall 86% accuracy on the test set, with 94% precision and 88% recall.
The second model considered, an SVM with Radial Basis Function (RBF) kernel, achieves a 91% accuracy (95% precision, 92% recall).
Figure 8: Inference on a selection of three full images using the classical ML methods. The left panels show the ground-truth labels for the two focus classes (healthy in red and LGG in blue), while the following panels show the inference results for the Support Vector Machine, Random Forest and Multi-Layer Perceptron methods respectively. The prediction quality is significantly lower than on the good-tiles test set.
Finally the third model, an MLP classifier with one hidden layer, achieves an improved accuracy of 92%, with 98% precision and 91% recall.
While promising, these results are based on the selected good-quality tiles only. In order to assess a more realistic performance in the operating theatre, we apply the trained models in inference mode onto the full images. With this procedure, we achieve as expected a lower accuracy, which is typically between \(\sim 50\%\) and \(\sim 70\%\), as we show for three example images in Fig. 8. From the figure it is qualitatively clear that these classical classifiers often produce incorrect predictions over broad image regions.
Note also that this evaluation method does include all lower-quality tiles that were excluded from our dataset as prepared in Sec. 4.2, but it also includes the tiles from the training set, and thus it still represents an optimistic estimate of the actual model performance in the wild.
### Results for Deep Learning methods
We summarize in Tab. 1 the quantitative results of our deep learning classification models evaluated on the test dataset, where we can see that an accuracy \(>80\%\) is generally achieved. In order to investigate the stability of the models, for each network configuration we retrain the model three times with random initialization of the trainable parameters. The "Channel compression" column contains the information about the first convolutional layer that compresses the 104 input spectral channels to 3, 6 or 12 channels. The other columns illustrate the performance of the neural networks, where we observe an increase in all metrics with the increase of the number of features. Thus, the more features are used, the more information can be extracted from the input spectrum. The difference between the runs with the same architecture is quite small, thus we conclude that the neural network does not depend much on the input initialization. However, the small difference in the performance supports the idea to use ensemble instead of a single network to compensate for the effect of the random initialization.
Figure 9 illustrates the inference results on several full images: here the models with compression to 3, 6, and 12 channels are applied to the full hyperspectral images. For some samples, the model with 3 channels underperforms compared to the models with 6 and 12 channels. This happens due to the discarding of information in the first layer of the neural network, where 104 channels were aggregated into 3. Models with 6 and 12 channels show comparable performance on the test images, both qualitatively and quantitatively. These results are as expected qualitatively and quantitatively superior to those obtained with classical ML methods shown in Fig. 8, not only due to the superior algorithm used, but also as for the deep learning case the full spectral information of every pixel of each tile is retained. When using classical ML techniques, only the average spectrum over all pixels was considered as the descriptive feature of each tile.
Then we study the channel importance, where we identify the most important channels from the trained models. We illustrate the explainability analysis on the model with the compression to 12 channels. First, we show the channel importance in Fig. 10, where the most important channels are concentrated in the red part of the spectrum, starting from approximately 650 nm. Then we select the 12 channels with the highest scores and retrain the model using these channels only. Results on the test dataset show that the model achieves
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Channel compression & Accuracy & Precision & Recall & TP & TN & FP & FN \\ \hline \hline
104-3 & 76\% & 92\% & 74\% & 1137 & 428 & 95 & 391 \\ \hline
104-3 & 81\% & 93\% & 81\% & 1238 & 426 & 97 & 290 \\ \hline
104-3 & 78\% & 93\% & 77\% & 1178 & 430 & 93 & 350 \\ \hline \hline
104-6 & 87\% & 94\% & 87\% & 1334 & 444 & 79 & 194 \\ \hline
104-6 & 82\% & 95\% & 80\% & 1221 & 460 & 63 & 307 \\ \hline
104-6 & 85\% & 95\% & 85\% & 1298 & 450 & 73 & 230 \\ \hline \hline
104-12 & 89\% & 94\% & 90\% & 1378 & 443 & 80 & 150 \\ \hline
104-12 & 86\% & 96\% & 85\% & 1297 & 446 & 57 & 231 \\ \hline
104-12 & 89\% & 94\% & 90\% & 1373 & 445 & 78 & 155 \\ \hline \end{tabular}
\end{table}
Table 1: Deep learning classification models: quantitative evaluation on the test dataset.
accuracy of 81%, precision of 94 %, and recall 80 %. Compared to the results from Tab. 1, we conclude that the model performance is marginally lower compared to the network that was trained on the full spectrum with later channel aggregation. However, the results still show that a channel reduction is possible, but more research is required in order to identify an optimal channel selection.
The final evaluation step is the uncertainty estimation via ensemble networks. As discussed in Sec. 4.5, instead of training one model, we train several models and aggregate their predictions. All neural networks are trained to predict one of the two classes: tumor or healthy. By using the ensemble, we can threshold the average score from all trained models and add the 'unknown' class. All predictions that are below the threshold will be
Figure 10: Importance of the spectral channels. The blue bars correspond to the mean importance score, while the red lines illustrate the standard deviation across 3 models. The models have exactly the same architecture, but trained with random initialization, thus differ in the initial values of all trainable parameters.
Figure 9: Qualitative evaluation of the models that compress input spectral channels to 3, 6, and 12 meta-channels. The new channels are encoded with the deep convolutional neural network. The results illustrate the application of the trained models to input hyperspectral images taken from different patients. Healthy tissue is depicted in red, while tumor tissue is shown in blue. The first column displays annotations from the surgeon. The other columns are the results. Note, that accuracy is calculated for every image separately. The accuracy of the prediction is different from the performance of those models on the test set due to the varying acquisition quality of some areas in the images. The test dataset consists only of the good quality samples (tiles).
marked as 'unknown'. We trained 10 models with random initialization and illustrate the ensemble results with two thresholds 0.7 and 0.8, see Fig. 11. We added some areas from other classes, which the networks never saw during training. We expect that all those areas should be marked as 'unknown'. According to the qualitative results from Fig. 11, most of the unknown areas are already found when thresholding the predictions with 0.7. When a higher threshold is applied, more areas fall into the 'unknown' category. The desired balance between the amount of predicted area and the reliability of this prediction can be achieved by adjusting the threshold.
## 6 Conclusions
In this study, we illustrated the development of a deep-learning algorithm to visualize LGG based on snapscan HSI technology. RGB and HSI images were acquired on 5 patients with LGG intra-operatively and were annotated by the operating surgeon for subsequent training.
The results using classical methods including Random Forest, SVM with RBF and MLP classifiers show an accuracy \(>90\%\) using only "good" tiles (i.e. tiles with low spectral variability and excluding under- and over-exposed regions) and all 104 channels. When using all tiles, a much lower accuracy was found using these
Figure 11: Results of the applying ensemble to the full hyperspectral images. The first column represents the ground truth, where the healthy areas are marked in red, tumor areas are marked in blue and unknown regions are orange. The second column illustrates the results after thresholding the prediction scores at 0.7 and the third column shows the results with a 0.8 threshold.
classical methods, which is typically \(\sim 70\%\) at best. The accuracy using all tiles was increased to \(>80\%\) using deep learning methods. Furthermore, the performed analysis of channel importance demonstrated that the channels in red and NIR, i.e. from ca. 650 nm to 780 nm, carry most classification weight. A channel reduction experiment showed that the overall performance does not decrease dramatically with channels being reduced to either 12 or 6 channels. A much worse performance was only found when the number of channels was reduced to 3.
The presented work is still ongoing in cooperation with UZ Leuven and IMEC and has some limitations: (i) Only 5 patients with altogether 18 study images were acquired so far and this dataset might still be too small to address inter-patient variability. Indeed, due to the limited number of patients, we have built our training and testing datasets randomly from all patients. (ii) Only tiles inside the annotated areas were evaluated using the deep-learning method. Therefore, in the near future, we will carry out additional research as we continue our data acquisition in UZ Leuven. First, once more data will be available, we look forward to extend the train/test split to the case of datasets that are separated patient-wise, in order to validate the trained network on unseen data and to verify whether it starts to generalize on larger datasets. Moreover, we will validate the trained networks on the full images, including all unannotated areas, as a next step towards a more practical tool that can aid surgeons in the OR resecting LGGs. Despite these facts, the findings of this preliminary work demonstrate the feasibility of applying HSI technology to differentiate LGG from healthy tissues, which paves the way towards a real product that could be used in the OR in the near future.
|
2302.13751 | Non-vanishing modulo $p$ of Hecke $L$-values over imaginary quadratic
fields | Let $p$ and $q$ be two distinct odd primes. Let $K$ be an imaginary quadratic
field over which $p$ and $q$ are both split. Let $\Psi$ be a Hecke character
over $K$ of infinity type $(k,j)$ with $0\le-j< k$. Under certain technical
hypotheses, we show that for a Zariski dense set of finite-order characters
$\kappa$ over $K$ which factor through the $\mathbb{Z}_q^2$-extension of $K$,
the $p$-adic valuation of the algebraic part of the $L$-value
$L(\overline{\kappa\Psi},k+j)$ is a constant independent of $\kappa$. In
addition, when $j=0$ and certain technical hypothesis holds, this constant is
zero. | Debanjana Kundu, Antonio Lei | 2023-02-27T13:26:26Z | http://arxiv.org/abs/2302.13751v1 | # Non-vanishing modulo \(p\) of Hecke \(L\)-values over imaginary quadratic fields
###### Abstract.
Let \(p\) and \(q\) be two distinct odd primes. Let \(K\) be an imaginary quadratic field over which \(p\) and \(q\) are both split. Let \(\Psi\) be a Hecke character over \(K\) of infinity type \((k,j)\) with \(0\leq-j<k\). Under certain technical hypotheses, we show that for a Zariski dense set of finite-order characters \(\kappa\) over \(K\) which factor through the \(\mathbb{Z}_{q}^{2}\)-extension of \(K\), the \(p\)-adic valuation of the algebraic part of the \(L\)-value \(L(\kappa\overline{\Psi},k+j)\) is a constant independent of \(\kappa\). In addition, when \(j=0\) and certain technical hypothesis holds, this constant is zero.
Key words and phrases:Hecke characters, imaginary quadratic fields, \(p\)-divisibility of Hecke \(L\)-values 2020 Mathematics Subject Classification: Primary 11S40, 11G15; Secondary 11F67, 11R20, 11R23
## 1. Introduction
Let \(p\) and \(q\) be two distinct odd primes. It is a classical problem to study the divisibility of the algebraic part of (Hecke) \(L\)-values by a given prime \(p\) as one varies the (Hecke) characters of \(q\)-power conductor. For Dirichlet \(L\)-values, such questions were studied by L. Washington in [23, 24]. He showed that for almost all Dirichlet characters of \(q\)-power conductor, the algebraic parts of their \(L\)-values are coprime to \(p\). As an application, he proved that the \(p\)-part of the class number stabilizes in cyclotomic \(\mathbb{Z}_{q}\)-extensions of abelian number fields. Washington's results have been extended to the case of (finite) product of cyclotomic \(\mathbb{Z}_{q}\)-extensions of abelian number fields (for distinct primes \(q_{i}\) with \(q_{i}\neq p\)) by E. Friedman in [13].
In [24], W. Sinnott introduced the idea of relating non-vanishing of such \(L\)-values modulo \(p\) to Zariski density (modulo \(p\)) of special points of the algebraic variety underlying the \(L\)-values. Using this machinery, J. Lamplugh generalized Washington's theorem to _split prime_\(\mathbb{Z}_{q}\)-extensions of imaginary quadratic fields in [1]. Let \(K\) be an imaginary quadratic field such that \(q\mathcal{O}_{K}=\mathfrak{qq}^{*}\) with \(\mathfrak{q}\neq\mathfrak{q}^{*}\), then the _split prime_\(\mathbb{Z}_{q}\)-extension of \(K\) is one where only one of \(\mathfrak{q}\) or \(\mathfrak{q}^{*}\) is ramified.
In [12, 13], H. Hida studied analogous questions for anticyclotomic characters. He proved that when \(p\) is split in \(K\) and the tame conductor of characters is a product of split primes (which excludes the self-dual characters), the algebraic parts of the \(L\)-values of "almost all" anticyclotomic characters of \(q\)-power conductor over a CM field are non-zero mod \(p\). Here, "almost all" means "Zariski dense" after identifying the characters with a product of the multiplicative group (see Remark 5.2). This has been generalized by M.-L. Hsieh to include self-dual characters assuming that \(p\) is split in \(K\) in [11] and that the inert part of the conductor is square-free. The hypothesis on the inert part of the conductor was removed in [11, Remark 6.4]. In the case where the CM field is an imaginary quadratic field, T. Finis has proved similar results for self-dual characters allowing \(p\) to be either inert or ramified in \(K\), and has determined precisely the \(p\)-adic valuations of the algebraic parts of anticyclotomic Hecke characters of \(q\)-power conductor; see [13]. More
## 1. Introduction
Let \(K\) be a field over a field \(\mathbb{F}\). A _field_\(\mathbb{F}\) is a field over \(\mathbb{F}\), and \(\mathbb{F}\) is a field over \(\mathbb{F}\). The field \(\mathbb{F}\) is called the _field
**Remark 1.2**.: _While Theorems A and B are deduced using Lamplugh's techniques developed in [15], our results are strictly stronger than the one-variable analogue [15, Theorem 6.9]. Indeed, after identifying the characters of \(\operatorname{Gal}(F_{\infty}/F)\) with a subset of \(\mathbb{G}^{2}_{m/\overline{\mathbb{Q}}_{q}}\), the Zariski closure of the set of characters given by \(\operatorname{loc.\ cit.}\) is one copy of \(\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}\). In particular, it is not Zariski dense in \(\mathbb{G}^{2}_{m/\overline{\mathbb{Q}}_{q}}\)._
_Furthermore, we consider Hecke characters of much more general infinity type than the ones considered in [15]. In addition, the class number of \(K\) is assumed to be \(1\) in [15], whereas Theorem A assumes that \(q\) does not divide \([\mathscr{R}(\mathfrak{f}):K]\) instead._
**Remark 1.3**.: _Using an argument similar to the one presented in [15, Section 7], we expect that Theorem B combined with the Iwasawa main conjecture (proved by K. Rubin) should show that the \(p\)-part of the class groups over a \(\mathbb{Z}^{2}_{q}\)-tower is "generically zero". However, it does not seem to be enough to give a generalization of [15, Theorem 7.10] in our setting, unless we replace "almost all" by "all but finitely many"._
We conclude by discussing some follow-up questions.
* In [14], we study the growth of the \(p\)-part of the class groups in the anticyclotomic \(\mathbb{Z}_{q}\)-extension making use of the aforementioned result of Hida.
* Similar to how we build on Lamplugh's results to obtain our results, it may be possible to prove a similar result for Hecke characters of \(q\)-power conductor over general CM fields, relying on results of Hida and Hsieh on anticyclotomic characters.
* It may also be interesting to generalize our results to the setting of Hida families, utilizing ideas of Burungale developed in [1].
## Acknowledgements
DK thanks Jack Lamplugh for providing a copy of his thesis. AL thanks Ashay Burungale for helpful discussions and for his comments on an earlier draft. DK is supported by the PIMS postdoctoral fellowship. AL is supported by the NSERC Discovery Grants Program RGPIN-2020-04259 and RGPAS-2020-00096. This work was initiated during the thematic semester "Number Theory - Cohomology in Arithmetic" at Centre de Recherches Mathematiques (CRM) in Fall 2020. The authors thank the CRM for the hospitality and generous supports. Finally, we thank the referee for their comments on earlier versions of the article and their valuable suggestions, which led to the removal of several technical hypotheses from our main results.
## 2. Basic Notions
Let \(K\) be a fixed imaginary quadratic field of discriminant \(d_{K}\) and \(H\) denote its Hilbert class field. Throughout, we assume that \(q\) is coprime to the class number of \(K\) and we fix a Hecke character \(\Psi\) given as in the statement of Theorem A. The character \(\overline{\Psi}\circ N^{-j}\) (where \(N\) is the norm map on \(K\)) is of infinity-type \((0,k-j)\). There exists a character \(\chi_{0}\) of \(\operatorname{Gal}(\mathscr{R}(\mathfrak{f})/K)\) and an elliptic curve \(E\) defined over \(\mathscr{R}(\mathfrak{f})\) with complex multiplication by \(\mathcal{O}_{K}\), i.e., \(\mathcal{O}_{K}\simeq\operatorname{End}(E)\), such that
\[\overline{\Psi}N^{-j}=\overline{\varphi^{k-j}}\chi_{0},\]
where \(\varphi\) is a Hecke character of infinity type \((1,0)\) satisfying
\[\psi=\varphi\circ N_{\mathscr{R}(\mathfrak{f})/K}\]
with \(\psi\) being the Hecke character over \(\mathscr{R}(\mathfrak{f})\) attached to \(E\). Furthermore, \(\mathscr{R}(\mathfrak{f})(E_{\mathrm{tor}})\) is an abelian extension of \(K\). (See [10, Chapter II, proofs of Theorems 4.12 and 4.14] where the existence of \(E\) and \(\chi_{0}\) is discussed.) Let \(q\geq 5\) be a prime number that splits in \(K\), i.e.,
\[q\mathcal{O}_{K}=\mathfrak{qq}^{*}\text{ with }\mathfrak{q}\neq\mathfrak{q}^{*}.\]
For any integral ideal \(\mathfrak{a}\) in \(\mathcal{O}_{K}\), we write \(E_{\mathfrak{a}}\) to denote
\[\ker\left(\mathfrak{a}:E\to E\right).\]
We write \(\mu_{K}\) to denote the set of roots of unity in \(K\) and \(w_{K}\) to denote the size of this set.
We fix a different prime \(p\) such that \(p\mathcal{O}_{K}=\mathfrak{pp}^{*}\) in \(K\) with \(\mathfrak{p}\neq\mathfrak{p}^{*}\) and \(\gcd(p,6\mathfrak{f}q)=1\). Note in particular that \(E\) has good reduction at all primes above \(pq\).
## 3. Distributions and measures on \(\mathbb{Z}_{q}^{2}\)
The goal of this section is to generalize the notion of Gamma transform from [14] and elliptic function measures studied in [13, Section 3.2] to the two-variable setting.
Let \(E\) be the elliptic curve given in SS2 and \(k/\mathbb{Q}_{p}\) be a finite unramified extension containing \(\mathbb{Q}_{p}(E_{\mathfrak{f}q})\). Set \(J=k(\mu_{q^{\infty}})\). This is the unramified \(\mathbb{Z}_{q}\)-extension of \(k\) (since \(\mu_{q}\subset k\) by assumption). Let \(\mathfrak{O}\) denote the ring of integers of \(J\). Fix a uniformizer \(\pi\) of \(k\) and let \(\mathrm{ord}_{\pi}\) denote the normalized valuation map
\[\mathrm{ord}_{\pi}:J\to\mathbb{Z}\cup\{\infty\}.\]
**Definition 3.1**.: _Let \(\alpha\) be a \(J\)-valued distribution on \(\mathbb{Z}_{q}^{2}\), i.e., \(\alpha\) is a finitely additive function on the set of compact open subsets of \(\mathbb{Z}_{q}^{2}\) with values in \(J\)._
1. _Given any_ \(c=(c_{1},c_{2})\in(\mathbb{Z}_{q}^{\times})^{2}\)_, define_ \(\alpha\circ c\) _to be the distribution given by_ \(\alpha\circ c(X)=\alpha(cX)\) _for all open compact subsets_ \(X\) _of_ \(\mathbb{Z}_{q}^{2}\)_._
2. _The_ Fourier transform _of_ \(\alpha\) _is defined to be_ \[\hat{\alpha}:\mu_{q^{\infty}}^{2} \to J\] \[(\zeta_{1},\zeta_{2}) \mapsto\int_{(x,y)\in\mathbb{Z}_{q}^{2}}\zeta_{1}^{x}\zeta_{2}^{ y}d\alpha(x,y).\]
3. _Given a finite character_ \(\chi\) _on_ \((\mathbb{Z}_{q}^{\times})^{2}\) _with values in_ \(J\)_, we define_ Leopoldt's \(\Gamma\)-transform _as_ \[\Gamma_{\alpha}(\chi)=\int_{\mathbb{Z}_{q}^{2}}\chi d\alpha,\] _where we extend_ \(\chi\) _to_ \(\mathbb{Z}_{q}^{2}\) _by sending all elements not inside_ \((\mathbb{Z}_{q}^{\times})^{2}\) _to zero._
4. _We call_ \(\alpha\) _a measure on_ \(\mathbb{Z}_{q}^{2}\) _if the image of_ \(\alpha\) _has bounded values with respect to_ \(\mathrm{ord}_{\pi}\)_._
**Lemma 3.2**.: _Suppose that \(\chi\) is a finite-order character on \((\mathbb{Z}_{q}^{\times})^{2}\) factoring through \((\mathbb{Z}/q^{m})^{\times}\times(\mathbb{Z}/q^{n})^{\times}\), then_
\[\Gamma_{\alpha}(\chi)=\tau(\chi)\sum_{\underline{x}\in\mathbb{Z}/q^{m}\times \mathbb{Z}/q^{n}}\chi^{-1}(\underline{x})\hat{\alpha}(\underline{\zeta}^{ \underline{x}}),\]
_where \(\underline{\zeta}=(\zeta_{m},\zeta_{n})\) with \(\zeta_{m}\) and \(\zeta_{n}\) being primitive \(p^{m}\)-th and \(p^{n}\)-th roots of unity respectively, and \(\tau(\chi)\) is the Gauss sum of \(\chi\) defined by_
\[\tau(\chi)=\frac{1}{q^{m+n}}\sum_{(x_{1},x_{2})\in\mathbb{Z}/q^{m}\times \mathbb{Z}/q^{n}}\chi(x_{1},x_{2})\zeta_{m}^{-x_{1}}\zeta_{n}^{-x_{2}}.\]
Proof.: See [11, proof of Proposition 2.2, equation (2.6)] (or [12, proof of Lemma 2.2.3]).
**Lemma 3.3**.: _A distribution \(\alpha\) on \(\mathbb{Z}_{q}^{2}\) is uniquely determined by its Fourier transform \(\hat{\alpha}\)._
Proof.: The characteristic function on the open subset \(U_{a,b}:=(a+q^{m}\mathbb{Z}_{q})\times(b+q^{n}\mathbb{Z}_{q})\) of \(\mathbb{Z}_{q}^{2}\) satisfies
\[\mathbf{1}|_{U_{a,b}}=\frac{1}{q^{m+n}}\sum_{(\zeta_{1},\zeta_{2})\in\mu_{q^{m }}\times\mu_{q^{m}}}\zeta_{1}^{-a}\zeta_{2}^{-b}\chi_{(\zeta_{1},\zeta_{2})},\]
where \(\chi_{(\zeta_{1},\zeta_{2})}:\mathbb{Z}_{q}^{2}\mapsto J\) is the character sending \((x,y)\) to \(\zeta_{1}^{x}\zeta_{2}^{y}\). In particular, we see that \(\alpha\left(U_{a,b}\right)\) is a linear combination of \(\hat{\alpha}(\zeta_{1},\zeta_{2})\). Since the subsets \(U_{a,b}\) form a basis of open compact sets of \(\mathbb{Z}_{q}^{2}\), \(\alpha\) is uniquely determined by \(\hat{\alpha}\).
For the rest of the article, we fix an isomorphism of groups \(\delta:(\mu_{q^{\infty}})^{2}\stackrel{{\sim}}{{\longrightarrow }}E_{q^{\infty}}\).
**Definition 3.4**.: _A \(J\)-valued distribution \(\alpha\) on \(\mathbb{Z}_{q}^{2}\) is an elliptic function measure for our fixed elliptic curve \(E\) (with respect to \(\delta\)) if there exists a rational function \(R\in J(E)\) such that for almost all \(\underline{\zeta}\in(\mu_{q^{\infty}})^{2}\), we have_
\[\hat{\alpha}(\underline{\zeta})=R(\delta(\underline{\zeta})).\]
**Lemma 3.5**.: _Let \(f\in\mathfrak{O}[x,y]\) such that the image of \(f\) in \(J(E)\) is non-zero. Then, there exists a unique integer \(n\geq 0\) such that_
\[\mathrm{ord}_{\pi}(f(Q))\geq n\ \forall Q\in E_{q^{\infty}}\setminus\{0\}\]
_with equality holding for almost all \(Q\in E_{q^{\infty}}\)._
Proof.: The proof of [12, Lemma 3.2] goes through in verbatim on replacing \(E_{q^{\infty}}\) by \(E_{q^{\infty}}\).
This lemma allows us to define a valuation on \(J(E)\).
**Definition 3.6**.: _Given an \(R\in J(E)\). If \(R\neq 0\), we define \(\mathrm{ord}_{\pi}(R)\) to be the integer \(n\) such that \(\mathrm{ord}_{\pi}(R(Q))=n\) for almost all \(Q\in E_{q^{\infty}}\). If \(R=0\), we set \(\mathrm{ord}_{\pi}(R)=\infty\)._
By Lemma 3.5, if \(\alpha\) is an elliptic function measure, then it is in fact a measure (not just a distribution) since the values of \(\alpha\) are linear combinations of \(\hat{\alpha}=R\circ\delta\) as we have seen in the proof of Lemma 3.3 and \(\frac{1}{q^{m+n}}\in\mathfrak{O}^{\times}\) (as \(p\neq q\)).
Note that for any given rational function \(R\in J(E)\), we can define a \(J\)-valued measure attached to \(R\) as given by the following lemma:
**Lemma 3.7**.: _Let \(R\in J(E)\) be a rational function. There exists a unique measure \(\alpha\) on \(\mathbb{Z}_{q}^{2}\) such that the Fourier transform \(\hat{\alpha}\) coincides with \(R\circ\delta\). In other words, \(\alpha\) is an elliptic function measure associated to \(R\) in the sense of Definition 3.4._
Proof.: By the proof of Lemma 3.3, we may define a measure \(\alpha\) satisfying
\[\alpha\left((a+q^{m}\mathbb{Z}_{q})\times(b+q^{n}\mathbb{Z}_{q})\right)=\frac {1}{q^{m+n}}\sum_{(\zeta_{1},\zeta_{2})\in\mu_{q^{m}}\times\mu_{q^{n}}}\zeta_ {1}^{-a}\zeta_{2}^{-b}R\circ\delta(\zeta_{1},\zeta_{2}).\]
It follows from direct calculations that \(\alpha\) is additive and that \(\hat{\alpha}=R\circ\delta\)
We now show how Gamma transforms behave under Galois actions. This will be utilized in subsequent sections. Let us define the following homomorphisms of groups
\[\chi_{\mu} :\operatorname{Gal}(J/k)\hookrightarrow\operatorname{Aut}(\mu_{q^{ \infty}})^{2}\simeq(\mathbb{Z}_{q}^{\times})^{2},\] \[\chi_{E} :\operatorname{Gal}(J/k)\hookrightarrow\operatorname{Aut}(E_{q^{ \infty}})\times\operatorname{Aut}(E_{\overline{q}^{\infty}})\simeq(\mathbb{Z} _{q}^{\times})^{2}.\]
Note that \(\chi_{\mu}=\chi_{\operatorname{cyc}}\times\chi_{\operatorname{cyc}}\), where \(\chi_{\operatorname{cyc}}\) is the cyclotomic character.
**Definition 3.8**.: _An **elliptic function measure**\(\alpha\) for \(E\) is said to be defined over \(k\), if \(\hat{\alpha}=R\circ\delta\) for a rational function \(R\in k(E)\)._
**Lemma 3.9**.: _Suppose that \(\alpha\) is an elliptic function measure defined over \(k\). Then, for almost all finite-order characters \(\kappa\) of \((\mathbb{Z}_{q}^{\times})^{2}\) and for all \(\sigma\in\operatorname{Gal}(J/k)\), we have_
\[\Gamma_{\alpha}(\kappa)^{\sigma}=\frac{\kappa^{\sigma}(\chi_{E}(\sigma))}{ \kappa^{\sigma}(\chi_{\mu}(\sigma))}\Gamma_{\alpha}(\kappa^{\sigma}).\]
Proof.: It follows from Lemma 3.2 that
\[\Gamma_{\alpha}(\kappa)^{\sigma}=\tau(\kappa)^{\sigma}\sum_{\underline{x} \in\mathbb{Z}/q^{m}\times\mathbb{Z}/q^{n}}\kappa^{-1}(\underline{x})^{\sigma }\hat{\alpha}(\underline{\zeta}^{\underline{x}})^{\sigma}.\]
We have
\[\tau(\kappa)^{\sigma} =\frac{1}{q^{m+n}}\sum_{(x_{1},x_{2})\in\mathbb{Z}/q^{m}\times \mathbb{Z}/q^{n}}\kappa(x_{1},x_{2})^{\sigma}(\zeta_{m}^{-x_{1}}\zeta_{n}^{-x_ {2}})^{\sigma}\] \[=\frac{1}{q^{m+n}}\sum_{(x_{1},x_{2})\in\mathbb{Z}/q^{m}\times \mathbb{Z}/q^{n}}\kappa(x_{1},x_{2})^{\sigma}\zeta_{m}^{-\chi_{\operatorname{ cyc}}(\sigma)x_{1}}\zeta_{n}^{-\chi_{\operatorname{cyc}}(\sigma)x_{2}}\] \[=\frac{\kappa^{\sigma}(\chi_{\operatorname{cyc}}(\sigma),\chi_{ \operatorname{cyc}}(\sigma))^{-1}}{q^{m+n}}\sum_{(x_{1},x_{2})\in\mathbb{Z}/q ^{m}\times\mathbb{Z}/q^{n}}\kappa(x_{1},x_{2})^{\sigma}\zeta_{m}^{-x_{1}} \zeta_{n}^{-x_{2}}\] \[=\kappa^{\sigma}(\chi_{\mu}(\sigma))^{-1}\tau(\kappa^{\sigma}).\]
Since \(\alpha\) is an elliptic function measure, we have
\[\hat{\alpha}(\underline{\zeta}^{\underline{x}})^{\sigma}=R(\delta(\underline {\zeta}^{\underline{x}})^{\sigma})=R(\delta(\underline{\zeta}^{\chi_{E}( \sigma)\underline{x}}))=\hat{\alpha}(\underline{\zeta}^{\chi_{E}(\sigma) \underline{x}})\]
for some \(R\in k(E)\). Therefore, combining these equations gives
\[\Gamma_{\alpha}(\kappa)^{\sigma} =\kappa^{\sigma}(\chi_{\mu}(\sigma))^{-1}\tau(\kappa^{\sigma}) \sum_{\underline{x}\in\mathbb{Z}/q^{m}\times\mathbb{Z}/q^{n}}\kappa^{-1}( \underline{x})^{\sigma}\hat{\alpha}(\underline{\zeta}^{\chi_{E}(\sigma) \underline{x}})\] \[=\frac{\kappa^{\sigma}(\chi_{E}(\sigma))}{\kappa^{\sigma}(\chi_{ \mu}(\sigma))}\tau(\kappa^{\sigma})\sum_{\underline{x}\in\mathbb{Z}/q^{m} \times\mathbb{Z}/q^{n}}\kappa^{-1}(\underline{x})^{\sigma}\hat{\alpha}( \underline{\zeta}^{\underline{x}})\] \[=\frac{\kappa^{\sigma}(\chi_{E}(\sigma))}{\kappa^{\sigma}(\chi_{ \mu}(\sigma))}\Gamma_{\alpha}(\kappa^{\sigma})\]
where the last equality follows from Lemma 3.2 applied to \(\kappa^{\sigma}\)
## 4. Algebraic Independence Results
The main result of this section is Theorem 4.6, where we prove an algebraic independence result of functions on \(E_{q^{\infty}}\) taking values in a _finite field_ whose characteristic is distinct from \(q\). The first step is Theorem 4.2, which is an analogue of [14, Proposition 3.1] (and also [1, Theorem 4.5]). This step involves proving an algebraic independence result of functions on \(E_{q^{\infty}}\) taking values in a _general field_, \(\mathcal{F}\). Let \(E\) be an elliptic curve as fixed in Section 2. We suppose that \(E\) can be considered as a curve over the field \(\mathcal{F}\) (for example, the residue field of \(H\) modulo a prime ideal). Suppose that \(q>3\) is a rational prime that splits in \(\mathcal{O}_{K}\) and \(\operatorname{char}(\mathcal{F})\neq q\). This result essentially says that endomorphisms in \(\operatorname{End}(E_{q^{\infty}})\times\operatorname{End}(E_{q^{*\infty}})\) which are independent over \(\operatorname{End}_{\mathcal{F}}(E)\), are algebraically independent.
The following lemma is required for the proof of Theorem 4.2.
**Lemma 4.1**.: _Let \(\Phi_{1},\ldots,\Phi_{s}\) be non-trivial morphisms from \(E^{n}\) to \(E\) of the form_
\[\Phi_{j}:(P_{i})_{i=1}^{n}\mapsto\sum_{i=1}^{n}\alpha_{ij}(P_{i})\]
_where \(\alpha_{ij}\in\operatorname{End}_{\mathcal{F}}(E)\) for all \(1\leq i\leq n\) and \(1\leq j\leq s\). Suppose that the only relation of the kind \(\alpha\Phi_{k}=\beta\Phi_{\ell}\) for \(\alpha,\beta\in\operatorname{End}_{\mathcal{F}}(E)\) and \(k\neq\ell\), is when \(\alpha=\beta=0\). If \(r_{1},\ldots,r_{s}\in\mathcal{F}(E)\) with \(\sum_{j=1}^{s}r_{j}\circ\Phi_{j}=0\), then each \(r_{j}\) is a constant function._
Proof.: See [1, Proposition 4.4].
**Theorem 4.2**.: _Let \(\mathcal{F}\) be any field as above, and \(E\) an elliptic curve defined over \(\mathcal{F}\) such that \(\operatorname{End}_{\mathcal{F}}(E)\simeq\mathcal{O}_{K}\). Suppose that \(\underline{\eta}_{1},\ldots,\underline{\eta}_{s}\in\operatorname{End}(E_{q^{ \infty}})\times\operatorname{End}(E_{q^{*\infty}})\) are such that \(\alpha\underline{\eta}_{k}=\beta\underline{\eta}_{\ell}\) for \(k\neq\ell\) and some \(\alpha,\beta\in\operatorname{End}_{\mathcal{F}}(E)\) only when \(\alpha=\beta=0\). Consider the function_
\[R=\sum_{j=1}^{s}r_{j}\circ\underline{\eta}_{j}:E_{q^{\infty}}\times E_{q^{* \infty}}\to\overline{\mathcal{F}}\]
_where \(r_{j}\in\mathcal{F}(E)\) and \(\overline{\mathcal{F}}\) denotes an algebraic closure of \(\mathcal{F}\). If \(R(Q)=0\) for all \(Q\in E_{q^{\infty}}\times E_{q^{*\infty}}\), then all \(r_{j}\)'s are constant functions._
Proof.: We recall that \(\operatorname{End}(E_{q^{\infty}})\simeq\mathcal{O}_{q}\), \(\operatorname{End}(E_{q^{*\infty}})\simeq\mathcal{O}_{q^{*}}\) and \(\operatorname{End}_{\mathcal{F}}(E)\simeq\mathcal{O}_{K}\). Consider a free \(\mathcal{O}_{K}\) submodule \(A\) of \(\mathcal{O}_{q}\times\mathcal{O}_{q^{*}}\) of rank \(n\) that contains \(\underline{\eta}_{j}\) for \(1\leq j\leq s\). Let \(\{\underline{\varepsilon}_{i}\}_{i=1}^{n}\) be an \(\mathcal{O}_{K}\)-basis of \(A\). Then, there exist unique \(\alpha_{ij}\in\mathcal{O}_{K}\) such that
\[\underline{\eta}_{j}=\sum_{i=1}^{n}\alpha_{ij}\underline{\varepsilon}_{i}.\]
Define the map
\[\iota:E_{q^{\infty}}\times E_{q^{*\infty}}=E_{q^{\infty}}\to E^{n}\text{ given by }Q\mapsto(\underline{\varepsilon}_{i}Q)_{i=1}^{n}\,.\]
For each \(1\leq j\leq s\), denote the morphism
\[\Phi_{j}:E^{n}\to E;\qquad\left(P_{i}\right)_{i=1}^{n}\mapsto\sum_{i=1}^{n} \alpha_{ij}P_{i}.\]
We have assumed that
\[\sum_{j=1}^{s}r_{j}\circ\Phi_{j}(\mathcal{Q})=0\qquad\text{for all }\mathcal{Q}\in \iota(E_{q^{\infty}}\times E_{q^{*\infty}})\subseteq E^{n}.\]
Hence, the above equality must hold for all \(\mathcal{Q}\) in the Zariski closure of \(\iota(E_{\mathfrak{q}^{\infty}}\times E_{\mathfrak{q}^{*\infty}})\). It follows from basic facts about Zariski closed subgroups of \(E^{n}\) (see [13, Lemmas 1 and 3]) that either the Zariski closure of \(\iota(E_{q^{\infty}})\) is \(E^{n}\) or there exist \(\alpha_{i}\in\mathcal{O}_{K}\) (not all zero) such that
\[\sum_{i=1}^{n}\alpha_{i}\underline{\varepsilon}_{i}(Q)=0\text{ for all }Q\in E_{q^{\infty}}.\]
If the latter holds, it means that \(\sum_{i=1}^{n}\alpha_{i}\underline{\varepsilon}_{i}=0\). However, this contradicts the fact that \(\underline{\varepsilon}_{1},\ldots,\underline{\varepsilon}_{n}\) is a basis for \(A\). Thus, the Zariski closure of \(\iota(E_{q^{\infty}})\) is \(E^{n}\). Lemma 4.1 implies that each \(r_{i}\) is a constant function.
To prove the main result in this section, we need a strengthened version of Theorem 4.2. This is achieved by combining the following Diophantine approximation result (Lemma 4.3) with a special case of a lemma due to Hida (Lemma 4.4), which we record below.
**Lemma 4.3**.: _Given \(\underline{\beta}_{1},\ldots,\underline{\beta}_{d}\in\mathcal{O}_{\mathfrak{ q}}\times\mathcal{O}_{\mathfrak{q}^{*}}\) for any integer \(d\geq 1\), and a positive constant \(c\leq 1\), there exists an integer \(N\) such that for all \(n\geq N\), there exist algebraic integers \(b_{1},\ldots,b_{d}\in\mathcal{O}_{K}\) and a unit \(u\in\mathcal{O}_{\mathfrak{q}}^{\times}\times\mathcal{O}_{\mathfrak{q}^{*}}^{\times}\) satisfying_
\[v_{\mathfrak{p}}(u\underline{\beta}_{i}-b_{i}) \geq n\text{ for }\mathfrak{p}\in\{\mathfrak{q},\mathfrak{q}^{*}\}\text{ and }\] \[N_{K/\mathbb{Q}}(b_{i}) <c\cdot q^{2n}.\]
Proof.: See [16, Lemma 2.3.9].
**Lemma 4.4**.: _Let \(r\) be a positive integer. Let \(X=\bigcup_{i=1}^{k}X_{i}\) be a proper subset of \(\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}^{2}\) such that_
* \(X\) _is Zariski closed._
* _For each_ \(i\)_, there exists a closed subscheme_ \(Y_{i}\) _that is stable under_ \(t\mapsto t^{p^{rn}}\) _for all_ \(n\in\mathbb{Z}\)_, such that_ \(X_{i}=\underline{\zeta}Y_{i}\) _for certain_ \(\underline{\zeta}\in\mu_{q^{\infty}}^{2}\)_;_
_There exists \(P\), which is a \(p^{r}\)-power, and an infinite sequence of integers \(0<n_{1}<n_{2}<\cdots\) such that for all \(j\geq 1\),_
\[\Xi_{j}\cap X=\emptyset,\]
_where \(\Xi_{j}\) is defined by_
\[\left\{\left(\frac{P^{x}}{q^{n_{j}}},\frac{P^{y}}{q^{n_{j}}}\right)\mod \mathbb{Z}_{q}^{2}:x,y\in\mathbb{Z}\right\}\subset(\mathbb{Q}_{q}/\mathbb{Z}_{q })^{2}\]
_after identifying \(\mu_{q^{\infty}}^{2}\) with \((\mathbb{Q}_{q}/\mathbb{Z}_{q})^{2}\) under an appropriate choice of basis._
Proof.: See [14, Lemma 3.4].
**Remark 4.5**.: _On studying he proof of the above lemma, we see that \(P\equiv 1\mod q\). If we write \(P=1+q^{v}u\), where \(q\nmid u\), then \(\left|\Xi_{j}\right|=q^{2(n_{j}-v)}\)._
**Theorem 4.6**.: _Let \(\mathbb{F}\) be a finite field. Suppose that \(\underline{\eta}_{1},\ldots,\underline{\eta}_{s}\in\operatorname{End}(E_{ \mathfrak{q}^{\infty}})\times\operatorname{End}(E_{\mathfrak{q}^{*\infty}})\) are such that the only relation of the kind \(\alpha\underline{\eta}_{k}=\beta\underline{\eta}_{\ell}\) for \(k\neq\ell\) and \(\alpha,\beta\in\operatorname{End}_{\mathbb{F}}(E)\) is when \(\alpha=\beta=0\). Consider the function_
\[R=\sum_{i=1}^{s}r_{i}\circ\underline{\eta}_{i}:E_{\mathfrak{q}^{\infty}}\times E _{\mathfrak{q}^{*\infty}}\rightarrow\overline{\mathbb{F}}\]
_where \(r_{i}\in\mathbb{F}(E)\). We identify \(E_{q^{\infty}}\) with \(\mu_{q^{\infty}}^{2}\subset\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}^{2}\). Then either \(\{Q\in E_{q^{\infty}}:R(Q)\neq 0\}\) is Zariski dense in \(\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}^{2}\) or \(R\) is identically zero. In the latter case, all \(r_{i}\)'s are constant functions._
Proof.: Suppose that \(\{Q\in E_{q^{\infty}}:R(Q)\neq 0\}\) is not Zariski dense and that \(R\) is not identically zero. We take \(P\) to be a large enough \(p\)-power so that \(R\) is defined over \(\mathbb{F}_{P}\) (the finite field of cardinality \(P\)). Let \(X\) be the Zariski closure of \(\{Q\in E_{q^{\infty}}:R(Q)\neq 0\}\) in \(\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}^{2}\). Then, \(X\) is a proper subset of \(\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}^{2}\) and \(X^{P}\subseteq X\).
Let \(\log_{q}:\widehat{\mathbb{G}}_{m/\overline{\mathbb{Q}}_{q}}^{2}\to\widehat{ \mathbb{G}}_{a/\overline{\mathbb{Q}}_{q}}^{2}\) be the \(q\)-adic logarithm map. We decompose \(\log_{q}(X)\) into a finite union of closed subsets, each of which is stable under the multiplication by \(P\). This allows us to write \(X\) as a finite union of closed subschemes \(X_{i}\) of the form \(\underline{\zeta}Y_{i}\), where \(\underline{\zeta}\in\mu_{q^{\infty}}^{2}\) and \(Y_{i}\) is stable under \(t\mapsto t^{P}\). Therefore, Lemma 4.4 applies. In particular, there exists a sequence of integers \(0<n_{1}<n_{2}<\cdots\) and a collection of subsets \(\Xi_{j}\) of \(q^{n_{j}}\)-torsion points in \(E_{q^{\infty}}\) on which \(R\) vanishes, with \(\left|\Xi_{j}\right|=q^{2(n_{j}-v)}\) for some fixed integer \(v\).
Define
\[\delta:=\max_{1\leq i\leq s}\deg(r_{i}).\]
We apply Lemma 4.3 to \(\underline{\eta}_{1},\cdots,\underline{\eta}_{s}\) and \(c=\dfrac{1}{q^{2v}\cdot s\cdot\delta}\). There exists an integer \(N\) such that for all \(n_{j}\geq N\), there are algebraic integers \(b_{1},\ldots,b_{s}\in\mathcal{O}_{K}\) and \(u\in\mathcal{O}_{\mathfrak{q}}^{\times}\times\mathcal{O}_{\mathfrak{q}^{*}}^ {\times}\) (depending on \(n_{j}\)) satisfying
\[v_{\mathfrak{p}}(u\underline{\eta}_{i}-b_{i})\geq n_{j}\text{ for } \mathfrak{p}\in\{\mathfrak{q},\mathfrak{q}^{*}\}\text{ and }\] \[\left|N_{K/\mathbb{Q}}(b_{i})\right|<c\cdot q^{2n_{j}}.\]
In particular, the rational function
\[R_{n_{j}}:=\sum_{i=1}^{s}r_{i}\circ b_{i}\in\mathbb{F}(E)\]
agrees with \(R\circ u\) on \(E_{q^{n_{j}}}\). Thus, it vanishes on \(\Xi_{j}\). Moreover,
\[\deg(R_{n_{j}})\leq\sum_{i=1}^{s}\delta\cdot N_{K/\mathbb{Q}}(b_{i})<s\delta \cdot c\cdot q^{2n_{j}}=q^{2n_{j}-2v}=\left|\Xi_{j}\right|.\]
Therefore, \(R_{n_{j}}=0\) and thus \(R\) is zero on \(E_{q^{n_{j}}}\). But \(n_{j}\) can be arbitrarily large. This implies that \(R\) is identically zero, which is a contradiction. This concludes the first assertion of the theorem. The last assertion follows immediately from Theorem 4.2.
## 5. A theorem on two-variable Gamma transforms
The purpose of this section is to prove a two-variable version of [1, Theorem 5.1] (which in turn generalizes a result of Sinnott [26, Theorem 3.1]). Our proof utilizes crucially Theorem 4.6 from the previous section. Throughout, we use the same notation introduced in Sections 2 and 3.
**Theorem 5.1**.: _Let \(\alpha\) be an elliptic function measure for \(E\) defined over \(k\) on \(\mathbb{Z}_{q}^{2}\) that is supported on \((\mathbb{Z}_{q}^{\times})^{2}\), and satisfies \(\alpha\circ\omega=\alpha\) for all \(\omega\in\mu_{K}^{2}\). Let \(R\) denote the corresponding rational function (so that \(\hat{\alpha}=R\circ\delta\) as in Definition 3.4), and let \(n=\operatorname{ord}_{\pi}(R)\) (as in Definition 3.6). Then for a Zariski dense set of finite-order characters \(\kappa\) of \((1+q\mathbb{Z}_{q})^{2}\), we have_
\[\operatorname{ord}_{\pi}\left(\Gamma_{\alpha}(\kappa)\right)=n.\]
**Remark 5.2**.: _We view \(\mathrm{Hom}(\mathbb{Z}_{q}^{2},\mu_{q^{\infty}})\) as a subset of \(\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}^{2}\) by sending \(\kappa\) to \((\kappa(1,0),\kappa(0,1))\). A set of finite-order characters is called Zariski dense if its image in \(\mathbb{G}_{m/\overline{\mathbb{Q}}_{q}}^{2}\) is a dense subset under the Zariski topology._
The following lemma is a key technical ingredient of the proof of Theorem 5.1.
**Lemma 5.3**.: _Let \(\alpha\) be an elliptic function measure as in the statement of Theorem 5.1. Define_
\[\beta=\sum_{\eta}\left(\alpha\circ\eta\right)|_{(1+q\mathbb{Z}_{q})^{2}},\]
_where \(\eta\) runs over a set of representatives for \((\mu_{q-1}/\mu_{K})^{2}\) and \(\alpha\circ\eta\) is defined as in Definition 3.1(i). For each \(y=(y_{1},y_{2}){\in\mu_{q-1}^{2}}\), we write_
\[\beta_{y}=\beta|_{y_{1}(1+q^{M}\mathbb{Z}_{q})\times y_{2}(1+q^{M}\mathbb{Z}_{ q})},\]
_where \(M\geq 1\) is the integer such that \(\mu_{q^{\infty}}\cap k=\mu_{q^{M}}\). Let \(\kappa=(\kappa_{1},\kappa_{2})\) be a finite-order character of \((1+q\mathbb{Z}_{q})^{2}\). Suppose that there exist integers \(m,n\geq M\) satisfying_
\[\ker(\kappa_{1})=1+q^{m+M}\mathbb{Z}_{q},\quad\ker(\kappa_{2})=1+q^{n+M} \mathbb{Z}_{q}.\]
_Let \(\underline{\zeta}=(\zeta_{1},\zeta_{2})\in\mu_{q^{\infty}}\) such that_
\[\zeta_{1}^{q^{m}}=\kappa_{1}(1+q^{m}),\quad\zeta_{2}^{q^{n}}=\kappa_{2}(1+q^{n }).\]
_Then \(\Gamma_{\beta}(\kappa)\in\pi\mathfrak{O}\) if and only if \(\hat{\beta}_{y}(\underline{\zeta}^{y^{-1}})\in\pi\mathfrak{O}\) for all \(y\in\mu_{q-1}^{2}\)._
Proof.: Suppose that \(\Gamma_{\beta}(\kappa)\in\pi\mathfrak{O}\). Let \(\sigma\in\mathrm{Gal}(J/k)\) and \(\underline{\xi}\in\mu_{q^{\infty}}^{2}\). Recall from the proof of Lemma 3.9 that
\[\hat{\alpha}(\underline{\xi})^{\sigma}=\hat{\alpha}(\underline{\xi}^{\chi_{E} (\sigma)}).\]
Since Fourier transform is additive, we have equivalently
\[\hat{\beta}(\underline{\xi})^{\sigma}=\hat{\beta}(\underline{\xi}^{\chi_{E} (\sigma)}).\]
Furthermore, Lemma 3.9 asserts that
\[\Gamma_{\beta}(\kappa)^{\sigma}=\frac{\kappa^{\sigma}(\chi_{E}(\sigma))}{ \kappa^{\sigma}(\chi_{\mu}(\sigma))}\Gamma_{\beta}(\kappa^{\sigma}).\]
Thus, \(\mathrm{ord}_{\pi}\left(\Gamma_{\beta}(\kappa^{\sigma})\right)\) is independent of \(\sigma\in\mathrm{Gal}(J/k)\) because \(\Gamma_{\beta}(\kappa)\in\pi\mathfrak{O}\) by assumption and \(\kappa\) takes values in the group of roots of unity. In particular, \(\Gamma_{\beta}(\kappa^{\sigma})\in\pi\mathfrak{O}\) for all \(\sigma\in\mathrm{Gal}(J/k)\) under our hypothesis that \(\Gamma_{\beta}(\kappa)\in\pi\mathfrak{O}\).
Let \(N=\max(m,n)\). Write \(k_{N-1}\) to denote the \((N-1)\)-th layer of the \(\mathbb{Z}_{q}\)-extension \(J/k\), and set \(H=\mathrm{Gal}(k_{N-1}/k)\). Let \(y\in(1+q\mathbb{Z}_{q})^{2}\). We have
\[\sum_{\sigma\in H}\kappa^{\sigma}(y)^{-1}\Gamma_{\beta}(\kappa^{ \sigma}) =\sum_{\sigma\in H}\kappa^{\sigma}(y)^{-1}\int_{(1+q\mathbb{Z}_{q}) ^{2}}\kappa^{\sigma}(x)d\beta(x)\] \[=\sum_{\sigma\in H}\int_{(1+q\mathbb{Z}_{q})^{2}}\kappa^{\sigma}( x/y)d\beta(x)\] \[=\int_{(1+q\mathbb{Z}_{q})^{2}}\mathrm{Tr}_{k_{N-1}/k}\circ \kappa(x/y)d\beta(x)\] \[=q^{N-1}\int_{y_{1}(1+q^{m}\mathbb{Z}_{q})\times y_{2}(1+q^{n} \mathbb{Z}_{q})}\kappa(x/y)d\beta(x).\]
Note that \(q^{N-1}\) is a unit in \(\mathfrak{O}\) (since \(q\neq p\)). Therefore, \(\Gamma_{\beta}(\kappa)\in\pi\mathfrak{O}\) implies that
\[\int_{y_{1}(1+q^{m}\mathbb{Z}_{q})\times y_{2}(1+q^{n}\mathbb{Z}_{q})}\kappa(x/y )d\beta(x)\in\pi\mathfrak{O}.\]
Let \(x=(x_{1},x_{2})=y(1+q^{m}z_{1},1+q^{n}z_{2})=(y_{1}(1+q^{m}z_{1}),y_{2}(1+q^{n }z_{2}))\), where \(z_{1},z_{2}\in\mathbb{Z}_{q}\). Then
\[\kappa(x/y)=\kappa(1+q^{m}z_{1},1+q^{n}z_{2})=\kappa((1+q^{m})^{z_{1}},(1+q^{n })^{z_{2}})=\zeta_{1}^{x_{1}/y_{1}-1}\zeta_{2}^{x_{2}/y_{2}-1}.\]
Thus, we deduce that
\[\int_{y_{1}(1+q^{m}\mathbb{Z}_{q})\times y_{2}(1+q^{n}\mathbb{Z}_{q})}\zeta_{ 1}^{x_{1}/y_{1}}\zeta_{2}^{x_{2}/y_{2}}d\beta(x)\in\pi\mathfrak{O}.\]
If we replace \(y\) by \(yt=(y_{1}t_{1},y_{2}t_{2})\) and \((\zeta_{1},\zeta_{2})\) by \((\zeta_{1}^{t_{1}},\zeta_{2}^{t_{2}})\) for any \(t=(t_{1},t_{2})\in(1+q^{M}\mathbb{Z}_{q})^{2}\), the same containment holds. Hence, summing over \(t\in(1+q^{M}\mathbb{Z}_{q})^{2}/(1+q^{m}\mathbb{Z}_{q})\times(1+q^{n}\mathbb{Z }_{q})\), we deduce that
\[\hat{\beta}_{y}(\underline{\zeta}^{y^{-1}})=\int_{y(1+q^{2}\mathbb{Z}_{q})^{2 }}\zeta_{1}^{x_{1}/y_{1}}\zeta_{2}^{x_{2}/y_{2}}d\beta(x)\in\pi\mathfrak{O}.\]
The converse follows from Lemma 3.2 and the fact that the Gauss sum \(\tau(\kappa)\) is a \(\pi\)-adic unit (which is a consequence of the fact that its conductor is coprime to \(p\)).
Proof of Theorem 5.1.: Without loss of generality, we assume that \(n=\operatorname{ord}_{\pi}(R)=0\). Let \(\beta\) be as defined in the statement of Lemma 5.3, and \(w_{K}\) denote the number of elements in \(\mu_{K}\) (which is coprime to \(p>3\)). We have
\[\frac{1}{w_{K}^{2}}\Gamma_{\alpha}(\kappa)=\Gamma_{\beta}(\kappa).\]
Let us write
\[\alpha_{\eta y}=\alpha|_{\eta_{1}y_{1}(1+q^{M}\mathbb{Z}_{q})\times\eta_{2}y_ {2}(1+q^{M}\mathbb{Z}_{q})}\]
for \(\eta=(\eta_{1},\eta_{2})\in\mu_{q-1}^{2}\) and \(y=(y_{1},y_{2})\in(1+q\mathbb{Z}_{q})^{2}\). Note that \(\alpha_{\eta y}\) is an elliptic function measure since it is a restriction of \(\alpha\). Furthermore, we write \(R_{\eta y}\) for the rational function on \(E\) attached to \(\alpha_{\eta y}\) (meaning that \(\hat{\alpha}_{\eta y}=R_{\eta y}\circ\delta\) as functions on \(\mu_{q^{\infty}}^{2}\)). As can be seen in the proof of Lemma 3.7, \(R_{\eta y}\) takes values in \(\mathfrak{O}\). Let \(\tilde{R}_{\eta y}\) denote the function \(R_{\eta y}\) modulo \(\pi\).
Suppose that the set of characters \(\kappa\) with \(\operatorname{ord}_{\pi}(\Gamma_{\alpha}(\kappa))=0\) is not Zariski dense. Note that for all \(\kappa\), we have \(\operatorname{ord}_{\pi}(\Gamma_{\alpha}(\kappa))=\operatorname{ord}_{\pi}( \Gamma_{\beta}(\kappa))\) by Lemma 3.5 and the fact that \(p\nmid w_{K}\). Equivalently, the set of characters \(\kappa\) such that \(\Gamma_{\beta}(\kappa)\not\in\pi\mathfrak{O}\) is not Zariski dense. By Lemma 5.3, the set of elements \(Q\in E_{q^{\infty}}\) such that
\[\sum_{\eta\in(\mu_{q-1}/\mu_{K})^{2}}\tilde{R}_{\eta y}([\eta^{-1}]\circ Q)\neq 0\]
is not Zariski dense.
Applying Theorem 4.6, it follows that \(\tilde{R}_{\eta y}\) is a constant function. Let \(c_{\eta y}\) denote a constant of \(\mathfrak{O}\) lifting \(\tilde{R}_{\eta y}\) and let \(\delta_{0}\) denote the Dirac measure of \(\mathbb{Z}_{q}^{2}\) concentrated at \((0,0)\). By definition, the Fourier transform \(\hat{\delta}_{0}\) sends all \(\underline{\zeta}\in\mu_{q^{\infty}}\) to \(1\). Therefore, the Fourier transform of \(\alpha_{\eta y}-c_{\eta y}\delta_{0}\) takes values in \(\pi\mathfrak{O}\). In particular,
\[\operatorname{ord}_{\pi}(\alpha_{\eta y}-c_{\eta y}\delta_{0})>0.\]
However, if we restrict the measure \(\alpha_{\eta y}-c_{\eta y}\) to \((\mathbb{Z}_{q}^{\times})^{2}\), it agrees with \(\alpha_{\eta y}\). Thus,
\[\operatorname{ord}_{\pi}(\alpha_{\eta y})=\operatorname{ord}_{\pi}(\alpha_{ \eta y}-c_{\eta y}\delta_{0})>0.\]
This contradicts our hypothesis that \(\operatorname{ord}_{\pi}(R)=0\)
## 6. Proof of Theorem A
In this section we apply Theorem 5.1 to study \(\pi\)-adic valuations of special values of \(L\)-functions and prove Theorem A stated in the introduction.
### Notation on ray class fields and CM elliptic curves
We keep the notation introduced in Section 2. Recall that \(K\) is a fixed imaginary quadratic field, and \(H\) is its Hilbert class field.
**Definition 6.1**.: _Let \(\mathfrak{a}\) be an integral ideal of \(K\)._
* _We write_ \(\mathscr{R}(\mathfrak{a})\) _for the ray class field of_ \(K\) _with conductor_ \(\mathfrak{a}\)_._
* _Given another ideal_ \(\mathfrak{b}\) _of_ \(K\) _which is coprime to_ \(\mathfrak{a}\)_, we write_ \((\mathfrak{b},\mathscr{R}(\mathfrak{a}))\in\operatorname{Gal}(\mathscr{R}( \mathfrak{a})/K)\) _for the Artin symbol of_ \(\mathfrak{b}\)_._
* _Given a character_ \(\rho\) _on_ \(\operatorname{Gal}(\mathscr{R}(\mathfrak{a})/K)\)_, we shall write_ \(\rho(\mathfrak{b})\) _and_ \(\rho\left((\mathfrak{b},\mathscr{R}(\mathfrak{a}))\right)\) _interchangeably._
Recall from SS2 that \(E\) is an elliptic curve with complex multiplication by \(\mathcal{O}_{K}\) with good reduction at the primes above \(p\) and \(q\). Let \(\omega_{E}\) denote the Neron differential for \(E_{/\mathscr{R}(\mathfrak{f})}\) and \(\mathcal{L}=\Omega_{\infty}\mathcal{O}_{K}\) be its period lattice. Note that \(\Omega_{\infty}\) is uniquely determined up to a root of unity in \(K\).
Given an ideal \(\mathfrak{b}\) of \(K\) coprime to \(\mathfrak{f}\), there exists \(\Lambda(\mathfrak{b})\in\mathscr{R}(\mathfrak{f})^{\times}\) such that
\[\mathcal{L}_{\mathfrak{b}}=\Lambda(\mathfrak{b})\mathfrak{b}^{-1}\mathcal{L} \tag{6.1}\]
is the lattice associated with \(E^{(\mathfrak{b},\mathscr{R}(\mathfrak{f}))}\), as given by [10, (16) on p. 42] (see also [11, Definition, p. 198]). For simplicity, we shall write \(E^{(\mathfrak{b})}\) for the CM elliptic curve \(E^{(\mathfrak{b},\mathscr{R}(\mathfrak{f}))}\) and denote by
\[\lambda(\mathfrak{b}):E\to E^{(\mathfrak{b})}\]
the unique isogeny given by [10, (15) on p. 42].
Consider the complex analytic isomorphism of complex Lie groups
\[\xi_{\mathfrak{b}}:\mathbb{C}/\mathcal{L}_{\mathfrak{b}}\xrightarrow{\sim}E ^{(\mathfrak{b})}(\mathbb{C})\text{ given by }\xi_{\mathfrak{b}}(z)=\left(\wp(z, \mathcal{L}_{\mathfrak{b}}),\wp^{\prime}(z,\mathcal{L}_{\mathfrak{b}})\right), \tag{6.2}\]
where \(\wp\) is the Weierstrass \(\wp\)-function and \(\wp^{\prime}\) is the corresponding derivative. We have the Weierstrass equation
\[y^{2}=4x^{3}-g_{2}(\mathcal{L}_{\mathfrak{b}})x-g_{3}(\mathcal{L}_{\mathfrak{b }}) \tag{6.3}\]
describing \(E^{(\mathfrak{b})}\).
When \(\mathfrak{b}=\mathcal{O}_{K}\), we shall write \(\xi_{1}\) in place of \(\xi_{\mathcal{O}_{K}}\). We recall the following relation:
\[\xi_{\mathfrak{b}}\left(\Lambda(\mathfrak{b})z\right)=\lambda(\mathfrak{b})( \xi_{1}(z)) \tag{6.4}\]
as discussed in [10, commutative diagram (21) on p. 43] and [11, Proposition 4.10].
### Review on \(L\)-functions
**Definition 6.2**.: _Let \(\mathfrak{h}\) be any integral ideal of \(K\). Let \(\epsilon\) be any Hecke character of \(K\) with conductor dividing some power of \(\mathfrak{h}\). The imprimitive \(L\)-function of \(\epsilon\) modulo \(\mathfrak{h}\) is defined as follows_
\[L_{\mathfrak{h}}(\epsilon,s)=\sum_{\gcd(\mathfrak{a},\mathfrak{h})=1}\frac{ \epsilon(\mathfrak{a})}{(N\mathfrak{a})^{s}}.\]
Let \(\epsilon\) be a Hecke character over \(K\) of infinity type \((a,b)\). Denote by \(L(\epsilon,s)\) the _primitive Hecke \(L\)-function_ of \(\epsilon\). Recall that the imprimitive (or partial) \(L\)-function differs from the primitive
(or classical) \(L\)-function by a finite number of Euler factors. We can further define the _primitive algebraic Hecke \(L\)-function_,
\[L^{\rm(alg)}(\overline{\epsilon}):=\frac{L\left(\overline{\epsilon},a+b\right)}{ (2\pi)^{b}\Omega_{\infty}^{b-a}}.\]
If \(\Psi\) and \(\kappa\) are as in the statement of Theorem A, then
\[L^{\rm(alg)}\left(\overline{\Psi\kappa}\right)=L^{\rm(alg)}\left(\overline{ \varphi^{k-j}\kappa}\chi_{0}N^{j}\right)=\frac{L\left(\overline{\varphi^{k-j} \kappa}\chi_{0}N^{j},k+j\right)}{(2\pi)^{j}\Omega_{\infty}^{k-j}}=\frac{L\left( \overline{\varphi^{k-j}\kappa}\chi_{0},k\right)}{(2\pi)^{j}\Omega_{\infty}^{k- j}},\]
where \(\varphi\) and \(\chi_{0}\) are given as in SS2.
Henceforth, we assume that \(\kappa\) is of conductor \(\mathfrak{q}^{m+1}\mathfrak{q}^{*}{}^{n+1}\) and set \(F_{m,n}=\mathscr{R}(\mathfrak{h})\) with \(\mathfrak{h}=\mathfrak{g}\mathfrak{q}^{m+1}\mathfrak{q}^{*}{}^{n+1}\). Let \(\mathfrak{g}\) be an auxiliary principal ideal that is divisible \(\mathfrak{f}\) and is relatively prime to \(pq\). Then \(\upsilon=\kappa\overline{\chi}_{0}\) is a character of \({\rm Gal}\left(\mathscr{R}(\mathfrak{h})/K\right)\). The imprimitive \(L\)-function of \(\overline{\upsilon\varphi^{k-j}}\) modulo \(\mathfrak{h}\) can be written as
\[L_{\mathfrak{h}}\left(\overline{\upsilon\varphi^{k-j}},s\right)=\sum_{\tau \in{\rm Gal}(\mathscr{R}(\mathfrak{h})/K)}\overline{\upsilon}(\tau)\sum_{( \mathfrak{b},\mathscr{R}(\mathfrak{h}))=\tau}\frac{\overline{\varphi^{k-j}}( \mathfrak{b})}{(N\mathfrak{b})^{s}},\]
where the second sum runs over integral ideals \(\mathfrak{b}\) of \(\mathcal{O}_{K}\) such that \({\rm gcd}(\mathfrak{b},\mathfrak{h})=1\). We define the following partial imprimitive L-functions:
**Definition 6.3**.: _Let \(\mathfrak{h}\) and \(\varphi\) be as above. For \(\tau\in{\rm Gal}(\mathscr{R}(\mathfrak{h})/K)\), we define_
\[L_{\mathfrak{h}}\left(\overline{\varphi^{k-j}},s,(\mathfrak{b},\mathscr{R}( \mathfrak{h}))\right)=\sum_{\begin{subarray}{c}\mathfrak{b}\preceq\mathcal{O} _{K}\\ (\mathfrak{b},\mathscr{R}(\mathfrak{h}))=\tau\\ {\rm gcd}(\mathfrak{b},\mathfrak{h})=1\end{subarray}}\frac{\overline{ \varphi^{k-j}}(\mathfrak{b})}{(N\mathfrak{b})^{s}}.\]
In particular, we have
\[L_{\mathfrak{h}}\left(\overline{\upsilon\varphi^{k-j}},s\right)=\sum_{\tau\in {\rm Gal}(\mathscr{R}(\mathfrak{h})/K)}\overline{\upsilon}(\tau)L_{\mathfrak{ h}}(\overline{\varphi^{k-j}},s,\tau).\]
**Remark 6.4**.: _The (primitive and imprimitive) \(L\)-functions we have discussed so far only converge on some right half-plane. However, they admit analytic continuations to the entire complex plane. In order to prove Theorem A, we shall relate \(L^{\rm(alg)}\left(\overline{\upsilon\varphi^{k-j}}\right)\) to Gamma transforms of certain elliptic function measure that we construct in the following subsection._
Let \(F=\mathscr{R}(\mathfrak{g}q)\) and write \(\Delta={\rm Gal}(F/K)\). Since \(\mathfrak{f}\mid\mathfrak{g}\), we have
\[F=\mathscr{R}(\mathfrak{g}q)=K\left(j(E),h(E_{\mathfrak{g}q})\right)=H\left(x (E_{\mathfrak{g}q})\right).\]
Here \(h\) denotes a Weber function and we may choose this to be the \(x\)-coordinates on a Weierstrass model for the elliptic curve. Set \(F_{\infty}=\bigcup_{n\geq 1}\mathscr{R}(\mathfrak{g}q^{n})\); this is a \(\mathbb{Z}_{q}^{2}\)-extension of \(F\). Recall that \(K_{\infty}\) is the \(\mathbb{Z}_{q}^{2}\)-extension of \(K\), we fix an isomorphism
\[{\rm Gal}(F_{\infty}/K)\simeq{\rm Gal}(F/K)\times{\rm Gal}(F_{\infty}/F) \simeq{\rm Gal}(F/K)\times{\rm Gal}(K_{\infty}/K)\simeq\Delta\times\mathbb{Z} _{q}^{2}.\]
By definition, \(\upsilon=\kappa\overline{\chi}_{0}\) is a character of \({\rm Gal}(\mathscr{R}(\mathfrak{f})\cdot K_{\infty}/K)\), which is a quotient of \({\rm Gal}(F_{\infty}/K)\). Our hypothesis that \(q\nmid[\mathscr{R}(\mathfrak{f}):K]\) allows us to regard \(\kappa\) (resp. \(\overline{\chi}_{0}\)) as a character of \(\mathbb{Z}_{q}^{2}\) (resp. \(\Delta\)). Then \(\kappa\) (resp. \(\upsilon\)) may be regarded as a character of \({\rm Gal}(F_{m,n}/F)\) (resp. \(\Delta\times{\rm Gal}(F_{m,n}/F)\)).
**Definition 6.5**.: _Given an ideal \(\mathfrak{c}\) of \(\mathcal{O}_{K}\) that is coprime to \(\mathfrak{h}\), let \(\tau_{\mathfrak{c}}\) denote \((\mathfrak{c},F_{m,n})=(\mathfrak{c},\mathscr{R}(\mathfrak{h}))\)._
We conclude this subsection with the following lemma on the Galois action on partial imprimitive \(L\)-values.
**Lemma 6.6**.: _Let \(\mathfrak{b}\) be an ideal of \(\mathcal{O}_{K}\) coprime to \(\mathfrak{h}\) such that \((\mathfrak{b},\mathscr{R}(\mathfrak{f}))=1\). For any \(\rho\in\mathfrak{h}^{-1}\mathcal{L}/\mathcal{L}\) and any integral ideal \(\mathfrak{c}\) of \(\mathcal{O}_{K}\) that is coprime to \(\mathfrak{h}\), we have_
\[\tau_{\mathfrak{h}}\cdot\frac{L_{\mathfrak{h}}\left(\overline{\varphi^{k-j}},k,\tau_{\mathfrak{c}}\right)}{(2\pi)^{j}\rho^{k-j}}=\frac{L_{\mathfrak{h}} \left(\overline{\varphi^{k-j}},k,\tau_{\mathfrak{bc}}\right)}{(2\pi)^{j}\rho^{ k-j}}.\]
Proof.: Equation (A.4) in the appendix tells us that
\[\frac{L_{\mathfrak{h}}\left(\overline{\varphi^{k-j}},k,\tau_{\mathfrak{c}} \right)}{(2\pi)^{j}\rho^{k-j}}=\frac{(N\mathfrak{h}\sqrt{d_{K}})^{-j}\Lambda( \mathfrak{c})^{k-j}}{(k-1)!\varphi(\mathfrak{c})^{k-j}}E_{j,k}\left(\rho, \mathcal{L}\right)^{\tau_{\mathfrak{c}}}. \tag{6.5}\]
Since \(\tau_{\mathfrak{b}}\) acts trivially on \(\mathscr{R}(\mathfrak{f})\), we deduce that
\[\tau_{\mathfrak{b}}\cdot\frac{L_{\mathfrak{h}}\left(\overline{\varphi^{k-j}}, k,\tau_{\mathfrak{c}}\right)}{(2\pi)^{j}\rho^{k-j}}=\frac{(N\mathfrak{h}\sqrt{d_{K}}) ^{-j}\Lambda(\mathfrak{c})^{k-j}}{(k-1)!\varphi(\mathfrak{c})^{k-j}}E_{j,k} \left(\rho,\mathcal{L}\right)^{\tau_{\mathfrak{bc}}}.\]
On replacing \(\mathfrak{c}\) by \(\mathfrak{bc}\) in (6.5), we have
\[\frac{L_{\mathfrak{h}}\left(\overline{\varphi^{k-j}},k,\tau_{\mathfrak{bc}} \right)}{(2\pi)^{j}\rho^{k-j}}=\frac{(N\mathfrak{h}\sqrt{d_{K}})^{-j}\Lambda( \mathfrak{bc})^{k-j}}{(k-1)!\varphi(\mathfrak{bc})^{k-j}}E_{j,k}\left(\rho, \mathcal{L}\right).\]
The hypothesis that \((\mathfrak{b},\mathscr{R}(\mathfrak{f}))=1\) implies that \(\varphi(\mathfrak{b})=\Lambda(\mathfrak{b})\) by [11, (18) on p. 42]. Thus, equation (17) in _op. cit._ tells us that
\[\frac{\Lambda(\mathfrak{bc})}{\varphi(\mathfrak{bc})}=\frac{\Lambda( \mathfrak{c})^{\tau_{\mathfrak{h}}}\Lambda(\mathfrak{b})}{\varphi(\mathfrak{ c})\varphi(\mathfrak{b})}=\frac{\Lambda(\mathfrak{c})}{\varphi(\mathfrak{c})}.\]
Hence the result follows.
### A rational function with a canonical divisor
The goal of this section is to generalize the construction of a rational function on a CM elliptic curve from [14, Section 6.3]. In order to consider Hecke characters of more general infinity-type, we introduce a new derivative operator, which did not make an appearance in _loc. cit._ This allows us to carry out step (3) outlined in the introduction. The notation introduced in the previous section will continue to be utilized.
Let \(\mathfrak{b}\) be an integral ideal of \(K\) that is coprime to \(\mathfrak{f}\). We fix an auxiliary ideal \(\mathfrak{a}\) of \(\mathcal{O}_{K}\) that is coprime to \(6\mathfrak{h}\) and that \((\mathfrak{a},\mathscr{R}(\mathfrak{f}))=1\). Define the rational function \(\zeta_{\mathfrak{b},\mathfrak{a}}\) on \(E^{(\mathfrak{b})}\) by
\[\zeta_{\mathfrak{b},\mathfrak{a}}(P)=\prod_{Q}\left(x(P)-x(Q)\right)^{-1}, \tag{6.6}\]
where \(Q\) runs over a set of representatives of \(E^{(\mathfrak{b})}_{\mathfrak{a}}\setminus\{0\}\ (\mathrm{mod}\ \pm 1)\). There exists a constant \(c(\mathfrak{b},\mathfrak{a})\in H^{\times}\) such that the function
\[\gamma_{\mathfrak{b},\mathfrak{a}}(P):=c(\mathfrak{b},\mathfrak{a})\zeta_{ \mathfrak{b},\mathfrak{a}}(P)\]
has the property that for all \(\beta\in\operatorname{End}\left(E^{(\mathfrak{b})}\right)\) with \(\gcd(\beta,\mathfrak{a})=1\),
\[\gamma_{\mathfrak{b},\mathfrak{a}}(\beta(P))=\prod_{R\in\ker(\beta)}\gamma_{ \mathfrak{b},\mathfrak{a}}(P\oplus R)\]
(see [17, Appendix]).
We can write
\[\mathcal{L}_{\mathfrak{b}}=\mathbb{Z}\omega_{1,\mathfrak{b}}+\mathbb{Z}\omega_{2, \mathfrak{b}}\]
such that \(\frac{\omega_{1,\mathfrak{b}}}{\omega_{2,\mathfrak{b}}}\) lies in the upper half plane. We define the constant (see [11, (4) on p. 48])
\[A(\mathcal{L}_{\mathfrak{b}}):=\frac{1}{2\pi i}\left(\omega_{1,\mathfrak{b}} \overline{\omega_{2,\mathfrak{b}}}-\overline{\omega_{1,\mathfrak{b}}}\omega_ {2,\mathfrak{b}}\right).\]
As in [11, p. 57, (4)], let
\[\partial=-\frac{\partial}{\partial z},\quad\mathcal{D}_{\mathfrak{b}}=-A( \mathcal{L}_{\mathfrak{b}})^{-1}\left(\overline{z}\frac{\partial}{\partial z} +\overline{\omega}_{1,\mathfrak{b}}\frac{\partial}{\partial\omega_{1, \mathfrak{b}}}+\overline{\omega}_{2,\mathfrak{b}}\frac{\partial}{\partial \omega_{2,\mathfrak{b}}}\right).\]
For integers \(0\leq-j<k\), define the derivative operator \(\mathscr{D}_{j,k}\) on \(\mathbb{C}(E^{(\mathfrak{b})})\) by
\[\mathscr{D}_{j,k}(f)=\mathcal{D}_{\mathfrak{b}}^{-j}\partial^{k+j}\log f(z),\]
where \(z\) is a complex variable after identifying \(E^{(\mathfrak{b})}\) with \(\mathbb{C}/\mathcal{L}_{\mathfrak{b}}\) via \(\xi_{\mathfrak{b}}\) as given by (6.4).
**Lemma 6.7**.: _Let \(Q\) be a primitive \(\mathfrak{b}\)-division point on \(E\) and \(\rho\in\mathfrak{h}^{-1}\mathcal{L}\setminus\mathcal{L}\). Then there exist \(\sigma\in\operatorname{Gal}(F_{m,n}/H)\) and \(\zeta\in\mu_{K}\) (which we identify with \(\operatorname{Aut}(E)\)) such that_
\[Q=\zeta\left(\xi_{1}(\rho)^{\sigma}\right).\]
_Fix \(\mathfrak{c}_{0}\) to be an ideal of \(\mathcal{O}_{K}\) coprime to \(\mathfrak{h}\) such that \(\tau_{\mathfrak{c}_{0}}=\sigma\). Suppose that \((\mathfrak{c}_{0},\mathscr{B}(\mathfrak{f}))=1\). Let \(\mathfrak{b}\) and \(\mathfrak{c}\) be ideals of \(\mathcal{O}_{K}\) coprime to \(\mathfrak{h}\) with \((\mathfrak{c},\mathscr{B}(\mathfrak{f}))=1\). Then_
\[\mathscr{D}_{j,k}(\gamma_{\mathfrak{b},\mathfrak{a}})\circ \lambda(\mathfrak{b})(Q^{\tau_{\mathfrak{c}}})=-(k-1)!\left((N\mathfrak{a})- \Lambda(\mathfrak{a})^{k-j}\tau_{\mathfrak{a}}\right)\left(\frac{\varphi( \mathfrak{b})}{\Lambda(\mathfrak{b})}\right)^{k-j}\times\\ \left(\frac{N\mathfrak{h}\sqrt{d_{K}}}{2\pi}\right)^{j}\frac{L_ {\mathfrak{h}}\left(\overline{\varphi^{k-j}},k,\tau_{\mathfrak{b}\mathfrak{c} \mathfrak{c}_{0}}\right)}{(\zeta\rho)^{k-j}}. \tag{6.7}\]
Proof.: By Class Field Theory, we have
\[(\mathcal{O}_{K}/\mathfrak{h})^{\times}/\mu_{K}\simeq\operatorname{Gal}(F_{m, n}/H)\hookrightarrow\operatorname{Aut}(E[\mathfrak{h}])\]
(see [17, Chapter 2, proof of Theorem 2.3]). It follows that \(\operatorname{Aut}(E[\mathfrak{h}])\) is generated by the image of \(\operatorname{Gal}(F_{m,n}/H)\) and \(\mu_{K}\). The first assertion now follows just as in the proof of [16, Lemma 3.1.4] or [16, Lemma 6.4].
In the appendix, we prove in (A.5) that with \(P=\xi_{\mathfrak{b}}(\Lambda(\mathfrak{b})\rho)\)
\[\mathscr{D}_{j,k}(\gamma_{\mathfrak{b},\mathfrak{a}})(P)=-(k-1)!\left((N \mathfrak{a})-\Lambda(\mathfrak{a})^{k-j}\tau_{\mathfrak{a}}\right)\left( \frac{\varphi(\mathfrak{b})}{\Lambda(\mathfrak{b})}\right)^{k-j}\left(\frac{N \mathfrak{h}\sqrt{d_{K}}}{2\pi}\right)^{j}\frac{L_{\mathfrak{h}}\left(\overline {\varphi^{k-j}},k,\tau_{\mathfrak{b}}\right)}{\rho^{k-j}}.\]
On replacing \(P\) (resp. \(\rho\)) by \(\zeta P\) (resp. \(\zeta\rho\)), we deduce that
\[\mathscr{D}_{j,k}(\gamma_{\mathfrak{b},\mathfrak{a}})(\zeta P)=-(k-1)!\left((N \mathfrak{a})-\Lambda(\mathfrak{a})^{k-j}\tau_{\mathfrak{a}}\right)\left(\frac {\varphi(\mathfrak{c})}{\Lambda(\mathfrak{c})}\right)^{k-j}\left(\frac{N \mathfrak{h}\sqrt{d_{K}}}{2\pi}\right)^{j}\frac{L_{\mathfrak{h}}\left(\overline {\varphi^{k-j}},k,\tau_{\mathfrak{b}}\right)}{(\zeta\rho)^{k-j}}.\]
If we let \(\tau_{\mathfrak{c}\mathfrak{c}_{0}}\) act on both sides of this equation, Lemma 6.6 tells us that
\[\mathscr{D}_{j,k}(\gamma_{\mathfrak{b},\mathfrak{a}})(\zeta P^{\tau_{\mathfrak{ c}\mathfrak{c}_{0}}})=-(k-1)!\left((N\mathfrak{a})-\Lambda(\mathfrak{a})^{k-j}\tau_{ \mathfrak{a}}\right)\left(\frac{\varphi(\mathfrak{c})}{\Lambda(\mathfrak{c}) }\right)^{k-j}\left(\frac{N\mathfrak{h}\sqrt{d_{K}}}{2\pi}\right)^{j}\frac{L_ {\mathfrak{h}}\left(\overline{\varphi^{k-j}},k,\tau_{\mathfrak{b}\mathfrak{c} \mathfrak{c}_{0}}\right)}{(\zeta\rho)^{k-j}}.\]
The result now follows from (6.4).
Define
\[\rho_{m,n}=\frac{\Omega_{\infty}}{g\nu^{m+1}{\nu^{*}}^{n+1}}\in\mathbb{C}^{\times},\]
where \(g\), \(\nu\), \(\nu^{*}\) are fixed generators of \(\mathfrak{g}\), \(\mathfrak{q}\) and \(\mathfrak{q}^{*}\) respectively (such generators exist since these ideals are assumed to be principal). Then \(\xi_{1}(\rho_{m,n})\) is a primitive \(\mathfrak{h}\)-division point of \(E\) (since \(\mathfrak{h}=\mathfrak{g}\mathfrak{q}^{m+1}\mathfrak{q}^{*}^{n+1}\)).
Let \(V\) (respectively \(Q_{m,n}\)) be a fixed primitive \(\mathfrak{g}\)-division (respectively \(\mathfrak{q}^{m+1}\mathfrak{q}^{*}^{n+1}\)-division) point on \(E\). By Lemma 6.7, there exist \(\zeta\in\mu_{K}\) and \(\sigma_{0}=\tau_{\mathfrak{c}_{0}}\), where \(\mathfrak{c}_{0}\) is an ideal of \(K\), coprime to \(\mathfrak{h}\), depending on \(V\) and \(Q_{m,n}\), such that
\[V\oplus Q_{m,n}=\zeta(\xi_{1}(\rho_{m,n})^{\sigma_{0}}).\]
Since \((\mathfrak{g},q)=1\), there is an isomorphism of groups
\[\operatorname{Aut}\left(E[\mathfrak{g}\mathfrak{q}^{m+1}(\mathfrak{q}^{*})^{n +1}]\right)\simeq\operatorname{Aut}\left(E[\mathfrak{g}]\right)\times \operatorname{Aut}\left(E[\mathfrak{q}^{m+1}(\mathfrak{q}^{*})^{n+1}]\right),\]
which in turn induces the decomposition
\[\operatorname{Gal}(F_{m,n}/H)\simeq\operatorname{Gal}(F_{m,n}/\mathscr{R}( \mathfrak{g}))\times\operatorname{Gal}(\mathscr{R}(\mathfrak{g})/H).\]
Therefore, we may choose \(V\) so that \((\mathfrak{c}_{0},\mathscr{R}(\mathfrak{f}))=1\) for all \(m\) and \(n\).
By Lemma 6.7, given any ideals \(\mathfrak{b}\) and \(\mathfrak{c}\) of \(\mathcal{O}_{K}\) coprime to \(\mathfrak{h}\) such that \((\mathfrak{c},\mathscr{R}(\mathfrak{f}))=1\), we have
\[= \tag{6.8}\]
We fix \(\{\mathfrak{b}_{i}:i\in I\}\) to be a set of representatives of integral ideals in \(K\) such that \(\operatorname{Gal}(\mathscr{R}(\mathfrak{f})/K)=\{(\mathfrak{b}_{i},\mathscr{ R}(\mathfrak{f})):i\in I\}\). Recall that \(\operatorname{Gal}(F_{m,n}/K)\simeq\Delta\times\operatorname{Gal}(F_{m,n}/F)\), where \(\Delta=\operatorname{Gal}(F/K)\). Then,
\[[\mathscr{R}(q\mathfrak{g}):\mathscr{R}(\mathfrak{f})]\sum_{\sigma\in \operatorname{Gal}(F_{m,n}/\mathscr{R}(\mathfrak{g}))}\kappa^{-1}(\sigma) \sum_{i\in I}\chi_{0}(\mathfrak{b}_{i})=\sum_{\eta\in\operatorname{Gal}(F_{m,n}/K)}\upsilon^{-1}(\eta).\]
Let us regard \(\kappa\) as a character of \(\operatorname{Gal}(F_{m,n}/\mathscr{R}(\mathfrak{g}))\simeq\operatorname{Gal}(F_{m,n }/F)\times\operatorname{Gal}(F/\mathscr{R}(\mathfrak{g}))\) sending the elements of \(\operatorname{Gal}(F/\mathscr{R}(\mathfrak{g}))\) to \(1\). We deduce from (6.8) that
\[\sum_{\sigma\in\operatorname{Gal}(F_{m,n}/\mathscr{R}(\mathfrak{ g}))}\kappa^{-1}(\sigma)\sum_{\delta\in\operatorname{Gal}(\mathscr{R}( \mathfrak{g})/\mathscr{R}(\mathfrak{f})),i\in I}\frac{\chi_{0}(\mathfrak{b}_ {i})\Lambda(\mathfrak{b}_{i})^{k-j}}{\varphi(\mathfrak{b}_{i})^{k-j}}\mathscr{D }_{j,k}(\gamma_{\mathfrak{b}_{i},\mathfrak{a}})\circ\lambda(\mathfrak{b}_{i})( V^{\delta}\oplus Q_{m,n}^{\sigma})\] \[= -(k-1)!\left(\frac{N\mathfrak{h}\sqrt{d_{K}}}{2\pi}\right)^{j} \sum_{\eta\in\operatorname{Gal}(F_{m,n}/K)}v^{-1}(\eta)\frac{N(\mathfrak{a})L _{\mathfrak{h}}\left(\overline{\varphi^{k-j}},k,\eta\tau_{\mathfrak{c}_{0}} \right)-\varphi(\mathfrak{a})^{k-j}L_{\mathfrak{h}}\left(\overline{\varphi^{k- j}},k,\eta\tau_{\mathfrak{a}\mathfrak{c}_{0}}\right)}{(\zeta\rho_{m,n})^{k-j}}\] \[= -(k-1)!\left(\frac{N\mathfrak{h}\sqrt{d_{K}}}{2\pi}\right)^{j} \frac{N(\mathfrak{a})\upsilon(\sigma_{0})L_{\mathfrak{h}}\left(\overline{ \varphi^{k-j}}\upsilon,k\right)-\varphi(\mathfrak{a})^{k-j}\upsilon(\sigma_{0 }\tau_{\mathfrak{a}})L_{\mathfrak{h}}\left(\overline{\varphi^{k-j}}\upsilon,k \right)}{(\zeta\rho_{m,n})^{k-j}}\] \[= -(k-1)!\left(\frac{N\mathfrak{h}\sqrt{d_{K}}}{2\pi}\right)^{j} \left(N(\mathfrak{a})-\varphi(\mathfrak{a})^{k-j}\upsilon(\tau_{\mathfrak{a}}) \right)\upsilon(\sigma_{0})\frac{L_{\mathfrak{h}}\left(\overline{\varphi^{k-j} }\upsilon,k\right)}{(\zeta\rho_{m,n})^{k-j}}. \tag{6.9}\]
The above calculations lead us to define the following rational function on \(E\).
**Definition 6.8**.: _Let \(\mathfrak{a}\) be an ideal of \(\mathcal{O}_{K}\) chosen as above. Let \(V\) be a primitive \(\mathfrak{g}\)-division point of \(E\), we define a rational function on \(E\) sending \(P\in E\) to_
\[\vartheta_{\mathfrak{a},V}^{\Psi}(P)=\sum_{\delta\in\operatorname{Gal}( \mathscr{R}(\mathfrak{g})/\mathscr{R}(\mathfrak{f})),i\in I}\frac{\chi_{0}( \mathfrak{b}_{i})\Lambda(\mathfrak{b}_{i})^{k-j}}{\varphi(\mathfrak{b}_{i})^{ k-j}}\mathscr{D}_{j,k}(\gamma_{\mathfrak{b}_{i},\mathfrak{a}})\circ\lambda( \mathfrak{b}_{i})(V^{\delta}\oplus P).\]
### Gamma transforms and \(L\)-values
We can associate with \(\vartheta_{\mathfrak{a},V}^{\Psi}\) an elliptic function measure, \(\alpha\) on \(\mathbb{Z}_{q}^{2}\) via Lemma 3.7. The measure \(\alpha\) depends on \(\Psi\) and our choice of \(\mathfrak{a}\) and \(V\). We further define \(\alpha^{*}=\alpha\big{|}_{(\mathbb{Z}_{q}^{\times})^{2}}\).
We now relate the Gamma transform of \(\alpha^{*}\) to special values of imprimitive algebraic \(L\)-functions. Recall that \(p\) is a rational prime satisfying \((p)=\mathfrak{p}\mathfrak{p}^{*}\) in \(\mathcal{O}_{K}\) with \(\mathfrak{p}\neq\mathfrak{p}^{*}\) and \(\gcd(p,6q)=1\). As before, set \(\pi\) to be the uniformizer of the local field \(k\), which is a finite unramified extension of \(\mathbb{Q}_{p}\) containing \(\mathbb{Q}_{p}(E_{q\mathfrak{g}})\).
**Lemma 6.9**.: _Let \(\kappa\) be as before. Then_
\[\operatorname{ord}_{\pi}\left(\Gamma_{\alpha^{*}}(\kappa)\right)=\operatorname {ord}_{\pi}\left((k-1)!\left(N(\mathfrak{a})-\varphi(\mathfrak{a})^{k-j} \upsilon(\tau_{\mathfrak{a}})\right)L_{\mathfrak{h}}^{(\operatorname{alg})}( \overline{\upsilon\varphi^{k-j}})\right),\]
_where \(\upsilon=\kappa\overline{\chi}_{0}\)._
Proof.: Let \(\underline{\zeta}=(\zeta_{1},\zeta_{2})\in\mu_{q^{\infty}}^{2}\) and set \(Q_{m,n}=\delta(\underline{\zeta})\) in our construction above. Using Lemma 3.2 in conjunction with (6.9), yields
\[\Gamma_{\alpha^{*}}(\kappa) =\tau(\kappa)\sum_{\sigma\in\operatorname{Gal}(F_{m,n}/\mathscr{R }(\mathfrak{g}))}\chi^{-1}(\sigma)\vartheta_{\mathfrak{a},V}^{\Psi}(Q_{m,n}^{\sigma})\] \[=-(k-1)!\tau(\kappa)\left(\frac{N\mathfrak{h}\sqrt{d_{K}}}{2\pi} \right)^{j}\left(N(\mathfrak{a})-\varphi(\mathfrak{a})^{k-j}\upsilon(\tau_{ \mathfrak{a}})\right)\upsilon(\sigma_{0})\frac{L_{\mathfrak{h}}\left(\overline{ \varphi^{k-j}}\upsilon,k\right)}{(\zeta\rho_{m,n})^{k-j}}.\]
Standard facts about Gauss sums tell us that \(\operatorname{ord}_{\pi}(\tau(\kappa))=0\) since the conductor of \(\kappa\) is coprime to \(p\). Finally, as \(\upsilon\) is a finite character, \(\upsilon(\sigma_{0})\) is a root of unity. By our choice of \(\mathfrak{h}\), we also know that \(N\mathfrak{h}\) is coprime to \(p\). This completes the proof of the lemma.
We now study the factor \(N(\mathfrak{a})-\varphi(\mathfrak{a})^{k-j}\upsilon(\tau_{\mathfrak{a}})\). Recall that \(\tau_{\mathfrak{a}}\) denotes \((\mathfrak{a},F_{m,n})\), and thus depends on \(m\) and \(n\), a priori. However, we may regard it as an element of \(\operatorname{Gal}(F_{\infty}/K)\) since the Artin symbols \(\tau_{\mathfrak{a}}\) are compatible under restriction as \(\mathfrak{h}\) varies over ideals dividing \(\mathfrak{g}q^{\infty}\).
**Lemma 6.10**.: _For a Zariski dense set of \(\kappa\), we have_
\[\operatorname{ord}_{\pi}\left(N(\mathfrak{a})-\varphi^{k-j}(\mathfrak{a}) \upsilon(\tau_{\mathfrak{a}})\right)=0.\]
Proof.: Suppose the contrary. Let \((\zeta_{1},\zeta_{2})=(\kappa(\tau_{\mathfrak{a},\mathfrak{a}}),\kappa(\tau_{ \mathfrak{a},\mathfrak{a}^{*}}))\in\mu_{q^{\infty}}^{2}\), where \(\tau_{\mathfrak{a},\mathfrak{l}}\) denotes the restriction of \(\tau_{\mathfrak{a}}\) to \(\operatorname{Gal}(\mathscr{R}(\mathfrak{g}^{[\infty})/\mathscr{R}(\mathfrak{ g}))\). Then,
\[\operatorname{ord}_{\pi}\left(N(\mathfrak{a})-\varphi^{k-j}(\mathfrak{a}) \upsilon(\tau_{\mathfrak{a}})\right)=0\]
if and only if
\[N(\mathfrak{a})\varphi(\mathfrak{a})^{j-k}\not\equiv\zeta_{1}\zeta_{2}\mod \pi\mathfrak{O}\]
since \((\mathfrak{a},\mathscr{R}(\mathfrak{f}))=1\), which implies that \(\chi_{0}(\tau_{\mathfrak{a}})=1\). Note that the left-hand side is independent of \(\kappa\). In particular, this condition is invariant under the map \((\zeta_{1},\zeta_{2})\mapsto(\zeta_{1},\zeta_{2})^{p^{r}}\), where \(p^{r}\) is the cardinality of the residue field of \(\mathfrak{O}\).
Our assumption that the set of \(\kappa\) satisfying the stated property above is not Zariski dense allows us to apply Lemma 4.4. Let \(P\) be the power of \(p^{r}\) given by the said lemma. In particular, under the isomorphism \(\mu_{q^{\infty}}^{2}\simeq(\mathbb{Q}_{q}/\mathbb{Z}_{q})^{2}\), there exists an arbitrary large \(n\) such that
\[N(\mathfrak{a})\varphi(\mathfrak{a})^{j-k}\equiv\zeta_{1}\zeta_{2}\mod\pi \mathfrak{O} \tag{6.10}\]
for all \((\zeta_{1},\zeta_{2})\) which can be identified with \(\left(\frac{P^{x}}{q^{n}},\frac{P^{y}}{q^{n}}\right)\), where \(x,y\in\mathbb{Z}\). In particular, Remark 4.5 tells us that there are \(q^{2(n-v)}\) such elements, where \(v=\operatorname{ord}_{q}(P-1)\).
Note that \(q\)-power roots of unity modulo \(\pi\mathfrak{O}\) are distinct since \(p\neq q\). Suppose that the left-hand side of (6.10) modulo \(\pi\) is a \(q^{m}\)-th root of unity, where \(m<n\). Then, for each \(q^{n}\)-th root of unity \(\zeta_{1}\), there are exactly \(q^{m}\) choices of \(q^{n}\)-th roots of unity \(\zeta_{2}\) such that (6.10) holds. This gives us at most \(q^{n+m}\) choices of \((\zeta_{1},\zeta_{2})\in\mu_{q^{n}}^{2}\). But this is a contradiction as soon as \(n+m<2(n-v)\).
**Remark 6.11**.: _For a given ideal \(\mathfrak{a}\), denote the Zariski dense set of characters described in Lemma 6.10 by \(Z_{\mathfrak{a}}\). This set is defined by the equation_
\[f(\mathfrak{a}):=N(\mathfrak{a})-\varphi^{k-j}(\mathfrak{a})\upsilon(\tau_{ \mathfrak{a}})\not\equiv 0\mod\pi\mathfrak{O}.\]
_But note that \(Z_{\mathfrak{a}}=\bigcup_{m\in\pi\mathfrak{O}}Z_{m,\mathfrak{a}}\) where each \(Z_{m,\mathfrak{a}}\) is defined by equation_
\[f_{m}(\mathfrak{a}):=f(\mathfrak{a})-m\neq 0,\]
_as \(m\) varies over elements of \(\pi\mathfrak{O}\). Since each \(Z_{m,\mathfrak{a}}\) is Zariski open, we have that \(Z_{\mathfrak{a}}\) is Zariski open._
Once we combine Lemmas 6.9 and 6.10 with Theorem 5.1, Theorem A follows.
## 7. Proof of Theorem B
We continue employing the notation introduced in SS6. Throughout this section, we assume that \(j=0\). In addition, we assume that the character \(\chi_{0}\) of \(\operatorname{Gal}(\mathscr{R}(\mathfrak{f})/K)\) from SS2 satisfies
\[\operatorname{ord}_{\pi}\left(\sum_{i\in I}\frac{\chi_{0}(\mathfrak{b}_{i})}{ \varphi(\mathfrak{b}_{i})^{k}}\right)=0. \tag{7.1}\]
**Remark 7.1**.: _Note that the \(\pi\)-adic valuation in (7.1) is always non-negative since \(\mathfrak{b}_{i}\) are coprime to \(\mathfrak{p}\). Suppose that \(p\nmid[\mathscr{R}(\mathfrak{f}):K]\), then there exists at least a character \(\rho\) of \(\operatorname{Gal}(\mathscr{R}(\mathfrak{f})/K)\) such that \(\rho\chi_{0}\) satisfies (7.1). Indeed,_
\[\sum_{\rho\in\operatorname{Gal}(\widehat{\mathscr{R}}(\mathfrak{f})/K)}\sum_{i \in I}\frac{\rho\chi_{0}(\mathfrak{b}_{i})}{\varphi(\mathfrak{b}_{i})^{k}}=[ \mathscr{R}(\mathfrak{f}):K].\]
_Therefore, if \(\operatorname{ord}_{\pi}([\mathscr{R}(\mathfrak{f}):K])=0\), at least one of the summands should have zero \(\pi\)-adic valuation._
The following lemma generalizes [15, Lemma 6.7] and is crucial in our proof of Theorem B.
**Lemma 7.2**.: _Suppose that our auxiliary ideal \(\mathfrak{a}\) is chosen so that \(\gcd\left(\mathfrak{a},6q\mathfrak{p}\mathfrak{g}\prod_{i\in I}\mathfrak{b}_{ i}\right)=1\), \((\mathfrak{a},\mathscr{R}(\mathfrak{f}))=1\), and \(\mathfrak{a}\equiv 1\mod\mathfrak{f}\). Then,_
\[\operatorname{ord}_{\pi}\left(\vartheta_{\mathfrak{a},V}^{\Psi}\right)= \operatorname{ord}_{\pi}\left((k-1)!\right).\]
Proof.: Let us first recall the following facts proved in [15, proof of Lemma 6.7].
1. The rational function \(\mathscr{D}_{0,k}(\gamma_{\mathfrak{b},\mathfrak{a}})\) on \(E^{(\mathfrak{b})}\) has poles of order \(k\) at all the elements of \(P\in E^{(\mathfrak{b})}_{\mathfrak{a}}\setminus\{0\}\), with leading coefficient with respect to \(z-z_{P}\) equal to \((k-1)!\).
2. Furthermore, \(\mathscr{D}_{0,k}(\gamma_{\mathfrak{b},\mathfrak{a}})\) has a pole of order \(k\) at \(P=0\), with leading coefficient with respect to \(z\) equal to \(N(\mathfrak{a})-1\).
3. The poles described above are the only poles of \(\mathscr{D}_{0,k}(\gamma_{\mathfrak{b},\mathfrak{a}})\).
4. Let \(x_{\mathfrak{b}}\) and \(y_{\mathfrak{b}}\) be the functions sending a point \(P\in E^{(\mathfrak{b})}\) to its \(x\)- and \(y\)-coordinates given by the Weierstrass equation (6.3). The only zeros of the function \(x_{\mathfrak{b}}(P)-x_{\mathfrak{b}}(R)\) are \(P=R\) and \(P=\ominus R\). If \(x_{\mathfrak{b}}(R)\neq 0\), these are simple zeros and the leading coefficient with respect to \(z-z_{P}\) is given by \(y_{\mathfrak{b}}(P)\).
Let \(i\in I\). Since \(\mathfrak{a}\) is coprime to \(\mathfrak{b}_{i}\), the isogeny \(\lambda(\mathfrak{b}_{i})\) induces an isomorphism \(E_{\mathfrak{a}}\simeq E^{(\mathfrak{b}_{i})}_{\mathfrak{a}}\). Therefore, by (c), the poles of \(\mathscr{D}_{0,k}(\gamma_{\mathfrak{b}_{i},\mathfrak{a}})\circ\lambda( \mathfrak{b}_{i})\) are precisely the elements in \(E_{\mathfrak{a}}\). Recall from Definition 6.8 that
\[\vartheta_{\mathfrak{a},V}^{\Psi}(P)=\sum_{i\in I}\frac{\chi_{0}(\mathfrak{b} _{i})\Lambda(\mathfrak{b}_{i})^{k}}{\varphi(\mathfrak{b}_{i})^{k}}\sum_{ \delta\in\operatorname{Gal}(\mathscr{R}(\mathfrak{g})/\mathscr{R}(\mathfrak{ f}))}\mathscr{D}_{0,k}(\gamma_{\mathfrak{b}_{i},\mathfrak{a}})\circ \lambda(\mathfrak{b}_{i})(V^{\delta}\oplus P).\]
In particular, the poles of \(\vartheta_{\mathfrak{a},V}^{\Psi}(P)\) are given by \(U\ominus V^{\delta}\), where \(U\in E_{\mathfrak{a}}\) and \(\delta\in\operatorname{Gal}(\mathscr{R}(\mathfrak{g})/\mathscr{R}(\mathfrak{ f}))\).
Let \(P\) be a pole of \(\mathscr{D}_{0,k}(\gamma_{\mathfrak{b}_{i},\mathfrak{a}})\circ\lambda( \mathfrak{b}_{i})\). By (6.4), the leading coefficient of \(\mathscr{D}_{0,k}(\gamma_{\mathfrak{b}_{i},\mathfrak{a}})\circ\lambda( \mathfrak{b}_{i})\) with respect to \(z-z_{P}\) is that of \(\mathscr{D}_{0,k}(\gamma_{1,\mathfrak{a}})\) multiplied by \(\Lambda(\mathfrak{b}_{i})^{-k}\), where \(\gamma_{1,\mathfrak{a}}\) denotes the rational function on \(E\) (so corresponding to the choice of \(i\) gives \(E^{(\mathfrak{b}_{i})}=E\)). Consequently, by (a) the leading coefficient of \(\vartheta_{\mathfrak{a},V}^{\Psi}\) with respect to \(z-z_{P}\), when \(P\) is the pole \(U\ominus V^{\sigma}\) where \(U\in E_{\mathfrak{a}}\setminus\{0\}\), is given by
\[(k-1)!\sum_{i\in I}\frac{\chi_{0}(\mathfrak{b}_{i})}{\varphi(\mathfrak{b}_{i}) ^{k}},\]
which has \(\pi\)-adic valuation equal to \(\operatorname{ord}_{\pi}\left((k-1)!\right)\) by assumption (7.1).
Let \(i\in I\), \(\delta\in\operatorname{Gal}(\mathscr{R}(\mathfrak{g})/\mathscr{R}(\mathfrak{ f}))\) and \(Q\in E_{\mathfrak{a}}\setminus\{0\}\). By (d), the rational functions (on \(E\)) given by \(x_{\mathfrak{b}_{i}}\circ\lambda(\mathfrak{b}_{i})(P\oplus V^{\delta})-x_{ \mathfrak{b}_{i}}\circ\lambda(\mathfrak{b}_{i})(Q)\) and \(x(P\oplus V^{\delta})-x(Q)\) (where \(x\) denotes the \(x\)-coordinate function on \(E\)) have the same zeros. Furthermore, by (6.4), the leading terms of these two rational
functions differ by the constant \(\Lambda(\mathfrak{b}_{i})\). Consequently, these two functions differ by a unit in \(\mathfrak{O}\). Therefore, as in [15, proof of Lemma 6.7], we can write
\[\vartheta_{\mathfrak{a},V}^{\Psi}(P)=g(P)\prod_{\begin{subarray}{c}\delta\in \operatorname{Gal}(\mathscr{R}(\mathfrak{g})/\mathscr{R}(\mathfrak{f}))\\ Q\in\left(E_{\mathfrak{a}}\setminus\{0\}\right)/\pm 1\end{subarray}}\left((x(P\oplus V ^{\delta})-x(Q)\right)^{-k},\]
where \(g\) is a rational function on \(E\) belonging to
\[\mathfrak{O}\Big{[}x\left(\lambda(\mathfrak{b}_{i})(P\oplus V^{\delta}) \right),y\left(\lambda(\mathfrak{b}_{i})(P\oplus V^{\delta})\right):i\in I, \delta\in\operatorname{Gal}(\mathscr{R}(\mathfrak{g})/\mathscr{R}(\mathfrak{f }))\Big{]}.\]
In particular \(\operatorname{ord}_{\pi}(g)\geq 0\).
As has been established in [15, proof of Lemma 6.7], the functions \(x(P\oplus V^{\delta})-x(Q)\) take values in \(\mathfrak{O}^{\times}\) for almost all \(P\). Furthermore, by comparing leading terms at \(P=U\ominus V^{\delta}\), we deduce that \(g\) takes values in \(\mathfrak{O}^{\times}\) at these points. Thus, \(\operatorname{ord}_{\pi}(g)=0\), which concludes the proof.
We can now prove Theorem B. Let \(\upsilon=\kappa\overline{\chi}_{0}\) as before. By an argument similar to Lemma 6.10 it suffices to prove the theorem for imprimitive values \(L^{(\operatorname{alg})}_{\mathfrak{h}}\left(\overline{\Psi\kappa}\right)\) because for almost all finite-order characters \(\kappa\) of \(\operatorname{Gal}(K_{\infty}/K)\), we have
\[\operatorname{ord}_{\pi}\left(L^{(\operatorname{alg})}\left(\overline{\Psi \kappa}\right)\right)=\operatorname{ord}_{\pi}\left(L^{(\operatorname{alg})}_ {\mathfrak{h}}\left(\overline{\Psi\kappa}\right)\right).\]
Indeed, for any prime ideal \(\mathfrak{r}\) of \(K\) and for almost all characters \(\kappa\),
\[\operatorname{ord}_{\pi}\left(1-\frac{\overline{\Psi\kappa(\mathfrak{r})}}{N( \mathfrak{r})^{k}}\right)=0\]
as the \(q\)-power roots of unity modulo \(\pi\mathfrak{O}\) are distinct since \(p\neq q\).
Lemma 7.2 asserts that \(\operatorname{ord}_{\pi}\vartheta_{\mathfrak{a},V}^{\Psi}=\operatorname{ord}_ {\pi}\left((k-1)!\right)\). In particular, the associated elliptic function measure \(\alpha^{*}\) satisfies \(\operatorname{ord}_{\pi}\alpha^{*}=\operatorname{ord}_{\pi}\left((k-1)!\right)\). Therefore, on combining Lemma 6.9 with Theorem 5.1, we deduce that for a Zariski dense set of \(\kappa\), we have
\[\operatorname{ord}_{\pi}\left(\left(N(\mathfrak{a})-\varphi(\mathfrak{a})^{k} \upsilon(\tau_{\mathfrak{a}})\right)L^{(\operatorname{alg})}_{\mathfrak{h}} \left(\overline{\Psi\kappa}\right)\right)=0.\]
The same argument as in Remark 6.11 shows that this Zariski dense set is also open. Since the intersection of two open dense sets is open dense, there exists a dense set of characters \(\kappa\) with
\[\operatorname{ord}_{\pi}\left(L^{(\operatorname{alg})}\left(\overline{\Psi \kappa}\right)\right)=0.\]
## Appendix A appendix
In this appendix we carry out a technical calculation required in the proof of Lemma 6.7. For this calculation, we rely heavily on the work of de Shalit in [11]. In particular, we express special \(L\)-values in terms of logarithmic derivatives of rational functions. We do so by relating both of these quantities to values of Eisenstein series.
### Relating rational functions to Eisenstein series
As in the main text, let \(K\) be an imaginary quadratic field and \(H/K\) be the Hilbert class field of \(K\). Let \(E_{/H}\) be a CM elliptic curve with CM by \(\mathcal{O}_{K}\) and \(\mathcal{L}\) be the associated lattice. Let \(\mathfrak{a}\) and \(\mathfrak{b}\) be ideals of \(K\) such that \(\mathfrak{b}\) is coprime to \(6\mathfrak{f}\). With respect to \(\mathcal{L}_{\mathfrak{b}}\), we can define an _elliptic function_, denoted by \(\Theta(z;\mathcal{L}_{\mathfrak{b}},\mathfrak{a})\), as in [11, Chapter II, Section 2.3, (10) on p. 49]. Let \(\xi_{\mathfrak{b}}\) be the isomorphism of complex Lie groups defined in (6.2). It follows from [11, (16) on p. 54] that for any \(z\in\mathbb{C}\) with \(P=\xi_{\mathfrak{b}}(z)\in E^{(\mathfrak{b})}\),
(A.1) \[\Theta(z;\mathcal{L}_{\mathfrak{b}},\mathfrak{a})=C_{\mathfrak{b},\mathfrak{ a}}\cdot\zeta_{\mathfrak{b},\mathfrak{a}}(P)^{12},\]
where \(\zeta_{\mathfrak{b},\mathfrak{a}}(P)\) is the rational function introduced in (6.6) and \(C_{\mathfrak{b},\mathfrak{a}}\) is some constant that is independent of \(P\) and \(z\) (the power of \(12\) appears because the product in (6.6) is taken over \(\mathfrak{a}\)-torsions modulo \(\pm 1\), whereas the product in [11, (16) on p. 54] is taken over all non-trivial \(\mathfrak{a}\)-torsions, without modulo \(\pm 1\)).
For integers \(k\geq 1\) and \(0\leq-j<k\), let \(E_{j,k}(z,\mathcal{L}_{\mathfrak{b}})\) be the \((j,k)\)-th _Eisenstein series_ associated to the lattice \(\mathcal{L}_{\mathfrak{b}}\) given as in [11, (5) on p. 57]. Notice that when \(k+j\geq 3\), we have explicitly
\[E_{j,k}(z,\mathcal{L}_{\mathfrak{b}})=(k-1)!A(\mathcal{L}_{\mathfrak{b}})^{j} \sideset{}{{}^{\prime}}{\sum}_{w\in\mathcal{L}_{\mathfrak{b}}}\frac{(\overline {z}+\overline{w})^{k-j}}{\left|z+w\right|^{2k}}=(k-1)!A(\mathcal{L}_{\mathfrak{ b}})^{j}\sideset{}{{}^{\prime}}{\sum}_{w\in\mathcal{L}_{\mathfrak{b}}}\frac{( \overline{z}+\overline{w})^{k}(z+w)^{j}}{\left|z+w\right|^{2(k+j)}}.\]
Here, the sum runs over all \(w\in\mathcal{L}_{\mathfrak{b}}\) except possibly \(w=-z\) if \(z\in\mathcal{L}_{\mathfrak{b}}\). Further, for each integral ideal \(\mathfrak{a}\), we can define (see [11, (5) on p. 57])
\[E_{j,k}(z;\mathcal{L}_{\mathfrak{b}},\mathfrak{a})=(N\mathfrak{a})E_{j,k}(z, \mathcal{L}_{\mathfrak{b}})-E_{j,k}(z,\mathfrak{a}^{-1}\mathcal{L}_{\mathfrak{ b}}).\]
From (A.1), we deduce that, for \(k\geq 1\),
(A.2) \[\begin{split} 12\mathscr{D}_{j,k}(\gamma_{\mathfrak{b}, \mathfrak{a}})(P)&=\mathcal{D}_{\mathfrak{b}}^{-j}\partial^{k+j} \log\Theta(z;\mathcal{L}_{\mathfrak{b}},\mathfrak{a})\\ &=-12E_{j,k}(z,\mathcal{L}_{\mathfrak{b}},\mathfrak{a})\quad \text{ by \@@cite[cite]{[\@@bibref{}{Kas:2012}{}{}, Chapter II, Section 3.1, (7) on p. 58]}.}\end{split}\]
### Relating Eisenstein series to rational \(L\)-values
Recall that \(\mathfrak{f}\) is an ideal of \(\mathcal{O}_{K}\) that is divisible by the conductor of the Hecke character \(\varphi\). Let \(\mathfrak{m}\) be a principal ideal of \(\mathcal{O}_{K}\) such that \(\mathfrak{f}\mid\mathfrak{m}\). Let \(\mathfrak{c}\) be another ideal which is coprime to \(\mathfrak{m}\). Then for any \(\Omega\in\mathbb{C}^{\times}\)[11, Chapter II, Proposition 3.5, p. 62] asserts that
(A.3) \[(N\mathfrak{m}^{-j})E_{j,k}\left(\Omega,\mathfrak{c}^{-1}\mathfrak{m}\Omega \right)=(k-1)!\left(\frac{\sqrt{d_{K}}}{2\pi}\right)^{j}\Omega^{j-k}\varphi( \mathfrak{c})^{k-j}L_{\mathfrak{m}}(\overline{\varphi^{k-j}},k,(\mathfrak{c },\mathscr{R}(\mathfrak{m}))).\]
Let \(\alpha\in\mathcal{O}_{K}\) be a generator of our chosen principal ideal \(\mathfrak{m}\). We choose \(\Omega\in\mathbb{C}^{\times}\) in (A.3) to be the period \(\Omega_{\infty}\) so that
\[\mathcal{L}=\Omega_{\infty}\mathcal{O}_{K}.\]
Let \(\rho\) be the primitive \(\mathfrak{m}\)-division point on \(\mathbb{C}/\mathcal{L}\) given by \(\rho=\frac{\Omega_{\infty}}{\alpha}\). Then,
\[E_{j,k}\left(\Omega_{\infty},\mathfrak{c}^{-1}\mathfrak{m} \Omega_{\infty}\right) =E_{j,k}\left(\rho\alpha,\mathfrak{c}^{-1}\mathfrak{m}\Omega_{ \infty}\right)\] \[=E_{j,k}\left(\rho\alpha,\mathfrak{c}^{-1}\mathfrak{m}\mathcal{L}\right)\] \[=\alpha^{j-k}E_{j,k}\left(\rho,\mathfrak{c}^{-1}\mathcal{L} \right)\quad\text{by \@@cite[cite]{[\@@bibref{}{Kas:2012}{}{}, Proposition 3.3(i), p. 58]}}\] \[=\alpha^{j-k}\Lambda(\mathfrak{c})^{k-j}E_{j,k}\left(\rho, \mathcal{L}\right)^{(\mathfrak{c},\mathscr{R}(\mathfrak{m}))}\quad\text{ by \@@cite[cite]{[\@@bibref{}{Kas:2012}{}{}, Proposition 3.3(iii), p. 58]},}\]
where \(\Lambda(\mathfrak{c})\in H^{\times}\) is defined as in (6.1).
Combined with (A.3), the above calculation shows that
(A.4) \[\begin{split}(k-1)!L_{\mathfrak{m}}\left(\overline{\varphi^{k-j}},k, (\mathfrak{c},\mathscr{R}(\mathfrak{m}))\right)&=\left(\frac{ \Lambda(\mathfrak{c})\Omega_{\infty}}{\alpha\varphi(\mathfrak{c})}\right)^{k-j }\left(\frac{2\pi}{N\mathfrak{m}\sqrt{d_{K}}}\right)^{j}E_{j,k}\left(\rho, \mathcal{L}\right)^{(\mathfrak{c},\mathscr{R}(\mathfrak{m}))}\\ &=\left(\frac{\Lambda(\mathfrak{c})\rho}{\varphi(\mathfrak{c})} \right)^{k-j}\left(\frac{2\pi}{N\mathfrak{m}\sqrt{d_{K}}}\right)^{j}E_{j,k} \left(\Lambda(\mathfrak{c})\rho,\mathcal{L}_{\mathfrak{c}}\right)\quad\text{ by \@@cite[cite]{[\@@bibref{}{AJS87}{}{}, (8), p. 58]}}.\end{split}\]
**Remark A.1**.: _In the special case when \(H=K\) and \(E\) is defined over \(K\), (i.e., \(K\) has class number 1) we know from [10] that \(\Lambda(\mathfrak{c})=\varphi(\mathfrak{c})\). Moreover, it is also clear in this case that \(\psi=\varphi\). Therefore, on taking \(j=0\), we obtain_
\[L_{\mathfrak{m}}\left(\overline{\psi^{k}},k,(\mathfrak{c},\mathscr{R}( \mathfrak{m}))\right)=\frac{\rho^{k}}{(k-1)!}E_{k}\left(\psi(\mathfrak{c})\rho,\mathcal{L}_{\mathfrak{c}}\right)\]
_(c.f. [1, Theorem 6.2])._
### Relating rational functions to \(L\)-values
Our final step is to combine the calculations in the previous two sections to relate the image of the operator \(\mathscr{D}_{j,k}\) applied to our chosen rational function to the \((j,k)\)-th Eisenstein series. Let \(P\) be an \(\mathfrak{m}\)-torsion on \(E\). We know from (A.2) that
\[\mathscr{D}_{j,k}(\gamma_{\mathfrak{c},\mathfrak{a}})(P) =-E_{j,k}(z;\mathcal{L}_{\mathfrak{c}},\mathfrak{a})\] \[=-\left((N\mathfrak{a})E_{j,k}(z,\mathcal{L}_{\mathfrak{c}})-E_{ j,k}(z,\mathfrak{a}^{-1}\mathcal{L}_{\mathfrak{c}})\right)\] \[=-\left((N\mathfrak{a})E_{j,k}(z,\mathcal{L}_{\mathfrak{c}})- \Lambda(\mathfrak{a})^{k-j}E_{j,k}(z,\mathcal{L}_{\mathfrak{c}})^{( \mathfrak{a},\mathscr{R}(\mathfrak{m}))}\right)\text{ by \@@cite[cite]{[\@@bibref{}{AJS87}{}{}, Prop. 3.3(iii), p. 58]}}\] \[=-\left((N\mathfrak{a})-\Lambda(\mathfrak{a})^{k-j}(\mathfrak{a},\mathscr{R}(\mathfrak{m}))\right)E_{j,k}(z,\mathcal{L}_{\mathfrak{c}}).\]
Now, choosing \(P=\xi_{\mathfrak{c}}\left(\Lambda(\mathfrak{c})\rho\right)\), we deduce that
(A.5) \[\begin{split}\mathscr{D}_{j,k}(\gamma_{\mathfrak{c},\mathfrak{a }})(P)&=-(k-1)!\left((N\mathfrak{a})-\Lambda(\mathfrak{a})^{k-j}( \mathfrak{a},\mathscr{R}(\mathfrak{m}))\right)\left(\frac{\varphi(\mathfrak{c })}{\rho\Lambda(\mathfrak{c})}\right)^{k-j}\times\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \left(\frac{N\mathfrak{m}\sqrt{d_{K}}}{2\pi}\right)^{j}L_{\mathfrak{m}}\left( \overline{\varphi^{k-j}},k,(\mathfrak{c},\mathscr{R}(\mathfrak{m}))\right), \end{split}\]
which is the formula that is utilized in the proof of Lemma 6.7.
|
2301.05463 | Evolution of the spectral lineshape at the magnetic transition in
Sr2IrO4 and Sr3Ir2O7 | Sr2IrO4 and Sr3Ir2O7 form two families of spin-orbit Mott insulators with
quite different charge gaps and an antiferromagnetic (AF) ground state. This
offers a unique opportunity to study the impact of long-range magnetic order in
Mott insulators. It appears to play a different role in the two families, as
there is almost no change of the resistivity at the magnetic transition TN in
Sr2IrO4 and a large one in Sr3Ir2O7. We use angle-resolved photoemission to
study the evolution of the spectral lineshape through the magnetic transition.
We use Ru and La substitutions to tune TN and discriminate changes due to
temperature from those due to magnetic order. We evidence a shift and a
transfer of spectral weight in the gap at TN in Sr3Ir2O7, which is absent in
Sr2IrO4. We assign this behavior to a significantly larger coherent
contribution to the spectral lineshape in Sr3Ir2O7, which evolves strongly at
TN. On the contrary, the Sr2IrO4 lineshape is dominated by the incoherent part,
which is insensitive to TN. We compare these findings to theoretical expections
of the Slater vs Mott antiferromagnetism within Dynamical Mean Field Theory. | Paul Foulquier, Marcello Civelli, Marcelo Rozenberg, Alberto Camjayi, Joel Bobadilla, Dorothee Colson, Anne Forget, Pierre Thuery, Francois Bertran, Patrick Le Fevre, Veronique Brouet | 2023-01-13T10:10:46Z | http://arxiv.org/abs/2301.05463v1 | Evolution of the spectral lineshape at the magnetic transition in Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\)
###### Abstract
Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) form two families of spin-orbit Mott insulators with quite different charge gaps and an antiferromagnetic (AF) ground state. This offers a unique opportunity to study the impact of long-range magnetic order in Mott insulators. It appears to play a different role in the two families, as there is almost no change of the resistivity at the magnetic transition \(T_{N}\) in Sr\({}_{2}\)IrO\({}_{4}\) and a large one in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\). We use angle-resolved photoemission to study the evolution of the spectral lineshape through the magnetic transition. We use Ru and La substitutions to tune \(T_{N}\) and discriminate changes due to temperature from those due to magnetic order. We evidence a shift and a transfer of spectral weight in the gap at \(T_{N}\) in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), which is absent in Sr\({}_{2}\)IrO\({}_{4}\). We assign this behavior to a significantly larger coherent contribution to the spectral lineshape in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), which evolves strongly at \(T_{N}\). On the contrary, the Sr\({}_{2}\)IrO\({}_{4}\) lineshape is dominated by the incoherent part, which is insensitive to \(T_{N}\). We compare these findings to theoretical expections of the Slater vs Mott antiferromagnetism within Dynamical Mean Field Theory.
## I Introduction
The motion of one hole in an antiferromagnetic (AF) background is a central problem for many correlated systems, among which high temperature cuprate superconductors remain a hallmark. As moving one hole necessarily breaks some AF bonds, the charge and spin sectors become intimately intricated. The relative energy scale for charges are of the order of the charge gap \(\Delta\), set by Coulomb repulsion U in a Mott insulator, independently of the magnetic order. AF order usually sets in at a lower temperature, depending on magnetic couplings J. Most treatments of the metal-insulator transition (MIT) are implicitely deep in the Mott insulating state and neglect the role of magnetic order. This is typically relevant for cuprates where U\(\rtimes\)J. However, when the two energy scales become similar, closer to the MIT, the interplay between the two degrees of freedom may become quite complex [1; 2]. The role of AF long-range order on the MIT has been treated with DMFT theory in ref. [3] and predicts the coexistence of Mott-like and Slater-like excitations. To our knowledge, this has never been directly compared to experiments.
Iridates offer a unique opportunity to carry over this comparison by tuning (\(\Delta\), J) parameters. Indeed, the first two members of the Ruddlesden-Popper perovskite serie, Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), have quite significantly different charge gaps \(\Delta\) at low temperatures, but similar magnetic transition temperature T\({}_{N}\). The two families can essentially be understood [4] as built from two filled J\({}_{3/2}\) bands and a half-filled J\({}_{1/2}\) band, which is split by the electronic correlation opening a gap \(\Delta\) (see sketch in Fig. 1a, J is the effective angular momentum). The Mott nature of the insulating state in Sr\({}_{2}\)IrO\({}_{4}\) was supported by cluster-DMFT calculations [5]. Sr\({}_{2}\)IrO\({}_{4}\) is built from single IrO\({}_{2}\) layers, stacked with SrO layers, and \(\Delta\simeq 0.6eV\) was estimated by optical spectroscopy [6], STM [7; 8; 9] and ARPES using a small electron doping to visualize the whole gap [10]. For the bilayer version Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), the gap is smaller \(\Delta\simeq 0.2eV\)[11; 12; 13]. The different \(\Delta\) in the two families is essentially understood from the different effective dimensionality of the structure, leading to larger bandwidth for the bilayer compared to single layer [6]. There are also some more subtle differences in the electronic structures. For example, interactions within the bilayer bands in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) leads to a non-interacting semi-metallic electronic structure with an indirect gap of 0.05eV [14], which is simply enlarged by correlations [15; 13].
The two compounds display slight differences also in the AF magnetic structure. In Sr\({}_{2}\)IrO\({}_{4}\) a transition takes place at T\({}_{N}\)=240K to a canted in-plane AF state, where RIXS measured Heisenberg-like magnon dispersion over 0.2eV, characterized by J=0.06eV between first neighbors [16]. In Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), the magnetic transition at T\({}_{N}\)=280K gives rise to an AF order with moments along the c-axis [17]. The magnetic interaction have similar order of magnitude (J=0.09eV), but the magnon dispersion is characterized by a large gap of 0.07eV due to pseudodipolar interactions in this geometry [18].
On one hand, the two compounds display insulating behavior in transport measurements above T\({}_{N}\), up to more than 600K [19; 20]. This suggests that correlations, including short-range magnetic correlations, which persist above T\({}_{N}\)[8; 21], are responsible for the insulating properties, rather than the magnetic order. On the other hand, there is a rather strong temperature evolution in optical spectroscopy, characterized by weight appearing in the gap at high temperatures [22; 23; 24], indicative of bad metal
properties. As the evolution is smooth and the transition temperature relatively high, it is difficult to disentangle the role of temperature and magnetic transition. These were tentatively attributed to the temperature dependence of polaronic excitations [25], but never fully clarified. More recently, Song et al. used Ru doping, which decreases T\({}_{N}\), to correlate a transfer of spectral weight with T\({}_{N}\) in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\)[26]. Another optical study in Rh-doped Sr\({}_{2}\)IrO\({}_{4}\) observed a transfer of spectral weight to a mid-infrared peak, which was interpreted as a spin-polaron feature from a one band Hubbard model [24]. From a different viewpoint, STM favors an inhomogeneous picture, where in-gap states are observed near dopants [7; 27] or defects [11] and may lead to a percolative-like MIT [9]. Angle-resolved photoemission could in principle go further by resolving the gapped structure in k-space. Its lineshape could help understand the nature of coherent and incoherent excitations and the possible emergence of in-gap states. However, there are few ARPES data available as a function of temperature and only for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), either pure [28] or doped with La [29] or 30% Ru [26].
In this paper, we study systematically the evolution of the ARPES lineshape through T\({}_{N}\) in Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) and use La [30; 31] and Ru [32; 33; 34] substitutions to tune T\({}_{N}\). La substitutes out-of-plane for Sr (we define x as Sr\({}_{1-x}\)La\({}_{x}\)) and induces electron doping. This leads to the reduction of the magnetic order, which vanishes around x=0.04 [30; 31]. On the contrary, Ru substitutes in-plane for Ir (Ir\({}_{1-x}\)Ru\({}_{x}\)) and, although Ru has one less electron than Ir, it seems there is an electronic phase separation at early dopings, so that Ru dilutes the magnetic state rather than dopes it [32; 33; 34; 35]. A high concentration around x=0.35 is required to suppress the magnetic order and induce a metallic state.
We shall qualitatively compare our results with reference theoretical results obtained by Dynamical Mean Field Theory on the doped Hubbard Model. A detailed
Figure 1: (a) Sketch of the electronic structure expected for Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\). The J\({}_{3/2}\) band is filled with 4 electrons, the J\({}_{1/2}\) band is half-filled and split into Lower Hubbard Band (LHB) and Upper Hubbard Band (UHB) by \(\Delta\). The gap between the tail of the bands is \(\delta\). (b) Temperature dependence of the resistivity in Sr\({}_{2}\)IrO\({}_{4}\) either pure (blue line), or doped with Ru (red-brown lines) or La (green line). The doping levels are indicated on the graph. T\({}_{N}\), determined by magnetic measurements, is indicated by an arrow. (c) Same as (b) for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\).
Figure 2: (a) Temperature dependence of the resistivity obtained within DMFT for strong interaction \(U/D=4\) for various doping x. The magnetic transition temperature \(T_{N}\) is indicated by an arrow. (b) same as (a) for \(U/D=1.7\).
quantitative comparison with iridates would require employing more realistic material approaches, which can include short ranged spacial correlations (e.g. the cluster extension of DMFT), spin-orbit coupling, multi-bands effects and possibly weakly correlated bands within the ab-initio framework. However such studies, which demand future developments [36; 37; 5; 38], are beyond the scope of this paper which focus on the general properties of the antiferromagnetic Slater to Mott crossover. We shall consider two different interaction regimes: the rather weakly correlated one (\(U\)/\(D\)= 1.7), which is dominated by the Slater AF mechanism, and the strongly correlated one (\(U\)/\(D\)= 4), dominated by the Mott localization mechanism. We shall show that as a matter of facts many physical properties of Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) can be described with the former Slater regime, while the Sr\({}_{2}\)IrO\({}_{4}\) well fits the latter Mott regime. This is especially true for transport, but also for the ARPES spectral lines, provided we make some assumption on features related to the chemical substitution and disorder, which can broaden the spectra and affect the Fermi level position. Our experimental-theoretical comparison shows that Ir-based oxides can provide a unique platform to study non-trivial correlation phenomena, like the evolution from weak to strong correlation, the interplay of Slater magnetism and Mott localization and the effects of doping, temperature and disorder.
## II Methods
Single crystals of Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) were grown by a standard flux growth technique. High-purity powders of SrCO\({}_{3}\) (99.995%), IrO\({}_{2}\) (99%), RuO\({}_{2}\) (99.9%) were dried, weighed, and mixed in a glove box under argon with SrCl\({}_{2}\) (99.5%) flux. The mixture was loaded into a platinum crucible covered with a platinum tip, under ambient atmosphere, and heated in a muffle furnace. For Sr\({}_{2}\)IrO\({}_{4}\), we used ratios 2:1:10, heated up to 1300 \({}^{\circ}\) C and then slowly cooled down at a rate of 10\({}^{\circ}\) C/h to 800\({}^{\circ}\) C. For Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), we used ratios 3:2:5, heated at a rate of 190\({}^{\circ}\) C/h up to 1100\({}^{\circ}\) C for 6 hours and then slowly cooled down at a rate of 10\({}^{\circ}\) C/h to 600\({}^{\circ}\) C, at which temperature the furnace was turned off. Deionized water was used to dissolve the SrCl\({}_{2}\) flux and extract the single crystals. The crystals were platelets with larger dimensions between 0.3 and 2 mm, and with the smallest dimension along the [001] direction. The exact composition of each studied sample has been determined via energy-dispersive X-ray spectroscopy (EDS) measurements in several spots of the surface of several crystals from the same batch. The structure was further refined by x-ray diffraction. The results for nine samples of Sr\({}_{3}\)(Ir\({}_{1-x}\)Ru\({}_{x}\))\({}_{2}\)O\({}_{7}\) with x in the 0-0.78 range are given as supplementary material.
ARPES experiments were carried out at the CASSIOPEE beamline of SOLEIL synchrotron, with a SCIENTA R-4000 analyser, 100 eV photon energy and an overall resolution better than 15meV.
DMFT calculations where performed on the Hubbard Model, the reference playground to study correlated phenomena [39]. The model has the typical semi-circular density of states of bandwith \(D\), which fixes the energy unit throughout the paper. The DMFT is implemented by means of the continuous time diagrammatic Quantum Monte Carlo [40]. Spectra are obtained via analytic continuation of the one particle propagator performed by the Maximum entropy method [41]. DMFT allows to unbiasedly access the paramagnetic insulating and metallic states, as well as the ordered antiferromagnetic insulator [2; 3]. Here we shall study how these states evolve and compete upon doping the system with holes.
## III Resistivity behavior
A clear indication of the different role of the magnetic transition in the two families is already evident in the evolution of resistivities plotted in Fig. 1. The arrows indicate T\({}_{N}\), as determined from magnetic measurements (by SQUID in Sr\({}_{2}\)IrO\({}_{4}\) and neutrons in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\)[15; 32; 42]). There is almost no change in Sr\({}_{2}\)IrO\({}_{4}\) at T\({}_{N}\), while there is a clear anomaly in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) leading to a more conducting state in the paramagnetic regime.
The resistivity does not follow a simple activated behavior on the full temperature range. Fitting to an activated law between 100 and 200K gives a gap of the same order of magnitude in the two systems, \(\delta\simeq\)60meV. This is much smaller than the gap \(\Delta\) previously evaluated. It can be understood as the smallest energy distance between the tail of the peaks (see Fig. 1a). This emphasizes the importance of in-gapped low energy states in these systems, but also that limited information on the evolution of \(\Delta\) can be extracted from resistivities alone.
This qualitative behavior is in good agreement with what is expected within a Slater-to-Mott antiferromagnetic crossover. To show this point, we plot in Fig. 2 the resistivity vs temperature obtained by DMFT for two values of the interactions strength, \(U/D=4.0\) (panel a), which is deep in the strongly correlated Mott regime, and \(U/D=1.7\) (panel b), which is in the weakly correlated regime. The system is lightly doped, up to \(x=0-10\%\).
These theoretical curves display a qualitative behavior in line with what is observed in the experimental curves described above in Fig. 1. Namely, in the correlated regime, alike to Sr\({}_{2}\)IrO\({}_{4}\), the resistivity vs \(T\) curve is rather flat and does not display any anomaly in correspondence to the antiferromagnetic transition temperature \(T_{N}\). In sharp contrast, the weakly correlated regime looks like Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), clearly displaying an anomaly in the curve in line with \(T_{N}\). These features disappear at higher doping level in the absence of magnetic transition.
In tracing the comparison between these experimental and theoretical resistivity curves, some caveats must be taken into consideration. In the strongly correlated regime \(U/D=4\), the system immediately becomes
metallic upon doping, though the metallic character is very weak (for example, the absolute value of the resistivity on a doped state curve for \(U/D=4\) is an order of magnitude higher than the resistivity of the weakly correlated \(U/D=1.7\) case). In a real material, such a state would likely display insulating-like properties. Disorder, not taken into account in our theory, can play an important role and localize a small number of carriers. This was observed for instance in Rh-doped Sr\({}_{2}\)IrO\({}_{4}\)[43]. Overall, our theory-experiment comparison enforces the interpretation that Sr\({}_{2}\)IrO\({}_{4}\) is deep in the Mott state, while in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), both correlation and Slater antiferromagnetism play a key role.
## IV Overview of ARPES lineshapes
Fig. 3 and 4 present ARPES Energy Distribution Curves (EDC) as a function of temperature taken at the top of the J\({}_{1/2}\) band, located at the X point of the reciprocal space (see supplementary material [15] for a sketch of the electronic structure). The position of the magnetic transition is indicated by the change of line color (blue to red). In each family, we examine three cases : pristine compound (T\({}_{N}\)=240K for Sr\({}_{2}\)IrO\({}_{4}\) and 280K for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\)), doped with 20% Ru (T\({}_{N}\)=150K and 180K respectively) and with a few percent La [T\({}_{N}\)=200K (2% Sr\({}_{2}\)IrO\({}_{4}\)) and 130K (2.4% Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\))]. More dopings are presented in supplementary information, these ones illustrate the universality of the behavior. Two bands can be observed in each EDC, J\({}_{1/2}\) around -0.2/-0.3 eV binding energies and J\({}_{3/2}\) around -0.8/-1eV [10; 13]. At the bottom of Fig. 3 and 4, we show superimposed spectra at low and high temperatures. To isolate the shape of the J\({}_{1/2}\) peak, we subtract a parabolic baseline (bottom, see supplementary material [15] for more details). The spectra are scaled to their total area. The high temperature spectra is magnified by the indicated number to better compare the lineshapes.
For the pure compounds, a gap \(\Delta\) opens up in J\({}_{1/2}\) at X. The Fermi level is roughly in the middle of the gap \(\Delta\) estimated by STM and optics for the undoped systems.
Figure 3: (a-c) Stacks of EDC spectra at X in the three indicated compounds as a function of temperature (from lowest in blue, 20K, to highest in red, 300K). The top spectrum (light blue) is taken at 20K after a temperature cycle. The blue lines are in the magnetic state, the red ones in the paramagnetic state (T\({}_{N}\)=240K for the pure, 150K doped with 20%Ru and 200K for 2% La). Dotted lines indicate the baseline used in the fit. (d-f) Top : zoom on the spectra superimposed for low and high temperatures, with the baselines as dotted lines. Bottom : corresponding spectra with baseline subtracted. The arrow indicates the size of the gap \(\Delta\) expected at X (0.6eV in each case). The two spectra are normalized to their maximum for easier comparison of the lineshape (high temperature spectra are magnified by the indicated amounts). The fit used to extract position and width in Fig. 5 are shown as dotted lines (often indistinguishable from the raw data). The model chosen for the fit (an asymmetric Gaussian with width \(L_{1}\) and \(L_{2}\)) is also presented.
The peaks are quite broad (0.2eV at half maximum for Sr\({}_{2}\)IrO\({}_{4}\) (Fig. 3d) and 0.15eV for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) (Fig. 4d) and the distance between the tail of the peaks will be significantly smaller than the peak to peak positions, in agreement with the smaller gap \(\delta\) dominating the resistivity, as indicate in Fig. 1a. For a quasiparticle (QP) excitation, the ARPES lineshape should be lorentzian-like with a width given by the QP inverse lifetime. However, the peak here is rather gaussian-like, asymmetric and much broader than what would be expected for a QP. This situation is common to many insulating oxides, as cuprates [44] or manganites [45]. This suggests a composite nature of the line, where the spectrum is the envelope of a distribution of excitations. A possible origin is the formation of polarons [45; 46; 47], as was actually proposed to explain the ARPES linewidth in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\)[28]. In this case, the shape of the peak is fixed by the strength of electron-phonon coupling, asymmetric for low couplings and gaussian for higher ones. As there is no obvious reason why the electron-phonon coupling should be different in Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), this picture does not explain easily the much more asymmetric lineshape of Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\). Also, we will see that the lineshape changes at T\({}_{N}\), whereas no strong evolution of the electron-phonon coupling would a priori be expected there. Indeed, Raman experiments, which are sensitive to the phonon renormalization due to pseudospin-lattice coupling, do not evidence large changes at T\({}_{N}\)[48].
When the samples are lightly doped, the gap essentially does not change (as will be justified later), but the Fermi level moves inside the gap, as a result of filling of the first available states. In Sr\({}_{2}\)IrO\({}_{4}\) (Fig. 3), there is no big change of the lineshape between high and low temperatures, except for a slight broadening on both sides. In particular, there is no sudden shift at T\({}_{N}\), implying that the gap does not suddenly close. The J\({}_{1/2}\) peak intensity however is strongly supressed at high temperature. This effect is only partially reversible and it is difficult to disentangle the role of the temperature increase and of T\({}_{N}\) in this intensity loss.
On the contrary, in pure Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), there is a clear deformation of the spectrum, which extends towards \(E_{F}\) (see Fig. 4). The leading edge shifts up by 50meV, but remains away from \(E_{F}\). A comparison with a Fermi-Dirac distribution at 300K (black line) suggests a remaining "pseudogap" of 50meV in Fig.4d. Nevertheless, some residual density appears at \(E_{F}\), which is consistent with a bad metallic character. The peak maximum itself has not moved significantly, ruling out a sudden collapse of the gap at T\({}_{N}\). This comparison implies that the difference between the two systems is more complex than a gap simply closing in one case and not the other. It seems rather related to a transfer of weight in the gap for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) that is not present for Sr\({}_{2}\)IrO\({}_{4}\). In doped
Figure 4: Same as Figure 3 for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) family. T\({}_{N}\)=280K for the pure, 180K doped with 20%Ru and 130K for 2.4% La. The arrow indicating the size of the gap \(\Delta\) measures 0.3eV. In (d) a Fermi-Dirac step at 300K is shown as thin black line. In (e), the low temperature spectrum shifted to the high temperature one position is shown as thin blue line.
Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), there is in addition a clear shift of the spectra towards \(E_{F}\), both for J\({}_{1/2}\) and J\({}_{3/2}\), bringing weight at the edge of \(E_{F}\). As J\({}_{3/2}\) should not be affected by a gap closure in J\({}_{1/2}\), this behavior suggests a shift of \(E_{F}\) inside the gap. To compare the lineshape, we shift the low temperature spectra to the high temperature one for 20% Ru (thin blue line, shifted up by 70meV). This reveals a similar change of the lineshape as for the pure, with a characteristic extension of the spectrum towards \(E_{F}\). The comparaison is more difficult for the La case, where the low temperature spectrum is broader than the other ones, especially towards \(E_{F}\). This is probably due to some distribution in La content, which hides the intrinsic lineshape at low temperatures.
## V Change at \(T_{n}\)
We now study how these temperature evolutions correlate with the magnetic transition. The magnetic transition is quite broad in doped iridates, probably proceeding through some phase separated region [32]. For Sr\({}_{2}\)IrO\({}_{4}\), we define \(T_{N}\) by the onset of the ferromagnetic signal in SQUID measurements [15]. This is not possible for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), where this signal is very weak, and we rely on neutrons scattering performed in Ru [32] and La [31] doped Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\). We define two temperatures limiting the magnetic transition, \(T_{N,1}\) where magnetic Bragg peaks appear, and \(T_{N,2}\) where their intensity saturates. Experimentally, it is this latter value that corresponds to the anomaly of resistivity, reported as T\({}_{N}\) on Figure 1. To characterize the temperature dependence of the lineshape, we fit spectra at each temperature to an asymmetric gaussian, as shown in Fig. 3d and 4d. This emphasizes the key evolution we have described and limits the number of parameters (more examples of fits are shown in the supplementary material [15]).
We first focus on Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) where there are clear changes. In Fig. 5a, we observe that the change in resistivity starts at \(T_{N,2}\) (solid line) and evoluates until roughly \(T_{N,2}\) (dotted line). Similarly, the shift of the peak position of J\({}_{1/2}\) at X, shown in Fig. 5b, is pre
Figure 5: (a) Resistivity in (doped) Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\). The vertical lines indicate the width of the magnetic transition, as seen by neutron experiments (see text). (b) Shift of the ARPES peak position in (doped) Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) with respect to its low temperature value, for J\({}_{1/2}\) at X (closed symbol) and J\({}_{3/2}\) at \(\Gamma\) (crosses). Raw data are shown in the supplementary material. (c) The two widths \(L_{1}\) (towards high binding energy) and \(L_{2}\) (towards low binding energy) in (doped) Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) extracted from a fit of the J\({}_{1/2}\) peak at X to an asymmetric gaussian. \(L_{2}\) is magnified to match \(L_{1}\) value at low temperatures, as indicated. (d) Same as (c) for (doped) Sr\({}_{2}\)IrO\({}_{4}\). \(T_{N}\) (arrow) is defined here by the onset of the ferromagnetic signal in SQUID measurements [15].
cisely tied to the magnetic transition, it starts at \(T_{N,2}\) and saturates above \(T_{N,1}\). We compare it to the shift of J\({}_{3/2}\) observed at \(\Gamma\) (details are given in supplementary [15]) and find that it exhibits a very similar temperature evolution, although smaller by a factor 2. This suggests that the shift is at least partly due to a motion of \(E_{F}\) in the gap and not simply to a change in the energy scales \(\Delta\) and \(\delta\).
Interestingly, the lineshape also changes at the magnetic transition. In Fig. 5(c), we present the two widths of the asymmetric gaussian, \(L_{1}\) (towards high binding energies) and \(L_{2}\) (towards \(E_{F}\)), scaled to the low temperature value. We find that they are roughly constant below \(T_{N,2}\), but \(L_{2}\) increases above \(T_{N,2}\), strongly diverging from \(L_{1}\), which remains constant or even decreases. This is in agreement with the evolution described in Fig. 4, where only the low energy side of the peak changes at high temperature. This however further demonstrates that this evolution is triggered by magnetic ordering. A first possibility would be that this reflects a distribution of positions arising above \(T_{N}\). As we have seen that the transition is quite inhomogeneous, it has to be considered. However, a narrower lineshape would then be expected again, when the sample has fully transited, contrary to our observation (the linewidth saturates but remains broad above \(T_{N,1}\)). Therefore, although disorder certainly plays a role, it cannot explain the broadening above \(T_{N}\). We then assume that spectral weight is transferred in the gap, deforming the lineshape towards \(E_{F}\).
This behavior (both the shift and the spectral evolution) is intrinsic to Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\). In Fig. 5(d), we give a similar view of the evolution of the widths in Sr\({}_{2}\)IrO\({}_{4}\) (we could not fit reliably the widths at high temperature for high Ru dopings, the spectral weight becoming too small to be separated from the background). They do not point to a systematic change of lineshape at \(T_{N}\). The width broadens moderately with increasing temperature, but it is sometimes \(L_{1}\) that is largest (La doped) or \(L_{2}\) (pure) or they remain similar (Ru doped), with no clear anomaly through \(T_{N}\).
In Fig. 6(a-b), we summarize the evolution of the J\({}_{1/2}\) position in the two families. For Sr\({}_{2}\)IrO\({}_{4}\), there is no particular shift at T\({}_{N}\). In the case of La, we observe a small shift with temperature, but it is not happening at \(T_{N}\) (200K in this case) and it could be due to the formation of defect states in the gap moving \(E_{F}\) to a new position. Indeed, it is not completely reversible (see Fig. 3c). For completeness, we add the case of a small hole doping of Sr\({}_{2}\)IrO\({}_{4}\), obtained by 5% Rh [43] (T\({}_{N}\)=170K), which confirms the absence of shift at \(T_{N}\) in Sr\({}_{2}\)IrO\({}_{4}\) family.
For Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), the shift is significant for all doped compounds and Fig. 6(b) further reveals that all peaks seem to converge to the position of the pure at high temperature, independently of the doping. For the pure compound at low temperature, this position is well understood as corresponding to a Fermi level in the middle of the Mott gap, at \(\Delta/2\). This convergence then means that the Fermi level is fixed at \(\Delta/2\) at high temperatures, for all dopings. This also implies that there remains a Mott-like energy scale \(\Delta\) above T\({}_{N}\) for both families.
Fig. 6(c) summarizes our understanding of the evolution. At low temperatures, the position of the Fermi level is fixed by the doping. It is either near the middle of the gap (pure compound), at the tail of UHB for electron doping (La case) or LHB for hole doping (Rh case). For more disordered situations (Ru case), it is found at intermediate position. Above \(T_{N}\), the Mott-like gap \(\Delta\) is not closing suddenly, neither for Sr\({}_{2}\)IrO\({}_{4}\), nor for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\). There is a remaining LHB at \(\Delta/2\) in both cases, which still dominates the spectral weight. However, spectral weight is transferred in the \(\Delta\) gap, in a much more pronounced way for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) than Sr\({}_{2}\)IrO\({}_{4}\), and this transfer starts and stops over the width of the magnetic transition. This transfer fixes the position of the Fermi level at the center of the \(\Delta\) energy scale, over-ruling the previous disorder/doping dependent position and producing a shift of the spectra in doped cases. By comparison, the absence of shift in Sr\({}_{2}\)IrO\({}_{4}\) appears as a sensitive sign that there is no significant change of the in-gap structure.
Figure 6: Peak position as a function of temperature of the asymmetric gaussian used to fit the J\({}_{1/2}\) EDC at X in Sr\({}_{2}\)IrO\({}_{4}\) (a) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) (b) for the indicated dopings. (c) Sketch of the evolution at \(T_{N}\), involving a transfer of spectral weight from the Hubbard bands to in-gap states. The initial position of \(E_{F}\) for the different dopings is sketched by color lines on the top graph, which all move to the middle of the \(\Delta\) gap above \(T_{N}\). The detail of the bands above \(T_{N}\) cannot be known from ARPES.
To distinguish the "filling" of the Mott gap, from a "closure", we add the case of a higher La doping of 6% in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), where the metallic state is realized [13; 14] (dark yellow triangles in Fig. 6(b)). Clearly, the peak is much closer from \(E_{F}\) at low temperatures in this metallic compound. We will further show in Fig. 8 that the lineshape is also completely different, with a narrow peak.
## VI Discussion
We now compare these experimental spectra with the theoretical expectations in a Mott vs Slater scenario. For this purpose we show in Fig. 7 the DMFT spectra of the Hubbard Model at small doping. On the left column we display the strongly correlated Mott regime (\(U=4D\)), on the right one the weakly interacting Slater regime (\(U=1.7D\)), for increasing temperature (from top to bottom). The system is at small hole doping, \(x=2.5\%\) and \(x=3.5\%\), for strong and weak correlations respectively. Spectra put into evidence the sharp difference between the Mott and the Slater mechanism closing of the gap above \(T_{N}\).
In the Mott regime a Mott energy scale is well defined at low temperature, marked as \(\Delta\) on the top left panel, separating the LHB from the UHB. Upon doping, a small QP peak appears at the Fermi level, at the upper edge of the LHB. The AF is evident from the strong spectral weight differentiation between the spin up (red dashed line) and spin down (blue dashed line) species. This weak QP peak feature in the strong correlation regime is likely to be washed away by disorder and impurity effects in a real material. Indeed, we have seen with Figure 1 that lightly doped Sr\({}_{2}\)IrO\({}_{4}\) is not immediately metallic, contrary to this prediction, and that \(E_{F}\) is not fixed to the upper of LHB, but remains within the gap, at a position depending on doping and/or impurities. The key point is that upon increasing temperature, the spectra are little affected, and AF is suppressed only by recovering the balance between the spin up and spin down spectral weights.
In the Slater regime, we can still identify a Mott energy scale \(\Delta\), separating the LHB and the UHB, though,
Figure 8: (a) ARPES lineshape at 15K for J\({}_{1/2}\) (EDC at X with subtracted background) for Sr\({}_{2}\)IrO\({}_{4}\) (black), Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) (blue) and metallic Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) (doped with 6% La, orange). (b) Same at 300K. (c) Simulation of the low temperature spectra with a peak and hump structure (see text). (d) Same for high temperature spectra. Insets show the evolution in temperature of the peak contributions.
Figure 7: Temperature evolution of antiferromagnetic DMFT spectra (from top to bottom) for the strongly correlated Mott case (\(U/D=4\), left hand side) and the weakly correlated Slater case (\(U/D=1.7\), right hand side). In the strongly correlated case \(U/D=4\), a Mott energy scale \(\Delta\) (separating the Hubbard bands) remains well defined and line-shapes do not essentially move. Antiferromagnetism is restored by the recovering of equal spin up (red) and spin down (blue) spectral weights. For weak correlation \(U/D=1.7\), though Hubbard bands remain visible across T\({}_{N}\), an antiferromagnetic gap \(\delta_{S}\) closes at the Fermi level across T\({}_{N}\), inducing a shift of spectral weight to the Fermi energy, as expected within a Slater mechanism. This forces the Fermi level to be located at the middle of \(\Delta\).
with respect to the correlated Mott regime, this scale is now renormalized to a smaller value than the onsite interaction \(U\). Much more pronounced QP peaks appear now at the edges of the Hubbard bands and they define a small gap \(\delta_{S}\) around the Fermi level. This gap is this time opened by the AF mechanism (besides the spin up and spin down spectral weight differentiation), separating spin up and spin down QP-like peaks. This is the key difference with respect to the Mott regime, for which the QP peak, though weak, is at the Fermi level even at small doping and T\(<\)T\({}_{N}\). Upon increasing the temperature, while the Mott \(\Delta\) scale is only slightly affected, the low \(\delta_{S}\) energy scale collapses at the transition temperature \(T_{N}\). At the same time the spin up and spin down spectral weight imbalance disappears. The distinctive feature is that with the closing of the \(\delta_{S}\) energy scale, a sharp increase of spectral weight appears at the Fermi level explaining the "anomaly" observed in the resistivity.
We now attempt a direct comparison between the experimental spectra and the two theoretical scenarios. In Fig. 8(a-b), we compare three different ARPES lineshapes (a baseline has been subtracted) for large gap \(\Delta\) (Sr\({}_{2}\)IrO\({}_{4}\)), smaller gap (Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\)) and a metallic case (Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) with 6% La). We find that the lineshape at low temperature in Sr\({}_{2}\)IrO\({}_{4}\) (\(L_{1}\)=0.15 and \(L_{2}\)=0.08eV) is broader and more symmetric than in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) (\(L_{1}\)=0.12 and \(L_{2}\)=0.05eV). It narrows down significantly in the metallic state, where a peak with \(L\)=0.025eV dominates. Although none of the spectra exhibits a well resolved peak-hump structure, as for the theoretical spectra, it may be present but hidden by broadening. Assuming this, the different lineshapes mean that the hump dominates in Sr\({}_{2}\)IrO\({}_{4}\), the peak and the hump have similar contributions in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) (building a more asymmetric shape) and the peak dominates in doped Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), as sketched in Fig. 8(c) for low temperatures. As there is no well defined separation between peak and hump, it is of course impossible to refine the fitting further. However, it gives a simple and natural way to explain the evolution from the more insulating state to the more metallic situation, by simply transferring weight from the broad hump to the peak. Such a decomposition was actually already proposed to describe the insulator to metal evolution of the lineshape in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) doped with La [49].
The very interesting point is now to use this underlying structure to better understand the evolution at \(T_{N}\). The experimental high temperature spectra after background subtraction are compared in Fig. 8(b). The absence of sensitivity of Sr\({}_{2}\)IrO\({}_{4}\) to \(T_{N}\) can be explained if the incoherent hump is not sensitive to the magnetic order. This is in agreement with the theoretical scenario for the strongly correlated case, where most of the weight correspond to the Hubbard bands. In Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), most of the changes are due to the evolution of the coherent peak. More specifically, to explain a more symmetric lineshape, the coherent peak has to shift towards \(E_{F}\), as sketched in the inset of Fig.8(d). This is in agreement with the closure of the \(\delta_{S}\) gap described before in the theory of the weakly correlated case. This suggests to identify the \(\delta\) and \(\delta_{S}\) energy scales. We note that the shift in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) is only indirectly due to the closing of \(\delta_{S}\). The main driving force for the shift is the relocation of the Fermi level at the middle of the \(\Delta\) energy scale. Interestingly, there is a clear difference in theory for this position in the Mott and Slater cases at small dopings. In the Mott case, the QP peak forms at the edge of the Hubbard band without a gap. In contrast, in the Slater case, the QP forms at the edge of the remaining "Slater" gap \(\delta_{S}\), around \(\Delta/2\). In the experimental case, it seems that as long as a gap is open (either \(\Delta\) or \(\delta_{S}\)), extrinsic degrees of freedom (disorder/impurities/dopant) may fix the position of the E\({}_{F}\) within the gap. Indeed at low temperatures, the peak positions are similar in Sr\({}_{2}\)IrO\({}_{4}\) and Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) for the same dopings. On the other hand, as soon as the low energy gap \(\delta_{S}\) closes, E\({}_{F}\) is fixed to the theoretical position, inducing a shift in the weakly correlated case.
In Fig. 1, there is obviously a continuity between the metallicity found at high temperatures in the compounds keeping a magnetic ground state and the completely metallic ones. This decomposition bridges the two behaviors. The evolution in metallic Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) implies a rather large broadening of the peak with increasing temperature. Consequently, it is not well resolved from the hump anymore at high temperatures and the metallic nature of this spectrum is not obvious. Indeed, the lineshape for Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) pure and doped with 6% La are very similar (Fig. 8b). This is why identifying the nature of the metallic state at high temperatures was difficult. This decomposition offers a qualitative understanding, but, here again, any more advanced fitting is impossible as it is difficult to separate not only the coherent part from incoherent part, but also the incoherent part from the background at high temperatures.
## VII Conclusion
Iridates offer a unique opportunity to study the crossover from Mott to Slater behaviors as a function of correlation strength. Although there are many examples of Mott oxides with magnetic transitions [50; 51], there is often an orbital degree of freedom, which complicates the analysis of the mere impact of long-range magnetic order on the Mott state. This is not the case in iridates, where the filled J\({}_{3/2}\) states do not take an active part in the transition. Here, we have studied the evolution of ARPES lineshapes corresponding to the half-filled J\({}_{1/2}\) across the temperature driven magnetic transition in Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\) and Sr\({}_{2}\)IrO\({}_{4}\) compounds, using different dopings to establish the universality of the behavior. We have then compared our results to theoretical expectations within the Dynamical Mean Field Theory.
We argue that iridates are intermediately correlated systems, where the ARPES lineshape is formed by a continuity between incoherent Hubbard-like features and coherent QP-like excitations. In the more weakly correlated
compound Sr\({}_{3}\)Ir\({}_{2}\)O\({}_{7}\), a key change of the spectral lineshape at the magnetic transition is a broadening towards \(E_{F}\), which we attribute to a shift of the coherent weight towards \(E_{F}\). This result agrees with the closing of a coherent "Slater" gap at the magnetic transition expected by DMFT in the weakly correlated regime. This coherent weight coexists with incoherent features, centered at the position of Hubbard bands \(\pm\Delta/2\). In contrast, we find that the behavior of Sr\({}_{2}\)IrO\({}_{4}\) agrees more with the Hubbard-Mott scenario. The coherent part of the lineshape is small and no clear evolution is observed through T\({}_{N}\), as expected within the DMFT description. This decomposition of the spectra implies the existence of two energy scales, the large Mott-like gap \(\Delta\), essentially fixed by short-range magnetic correlations and the small coherent gap \(\delta_{S}\), fixed by the long range magnetic order. This is consistent with the absence of correlation between the large charge gap and the short-range magnetic order observed in Sr\({}_{2}\)IrO\({}_{4}\) by spin-polarized STM [8].
We further find that in the weakly correlated case, the closing of the small magnetic gap \(\delta_{S}\) redefines the position of the Fermi level in the middle of the Hubbard bands. This drives a shift of the whole spectrum towards this position, exactly starting and stopping over the width of the magnetic transition. This view is quite different from the Lifshitz-like transition that was proposed before, where the shift of the band would bring weight to the Fermi level [26]. In future experimental and theoretical investigations, the role of disorder, also induced by the chemical substitution, should be investigated to fully describe the lineshapes and their behavior as a function of doping and temperature. The width of the magnetic transition, as well as the spectral changes observed by STM near defects and/or dopants [9; 11; 52], implies that there is some degree of heterogeneity in these systems, which certainly plays a role in the absence of well resolved coherent peak in ARPES.
|
2309.01027 | Hausdorff limits of external rays: the topological picture | We study Hausdorff limits of the external rays of a given periodic angle
along a convergent sequence of polynomials of degree $d \geq 2$ with connected
Julia sets. | Carsten Lunde Petersen, Saeed Zakeri | 2023-09-02T21:32:28Z | http://arxiv.org/abs/2309.01027v2 | # Hausdorff limits of external rays: the topological picture
###### Abstract.
We study Hausdorff limits of the external rays of a given periodic angle along a convergent sequence of polynomials of degree \(d\geq 2\) with connected Julia sets.
###### Contents
* 1 Introduction
* 2 Background material
* 3 \(\mathcal{L}\)-arcs and their basic properties
* 4 Proof of Theorem A
* 5 Proof of Theorem B
* 6 Proof of Theorem C
* 7 Proof of Theorem D
## 1. Introduction
This paper investigates Hausdorff limits of the external rays of a given periodic angle along a convergent sequence of polynomials of degree \(\geq 2\) with connected Julia sets. This is a basic question in the context of _geometric limits_ of conformal dynamical systems, but it is particularly motivated by our work in [1] and its higher degree analogs where the limbs of connectedness loci are defined by patterns of co-landing rays and questions about such geometric limits arise naturally.
Let \(\mathcal{C}(d)\) be the connectedness locus of all monic polynomial maps \(\mathbb{C}\to\mathbb{C}\) of degree \(d\geq 2\). We denote the Julia set and filled Julia set of \(P\in\mathcal{C}(d)\) by \(J_{P}\) and \(K_{P}\). The external ray of \(P\) at angle \(\theta\in\mathbb{R}/\mathbb{Z}\) is denoted by \(R_{P\theta}\).
Consider a convergent sequence \(P_{n}\to P\) in \(\mathcal{C}(d)\). Fix an angle \(\theta\) which has period \(q\) under the endomorphism \(t\mapsto dt\;(\operatorname{mod}\mathbb{Z})\) of the circle. Let \(\zeta_{n}\) and \(\zeta\) be the landing points of the external rays \(R_{n}:=R_{P_{n}\theta}\) and \(R:=R_{P\theta}\). After passing to a subsequence, we may assume \(\zeta_{n}\to\zeta_{\infty}\) and \(\overline{R_{n}}:=R_{n}\cup\{\zeta_{n},\infty\}\to\mathcal{L}\) in the Hausdorff metric on
compact subsets of the Riemann sphere \(\hat{\mathbb{C}}\). Our primary goal is to analyze the possible structures for the continuum \(\mathcal{L}\). It is well-known that if the landing point \(\zeta\) of \(R\) is repelling, then \(\mathcal{L}=\overline{R}:=R\cup\{\zeta,\infty\}\). However, if \(\zeta\) is parabolic, then \(\mathcal{L}\) can be strictly larger than \(\overline{R}\), depending on the choice of perturbations \(P_{n}\). This phenomenon was observed as early as the 1990's, for example in the works of Goldberg and Milnor on the fixed point portraits of polynomials [**GM**], Oudkerk on the gate structure of near-parabolic points [**O**], and Douady and Lavaurs on parabolic implosion [**Do**, **L**] (compare Fig. 1). Facets of the phenomenon has also appeared in the work of Pilgrim and Tan Lei on spinning deformations of rational maps [**PT**].
It is easy to see that \(\mathcal{L}\) is a \(P^{\circ q}\)-invariant continuum containing \(\zeta\) and \(\zeta_{\infty}\), with \(\mathcal{L}\smallsetminus K_{P}=R\cup\{\infty\}\). The following lemma gives the main reduction for our analysis of \(\mathcal{L}\). It shows that \(\mathcal{L}\cap J_{P}\) is finite and \(\mathcal{L}\cap\hat{K}_{P}\) is a disjoint union of analytic arcs.
Figure 1. Examples of the Hausdorff limit \(\mathcal{L}\) of the closed external ray \(\overline{R_{n}}\) at angle \(\theta=0\). Left: perturbations of a cubic with a non-degenerate parabolic fixed point, where \(\mathcal{L}\cap K_{P}\) is an embedded arc; Middle: perturbations of the cubic \(z+z^{3}\) with a degenerate parabolic fixed point at \(0\), where \(\mathcal{L}\cap K_{P}\) is a loop; Right: perturbations of the quadratic map \(z+z^{2}\), where \(\mathcal{L}\cap K_{P}\) is a “Hawaiian earring.”
**Convention**. Throughout this paper by a _parabolic basin_ we mean a connected component of the immediate basin of attraction of a parabolic periodic point.
**Basic Structure Lemma**. _Let \(u\in\mathcal{L}\cap K_{P}\)._
_(i) If_ \(u\in J_{P}\)_, then_ \(P^{\circ q}(u)=u\)_._
_(ii) If_ \(u\in\mathring{K}_{P}\)_, then_ \(u\) _belongs to a parabolic basin_ \(B=P^{\circ q}(B)\) _and there is a simply connected neighborhood_ \(V\subset B\) _of_ \(u\) _and a biholomorphism_ \(\psi:V\to\{z\in\mathbb{C}:\operatorname{Re}(z)>0\}\)_, normalized by_ \(\psi(u)=1\)_, which satisfies_
\[\psi\circ P^{\circ q}=d^{q}\psi\qquad\text{in $V$}.\]
_The real analytic arc_ \(\gamma:]0,+\infty[\to V\) _defined by_ \(\gamma(t)=\psi^{-1}(t)\) _is contained in_ \(\mathcal{L}\) _and both limits_
\[w^{-}(\gamma)\coloneqq\lim_{t\to 0}\gamma(t)\qquad\text{and}\qquad w^{+}( \gamma)\coloneqq\lim_{t\to+\infty}\gamma(t)\]
_exist and are fixed under_ \(P^{\circ q}\)_, with_ \(w^{+}(\gamma)\) _parabolic of multiplier_ \(1\)_. If_ \(w^{+}(\gamma)\neq w^{-}(\gamma)\)_, the basin_ \(B\) _contains at least two critical points of_ \(P^{\circ q}\)_._
This is proved in SS3.1. The main idea is to extract Caratheodory limits of the pointed disks \((\mathring{\mathbb{C}}\setminus K_{P_{n}},u_{n})\), where \(u_{n}\to u\in\mathcal{L}\). Statements of a similar nature have appeared in the thesis of A. Deniz [**De**, Propositions 4.2.7 and 4.2.8] and, in a different but related context, in the work of Bonifant, Milnor and Sutherland on the relative Green's function [**BMS**, Lemma 3.4].
The Lemma shows that \(\mathcal{L}\cap\mathring{K}_{P}\) is partitioned into \(P^{\circ q}\)-invariant real analytic arcs in parabolic basins that have well-defined initial and end points in \(J_{P}\) and are naturally oriented by the dynamics. For simplicity, each such arc will be called an _\(\mathcal{L}\)-arc_. Every \(\mathcal{L}\)-arc \(\gamma\) is contained in an invariant strip \(V\) in which \(P^{\circ q}\) acts as \(z\mapsto d^{q}z\) (equivalently, a translation), and \(V\cap\mathcal{L}=\gamma\) (Lemma 3.2). In particular, \(\mathcal{L}\cap\mathring{K}_{P}\) is the disjoint union of at most countably many \(\mathcal{L}\)-arcs. We call \(\gamma\) a _heteroclinic_ arc if \(w^{-}(\gamma)\neq w^{+}(\gamma)\), and a _homoclinic_ arc if \(w^{-}(\gamma)=w^{+}(\gamma)\). A maximal nested chain of homoclinics is called an _earring_ (see Fig. 2). It is a finite or countably infinite nested collection of homoclinics, all sharing the same initial and end point \(w\). We often say that such an earring, or each of its homoclinic arcs, is _based at \(w\)_. An earring with infinitely many homoclinics is referred to as a _Hawaaiian earring_.
**Theorem A** (A trichotomy for \(\mathcal{L}\)). _We have one of the following possibilities:_
* _The tame case:_ \(\mathcal{L}=\overline{R}\coloneqq R\cup\{\zeta,\infty\}\)_. Then_ \(\zeta=\zeta_{\infty}\)_, and this point can be either repelling or parabolic._
* _The semi-wild case:_ \(\mathcal{L}\supset\overline{R}\) _and_ \(\zeta=\zeta_{\infty}\)_. Then there are no heteroclinic arcs in_ \(\mathcal{L}\)_, but_ \(\zeta=\zeta_{\infty}\) _is the endpoint of at least one homoclinic arc._
* _The wild case:_ \(\mathcal{L}\supseteq\overline{R}\) _and_ \(\zeta\neq\zeta_{\infty}\)_. Then the set of heteroclinic arcs in_ \(\mathcal{L}\) _is non-empty and finite. Moreover, we can label the heteroclinics as_ \(\gamma_{1},\ldots,\gamma_{N}\) _and the points of_ \(\mathcal{L}\cap\not{J}_{P}\) _as_ \(w_{0}=\zeta,w_{1},\ldots,w_{N}=\zeta_{\infty}\) _such that_ \[w^{+}(\gamma_{j})=w_{j-1}\quad\text{and}\quad w^{-}(\gamma_{j})=w_{j}\quad\text {for all $1\leq j\leq N$}.\]
Define the _spine_\(\mathcal{L}^{*}\) of \(\mathcal{L}\) as the union of \(\overline{R}\) together with the heteroclinic arcs \(\gamma_{1},\ldots,\gamma_{N}\) (if any) and their endpoints \(w_{0}=\zeta,w_{1},\ldots,w_{N}=\zeta_{\infty}\). If there are no heteroclinics (tame or semi-wild cases), then \(N=0\) and the spine reduces to \(\overline{R}\).
**Theorem B** (Anatomy of \(\mathcal{L}\)).: _Either \(\mathcal{L}=\mathcal{L}^{*}\) or \(\mathcal{L}\setminus\mathcal{L}^{*}\) is a union of finitely many earrings of homoclinic arcs based at \(w_{0},\ldots,w_{N}\). Each parabolic basin of \(w_{j}\) that meets \(\mathcal{L}\) contains either a unique heteroclinic arc or a unique earring of homoclinic arcs, but not both. Moreover, if \(N\geq 1\), any earring based at \(w_{0},\ldots,w_{N-1}\) must consist of a single homoclinic arc._
Compare Fig. \(3\).
The proof of Theorem A is carried out in three stages by verifying the following statements:
\(\bullet\) Each parabolic basin contains at most finitely many heteroclinics and at most two earrings of homoclinics (both counts will eventually be sharpened to at most one). The proof uses Fatou coordinates and a modulus argument (SS\(3\).3).
* \(\mathcal{L}\) and all its connected subsets are arcwise-connected and \(\mathcal{L}^{*}\) is a finite graph embedded in \(\hat{\mathbb{C}}\), with \(\mathcal{L}\cap\not{J}_{P}\) and \(\infty\) as its vertices and the heteroclinics and \(R\) as its edges (SS\(3\).4).
\(\bullet\) Every point of \(\mathcal{L}\cap\not{J}_{P}\) is the initial or end point of at most one heteroclinic or \(R\). As a result, the spine \(\mathcal{L}^{*}\) is a finite tree with vertices of degree \(1\) or \(2\), from which it easily follows that \(\mathcal{L}^{*}\) is topologically a closed arc (SS\(4\).2). The arguments here make use of the notion of _intrinsic potential order_ on \(\mathcal{L}\cap\hat{K}_{P}\), which can be described as follows: Let \(R_{n}(s)\) denote the point on the external ray \(R_{n}\) at Green's potential \(s>0\). If \(u,u^{\prime}\in\mathcal{L}\cap\hat{K}_{P}\), there are sequences of potential \(s_{n},s^{\prime}_{n}\to 0\) such that \(R_{n}(s_{n})\to u\) and \(R_{n}(s^{\prime}_{n})\to u^{\prime}\), and these sequences are well defined up to multiplication by sequences that tend to \(1\). We declare \(u<u^{\prime}\) if and only if \(\lim_{n\to\infty}s_{n}/s^{\prime}_{n}<1\). This puts a linear
Figure 2. From left to right: a heteroclinic arc, a homoclinic arc, and an earring based at \(w\). The arrows indicate the natural dynamical orientation.
order on the union of \(\mathcal{L}\)-arcs, where the above limit belongs to \(]0,1[\) if \(u,u^{\prime}\) are on the same \(\mathcal{L}\)-arc and is \(0\) otherwise (SS4.1).
The proof of Theorem B depends on the more delicate analysis of the order and structure of homoclinics that is carried out in SS5.4.
We say that two earrings of homoclinic arcs in \(\mathcal{L}\) are _equivalent_ if they belong to the same cycle of parabolic basins. Let
\[N :=\text{number of heteroclinic arcs in }\mathcal{L}\] \[M :=\text{number of earrings in }\mathcal{L}\] \[M^{\#} :=\text{number of equivalence classes of earrings in }\mathcal{L}.\]
In SS6 we prove
**Theorem C** (Bounding the complexity of \(\mathcal{L}\)).: _The following inequality holds:_
\[2N+M^{\#}\leq d-1. \tag{1.1}\]
_If \(\rho\) is the period of \(\zeta_{\infty}\) and \(\nu\) is the degeneracy order of \(\zeta_{\infty}\) as a fixed point of \(P^{\circ\dot{\rho}}\), then_
\[2N+M\leq d-1+\Big{(}\frac{q}{\dot{\rho}}-1\Big{)}\nu. \tag{1.2}\]
_In particular, if \(\zeta_{\infty}\) is either repelling so \(\nu=0\) or has top period \(\dot{\rho}=q\), then_
\[2N+M\leq d-1.\]
Thus, for quadratic polynomials the only possibilities for \((N,M^{\#})\) are \((0,0)\) which is tame, and \((0,1)\) which is semi-wild. For cubics, the only possibilities for \((N,M^{\#})\) are \((0,0)\) which is tame, \((0,1),(0,2)\) which are semi-wild, and \((1,0)\) which is wild. Compare Fig. 11 for an example of the case \((N,M^{\#})=(0,2)\).
If \(B=P^{\circ q}(B)\) is a parabolic basin that meets \(\mathcal{L}\), then by classical Fatou-Julia theory the union \(B\cup P(B)\cup\dots\cup P^{\circ q-1}(B)\) contains at least one critical point of \(P\). By the
Figure 3. Schematic picture of \(\mathcal{L}\), as in Theorem B. It consists of the spine \(\mathcal{L}^{*}\) (union of blue and red) and possible additional earring decorations (in gray). “Hawaiian earrings” with infinitely many loops can only occur at the far left point \(w_{N}=\zeta_{\infty}\).
Basic Structure Lemma, if \(B\) meets \(\mathcal{L}\) along a heteroclinic arc, this union contains at least two critical points of \(P\). Thus, to prove the bound (1.1) it suffices to show that the critical points designated this way are not shared between distinct heteroclinics or between a heteroclinic and an earring. While the former is rather easy to verify, the latter requires a more in-depth investigation, which is carried out in SS6. The proof of the second bound (1.2) depends on verifying that if \(N\geq 1\), the homoclinics based at \(w_{0},\ldots,w_{N-1}\) are in distinct equivalence classes (Theorem 6.7).
The bounds in Theorem C are optimal in the following sense. It is not hard to show that in any degree \(d\geq 2\) there is a sequence of perturbations \(P_{n}\) of \(P(z)=z+z^{d}\) for which the Hausdorff limit \(\mathcal{L}\) of the closed fixed rays \(\overline{R_{P_{n},0}}\) has \(N=0\) and \(M^{\#}=M=d-1\). In SS7 we construct sequences of polynomials in any odd degree \(d\geq 3\) for which the corresponding \(\mathcal{L}\) has \(N=(d-1)/2\) and \(M=0\):
**Theorem D** (Existence of maximally wild polynomials).: _For each \(N\geq 1\) there exist real numbers \(w_{N}=0<\cdots<w_{1}<w_{0}\) and a real monic polynomial \(P\) of degree \(d=2N+1\) which has a repelling fixed point at \(w_{N}=0\) and a parabolic fixed point of multiplier \(1\) at \(w_{j}\) with \(\operatorname{resit}(P,w_{j})<0\) for every \(0\leq j\leq N-1\). For \(\varepsilon>0\) sufficiently small, the perturbations \(P_{\varepsilon}:=P+\varepsilon\) are in \(\mathcal{C}(d)\) and the Hausdorff limit \(\mathcal{L}\) of the closed rays \(\overline{R_{P_{\varepsilon},0}}\) as \(\varepsilon\to 0\) consists of \(N\) heteroclinics along the real line:_
\[\mathcal{L}=\mathcal{L}^{*}=[0,+\infty]\,=[w_{N},w_{N-1}]\cup\cdots\cup[w_{1},w_{0}]\cup[w_{0},+\infty]\]
Here \(\operatorname{resit}(P,w_{j})\) is the _residu iteratif_ of \(P\) at the parabolic fixed point \(w_{j}\) (see SS2.2 for a brief account). Negativity of this invariant together with real symmetry ensures that \(w_{j}\) bifurcates into a pair of complex conjugate _attracting_ fixed points for \(P_{\varepsilon}\), which automatically forces the Julia set of \(P_{\varepsilon}\) and therefore \(P\) to be connected. If this pair is repelling, the ray \(R_{P_{\varepsilon}}(0)\) may well hit a critical point along the real line, and the Julia set of \(P_{\varepsilon}\) may well be disconnected. Fig. 4 shows a degree \(d=5\) example with \(N=2\), before and after perturbation.
This work represents an approach to the problem of Hausdorff limits of external rays that entirely avoids parabolic implosions and the gate structure of near-parabolic points. There are related analytic questions, however, for which invoking these tools seems inevitable. This is the subject of a joint project of us (in progress) that aims at a more quantitative understanding of how external rays behave as they approach a near-parabolic point. For example, in Theorem D the multipliers of the attracting fixed points bifurcating off of \(w_{0},\ldots,w_{N-1}\) tend to \(1\) tangentially (in fact horocyclically) as \(\varepsilon\to 0\). We provide an explanation for this phenomenon by proving that a non-tangential multiplier approach always produces tame convergence of external rays. As a simple formulation, suppose \(f(z)=\lambda\,z+O(z^{2})\in\mathcal{C}(d)\) has a non-degenerate parabolic fixed point at \(0\) whose multiplier \(\lambda\) is a primitive \(q\)-th root of unity, and consider a sequence \(f_{n}(z)=\lambda\,_{n}z+O(z^{2})\in\mathcal{C}(d)\) of perturbations of \(f\) where \(\lambda\,_{n}^{q}\to 1\) non-tangentially. If the ray \(R_{f,\theta}\) lands at \(0\), then \(R_{f_{n},\theta}\) lands at \(0\) or at a nearby
Figure 4: An example of a real degree 5 polynomial \(P\) (top) and its perturbations \(P_{\varepsilon}=P+\varepsilon\) (bottom) with the properties asserted by Theorem D. Under such perturbations the repelling fixed point \(w_{2}\) is stable but the parabolic fixed points \(w_{0},w_{1}\) bifurcate into pairs of complex conjugate attracting fixed points. The resulting four immediate attracting basins contain the four critical point of \(P_{\varepsilon}\) and share the repelling fixed point near \(w_{2}\) on their boundary.
period \(q\) point according as \(|\lambda_{n}|>1\) or \(|\lambda_{n}|<1\), and in either case \(\overline{R_{f_{n},\theta}}\to\overline{R_{f,\theta}}\) in the Hausdorff metric. The analytic tools of the theory of near-parabolic germs are perfectly suited for proving such statements, even in more general contexts that go beyond polynomial maps and external rays.
**Acknowledgments**. We thank H. Inou for his comments, especially on the example illustrated in Fig. 11. C. L. P. would like to thank the Danish Council for Independent Research | Natural Sciences for support via grant DFF-1026-00267B. S. Z. acknowledges the partial support of the Research Foundation of The City University of New York via grant TRADB-54-375.
## 2. Background material
Throughout the paper we adopt the following notation:
* \(\mathbb{C}\) and \(\hat{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}\): the complex plane and Riemann sphere
* \(\mathbb{D}(\varrho,r):=\{z\in\mathbb{C}:|z-\varrho|<r\}\), \(\mathbb{D}:=\mathbb{D}(0,1)\)
* \(\mathbb{H}^{r}:=\{z\in\mathbb{C}:\operatorname{Re}(z)>0\}\): the right half-plane
* \(\overline{X}\) and \(\hat{X}\): the closure and interior of \(X\)
* dist: the Euclidean distance in \(\mathbb{C}\)
* dist\({}_{X}\): the hyperbolic distance in a domain \(X\subset\hat{\mathbb{C}}\)
* For \(a,b>0\), the symbol \(a\asymp b\) means \(C^{-1}\leq a/b\leq C\) for a constant \(C>1\) independent of the choice of \(a,b\).
### Polynomial maps and external rays
We assume a working knowledge of basic complex dynamics, as in [**DH**] or [**M**]. Let \(\mathcal{P}(d)\) be the space of all monic polynomial maps \(\mathbb{C}\to\mathbb{C}\) of degree \(d\). For \(P\in\mathcal{P}(d)\) we denote by \(K=K_{P}\) and \(J=J_{P}\) the filled Julia set and Julia set of \(P\), respectively. The complement \(\Omega=\Omega_{P}:=\hat{\mathbb{C}}\setminus K\) is the basin of attraction of \(\infty\). The Green's function of \(P\) is the continuous subharmonic function \(G=G_{P}:\mathbb{C}\to[0,+\infty[\) defined by
\[G(z):=\lim_{n\to\infty}\frac{1}{d^{n}}\log^{+}|P^{\circ n}(z)|,\]
where \(\log^{+}t=\max\{\log t,0\}\). It satisfies the relation
\[G(P(z))=d\;G(z)\qquad\text{for all $z\in\mathbb{C}$},\]
with \(G(z)=0\) if and only if \(z\in K\). We often refer to \(G(z)\) as the _potential_ of \(z\). It is well known that the Green's function also depends continuously on the polynomial; in fact, the function \(\mathcal{P}(d)\times\mathbb{C}\to[0,+\infty[\) defined by \((P,z)\mapsto G_{P}(z)\) is continuous (see e.g. [**DH**, Proposition 8.1]).
The _Bottcher coordinate_ of \(P\) is the unique conformal isomorphism \(\beta=\beta_{P}\), defined in some neighborhood of \(\infty\), which is tangent to the identity at \(\infty\) and satisfies
\[\beta(P(z))=(\beta(z))^{d}\qquad\text{for large $|z|$.} \tag{2.1}\]
The modulus of \(\beta\) is related to the Green's function by the relation
\[\log|\beta(z)|=G(z)\qquad\text{for large $|z|$.}\]
For \(\theta\in\mathbb{R}/\mathbb{Z}\), we denote by \(R_{\theta}=R_{P,\theta}\) the maximally extended smooth field line of \(\nabla G\) which maps under \(\beta\) to the radial line \(s\mapsto\mathrm{e}^{s+2\pi\mathrm{i}\theta}\) for large \(s\). We can parametrize \(R_{\theta}\) by the potential \(s\): For each \(\theta\) there is an \(s_{\theta}=s_{P,\theta}\geq 0\) such that \(G(R_{\theta}(s))=s\) for all \(s>s_{\theta}\). The field line \(R_{\theta}\) either extends all the way to the Julia set \(\int\) in which case \(s_{\theta}=0\), or it crashes into a critical point \(\omega\) of \(G\) at potential \(s_{\theta}>0\) in the sense that \(\lim_{s\to s_{\theta}}R_{\theta}(s)=\omega\). The function \(\theta\mapsto s_{\theta}\) is upper semicontinuous [**PZ1**], so the set
\[V=V_{P}:=\{\mathrm{e}^{s+2\pi\mathrm{i}\theta}:s>s_{\theta}\}\cup\{\infty\}\]
is open. The inverse \(\beta^{-1}\) extends to \(V\), mapping it conformally to its image \(\Omega^{\prime}=\Omega^{\prime}_{P}\subset\Omega\). If the filled Julia set \(K\) is connected, then \(s_{\theta}=0\) for all \(\theta\), \(V=\mathbb{C}\smallsetminus\overline{\mathbb{D}}\), and \(\Omega^{\prime}=\Omega\). In this case \(\beta:\Omega\to\hat{\mathbb{C}}\smallsetminus\overline{\mathbb{D}}\) is a conformal isomorphism. More generally, it is not hard to verify that the unions \(\bigcup_{P\in\mathcal{P}(d)}\,(\{P\}\times\Omega^{\prime}_{P})\) and \(\bigcup_{P\in\mathcal{P}(d)}\,(\{P\}\times V_{P})\) are open and the global Bottcher coordinate \((P,z)\mapsto\beta_{P}(z)\) is a biholomorphism between them.
**Corollary 2.1**.: _Let \(P\in\mathcal{P}(d)\), \(\theta\in\mathbb{R}/\mathbb{Z}\), and \(s_{0}>s_{P,\theta}\). If \(P_{n}\to P\) in \(\mathcal{P}(d)\), then \(R_{P_{n}\theta}(s)\to R_{P,\theta}(s)\) uniformly for \(s\in[s_{0},+\infty[\)._
The following result on the stability of rays landing on repelling points can be found in [**DH**, Proposition 8.5] or [**GM**, Lemma B.1]:
**Theorem 2.2**.: _Suppose \(P_{0}\in\mathcal{P}(d)\) and the ray \(R_{P_{0},\theta}\) lands at a repelling periodic point \(\zeta_{0}\). Then for every neighborhood \(V\subset\hat{\mathbb{C}}\) of \(\zeta_{0}\) there is a neighborhood \(\mathcal{U}\subset\mathcal{P}(d)\) of \(P_{0}\) such that if \(P\in\mathcal{U}\) the ray \(R_{P,\theta}\) lands at a repelling periodic point of \(P\) in \(V\)._
### Local invariants of parabolic points
The following brief presentation will be useful in SS7. For more details, see [**M**], [**B**], and [**BE**]. Let \(z_{0}\) be an isolated fixed point of a holomorphic map \(f\), so \(f(z)=z_{0}+\lambda\,(z-z_{0})+O((z-z_{0})^{2})\) in some neighborhood of \(z_{0}\). Here \(\lambda=f^{\prime}(z_{0})\) is the _multiplier_ of the fixed point \(z_{0}\). The _index_\(\iota(f,z_{0})\) is defined as the residue of the meromorphic \(1\)-form \(dz/(z-f(z))\) at \(z_{0}\):
\[\iota(f,z_{0}):=\operatorname{res}\left(\frac{1}{z-f(z)}\;dz,z_{0}\right).\]
The index is invariant under analytic change of coordinates. Moreover,
\[\iota(f,z_{0})=\frac{1}{1-\lambda}\qquad\text{if $\lambda\neq 1$.} \tag{2.2}\]
There is a variant of the notion of index which is well behaved under iteration and (despite its more complicated definition) is often easier to work with. Define the _residu iteratif_ of \(f\) at \(z_{0}\) by
\[\operatorname{\mathsf{resit}}(f,z_{0}):=-\frac{1}{2}\operatorname{res}\left( \frac{1+f^{\prime}(z)}{z-f(z)}\,dz,z_{0}\right).\]
By the argument principle,
\[\operatorname{\mathsf{resit}}(f,z_{0}) =\frac{1}{2}\operatorname{res}\left(\frac{1-f^{\prime}(z)}{z-f(z) }\,dz,z_{0}\right)-\operatorname{res}\left(\frac{1}{z-f(z)}\,dz,z_{0}\right)\] \[=\frac{\mathpzc{m}}{2}-\iota(f,z_{0}),\]
where \(\mathpzc{m}\geq 1\) is the fixed point multiplicity of \(z_{0}\), i.e., the multiplicity of \(z_{0}\) as a root of \(z-f(z)=0\). Evidently,
\[\operatorname{\mathsf{resit}}(f,z_{0})=\frac{1}{2}-\frac{1}{1-\lambda}\qquad \text{if }\lambda\neq 1. \tag{2.3}\]
This shows that a multiplicity \(1\) fixed point \(z_{0}\) is attracting or repelling according as \(\operatorname{Re}(\operatorname{\mathsf{resit}}(f,z_{0}))\) is negative or positive.
When \(\lambda=1\) the formulas (2.2) and (2.3) for the index and residu iteratif break down, but we can still calculate these invariants with the help of a suitable expansion of \(f\). Assuming \(z_{0}\) has multiplier \(1\) and fixed point multiplicity \(\mathpzc{m}\geq 2\), there is an analytic change of coordinates in which \(f\) assumes the local normal form
\[f(z)=z+a(z-z_{0})^{\mathpzc{m}}+b(z-z_{0})^{2\mathpzc{m}-1}+O((z-z_{0})^{2 \mathpzc{m}})\]
with \(a,b\in\mathbb{C}\) and \(a\neq 0\). An easy computation then shows that
\[\iota(f,z_{0})=\frac{b}{a^{2}}\qquad\text{so}\qquad\operatorname{\mathsf{resit }}(f,z_{0})=\frac{\mathpzc{m}}{2}-\frac{b}{a^{2}}. \tag{2.4}\]
For a generic perturbation \(f_{\varepsilon}\) of \(f\) the parabolic fixed point \(z_{0}\) splits into \(\mathpzc{m}\) simple fixed points \(z_{1}(\varepsilon),\ldots z_{\mathpzc{m}}(\varepsilon)\) of multipliers \(\lambda_{1}(\varepsilon),\ldots,\lambda_{\mathpzc{m}}(\varepsilon)\). Continuity of the index then gives \(\lim_{\varepsilon\to 0}\sum_{j=1}^{\mathpzc{m}}1/(1-\lambda_{j}( \varepsilon))=\iota(f,z_{0})\), or, in terms of the residu iteratif,
\[\lim_{\varepsilon\to 0}\sum_{j=1}^{\mathpzc{m}}\operatorname{\mathsf{resit}}(f_{ \varepsilon},z_{j}(\varepsilon))=\operatorname{\mathsf{resit}}(f,z_{0}).\]
When \(z_{0}\) is parabolic with multiplier \(\lambda\) a primitive \(q\)-th root of unity, we can apply the preceding remarks to the iterate \(f^{\circ q}\). In this case the multiplicity of \(z_{0}\) as a fixed point of \(f^{\circ q}\) is necessarily of the form \(\mathpzc{m}=\nu q+1\) for some integer \(\nu\geq 1\) called the _degeneracy order_ of \(z_{0}\), the case \(\nu=1\) being considered a non-degenerate parabolic. Geometrically, there are \(\mathpzc{m}-1=\nu q\) parabolic basins of \(f\) attached to \(z_{0}\), and these fall into \(\nu\) disjoint cycles of length \(q\). _We adopt the convention that the degeneracy order of a repelling fixed point is \(\nu=0\)._
### The Hausdorff metric
Let \(\mathcal{K}\) be the space of all non-empty compact subsets of the Riemann sphere \(\hat{\mathbb{C}}\). The _Hausdorff metric_ on \(\mathcal{K}\) is defined by
\[\mathbf{d}(K,H):=\inf\{\varepsilon>0:K\subset N_{\varepsilon}(H)\text{ and }H \subset N_{\varepsilon}(K)\},\]
where \(N_{\varepsilon}(\cdot)\) denotes the \(\varepsilon\)-neighborhood in the spherical metric. It is well known that \((\mathcal{K},\mathbf{d})\) is a compact metric space.
Let \(\{K_{t}\}_{t\in T}\) be a family of non-empty compact sets in \(\hat{\mathbb{C}}\) parametrized by a topological space \(T\). By the definition of \(\mathbf{d}\), continuity of the map \(t\mapsto K_{t}\) at \(t_{0}\in T\) means that for every \(\varepsilon>0\) there is a neighborhood \(V\) of \(t_{0}\) such that \(K_{t}\subset N_{\varepsilon}(K_{t_{0}})\) and \(K_{t_{0}}\subset N_{\varepsilon}(K_{t})\) for all \(t\in V\). This can be viewed as a combination of two semi-continuity conditions defined as follows. We say \(t\mapsto K_{t}\) is _upper semicontinuous_ at \(t_{0}\) if for every \(\varepsilon>0\) there is a neighborhood \(V\) of \(t_{0}\) such that
\[K_{t}\subset N_{\varepsilon}(K_{t_{0}})\quad\text{for all }t\in V,\]
and is _lower semicontinuous_ at \(t_{0}\) if for every \(\varepsilon>0\) there is a neighborhood \(V\) of \(t_{0}\) such that
\[K_{t_{0}}\subset N_{\varepsilon}(K_{t})\quad\text{for all }t\in V.\]
The following result can be found in [Do]:
**Theorem 2.3** (Douady).: _For every \(d\geq 2\) the maps \(\mathcal{P}(d)\to\mathcal{K}\) defined by \(P\mapsto K_{P}\) and \(P\mapsto J_{P}\) are upber semicontinuous and lower semicontinuous, respectively._
### Caratheodory limits of pointed disks
By a _disk_ in the plane is meant a simply connected domain \(U\subset\mathbb{C}\) other than \(\mathbb{C}\) itself. A _pointed disk_\((U,u)\) consists of a disk \(U\) and the choice of a base point \(u\in U\). By the Riemann mapping theorem there is a unique conformal isomorphism \(f:(\mathbb{D},0)\xrightarrow{\cong}(U,u)\) normalized so that \(f(0)=u\) and \(f^{\prime}(0)>0\). An easy exercise, based on the Schwarz lemma and the Koebe \(1/4\)-theorem, shows that
\[1\leq\frac{f^{\prime}(0)}{\operatorname{dist}(u,\partial U)}\leq 4. \tag{2.5}\]
We say that a sequence of pointed disks \((U_{n},u_{n})\) converges to \((U,u)\)_in the sense of Caratheodory_, and write \((U_{n},u_{n})\xrightarrow{\cong}(U,u)\), if the sequence of normalized Riemann maps \(f_{n}:(\mathbb{D},0)\to(U_{n},u_{n})\) converges locally uniformly to the normalized Riemann map \(f:(\mathbb{D},0)\to(U,u)\). Equivalently, if the sequence \(f_{n}^{-1}\circ f\) converges locally uniformly in \(\mathbb{D}\) to the identity map. Notice that in this case every compact subset of \(U\) must be contained in \(U_{n}\) for all sufficiently large \(n\).
The Caratheodory convergence can be formulated in purely topological terms without any reference to Riemann maps as follows: \((U_{n},u_{n})\xrightarrow{\cong}(U,u)\) if and only if \(u_{n}\to u\) and for every subsequential Hausdorff limit \(K\) of \(\hat{\mathbb{C}}\smallsetminus U_{n}\), the disk \(U\) is the connected component of \(\hat{\mathbb{C}}\smallsetminus K\) containing \(u\).
The next two lemmas are easy consequences of the definition of Caratheodory convergence:
**Lemma 2.4**.: _Suppose \((U_{n},u_{n})\to(U,u)\). Then \(u^{\prime}_{n}\to u\) if and only if \(\operatorname{dist}_{U_{n}}(u_{n},u^{\prime}_{n})\to 0\)._
Proof.: Take the normalized Riemann maps \(f_{n}:(\mathbb{D},0)\to(U_{n},u_{n})\) and \(f:(\mathbb{D},0)\to(U,u)\), so \(g_{n}\coloneqq f_{n}^{-1}\circ f\to\operatorname{id}\) locally uniformly in \(\mathbb{D}\). First suppose \(u^{\prime}_{n}\to u\). Set \(z_{n}\coloneqq f^{-1}(u^{\prime}_{n}),w_{n}\coloneqq f_{n}^{-1}(u^{\prime}_{n})\). Then \(z_{n}\to 0\) and \(g_{n}(z_{n})=w_{n}\), so \(w_{n}\to 0\). It follows that \(\operatorname{dist}_{U_{n}}(u_{n},u^{\prime}_{n})=\operatorname{dist}_{ \mathbb{D}}(0,w_{n})\to 0\). Conversely, suppose \(\operatorname{dist}_{U_{n}}(u_{n},u^{\prime}_{n})\to 0\). Then, with \(z_{n},w_{n}\) defined as above, we have \(w_{n}\to 0\). Since \(g_{n}(z_{n})=w_{n}\) and \(g_{n}\to\operatorname{id}\), we must have \(z_{n}\to 0\), so \(u^{\prime}_{n}\to u\).
**Lemma 2.5**.: _Suppose \((U_{n},u_{n})\to(U,u)\) and \((U^{\prime}_{n},u^{\prime}_{n})\to(U^{\prime},u^{\prime})\). For each \(n\) take a conformal isomorphism \(\psi_{n}:(U_{n},u_{n})\to(U^{\prime}_{n},u^{\prime}_{n})\). Then some subsequence of \(\{\psi_{n}\}\) converges locally uniformly in \(U\) to a conformal isomorphism \(\psi:(U,u)\to(U^{\prime},u^{\prime})\)._
Proof.: Take the normalized Riemann maps \(f_{n}:(\mathbb{D},0)\to(U_{n},u_{n})\), \(f:(\mathbb{D},0)\to(U,u)\), \(g_{n}:(\mathbb{D},0)\to(U^{\prime}_{n},u^{\prime}_{n})\), and \(g:(\mathbb{D},0)\to(U^{\prime},u^{\prime})\), so \(f_{n}\to f\) and \(g_{n}\to g\) locally uniformly in \(\mathbb{D}\). The disk automorphisms \(\sigma_{n}\coloneqq g_{n}^{-1}\circ\psi_{n}\circ f_{n}\) fix the origin, so after passing to a subsequence, \(\sigma_{n}\to\sigma\in\operatorname{Aut}(\mathbb{D})\). It follows that the corresponding subsequence of \(\{\psi_{n}\}\) converges to \(\psi:=g\circ\sigma\circ f^{-1}:(U,u)\to(U^{\prime},u^{\prime})\).
The space of all pointed disks has the following form of compactness:
**Lemma 2.6**.: _Every sequence \((U_{n},u_{n})\) with \(u_{n}\to u\) and \(\operatorname{dist}(u_{n},\partial U_{n})\asymp 1\) has a subsequence which converges to some \((U,u)\) in the sense of Caratheodory._
Proof.: Consider the normalized Riemann maps \(f_{n}:(\mathbb{D},0)\to(U_{n},u_{n})\). Then \(f_{n}(0)\to u\) and \(f^{\prime}_{n}(0)\asymp 1\) by (2.5). Since the space of conformal maps \(\mathbb{D}\hookrightarrow\mathbb{C}\) modulo the action of the affine group \(\operatorname{Aut}(\mathbb{C})\) is compact, it follows that \(\{f_{n}\}\) has a subsequence which converges locally uniformly to the normalized Riemann map \(f:(\mathbb{D},0)\to(U,u)\).
The following useful result describes how changing the base point can affect Caratheodory convergence:
**Theorem 2.7**.: _Suppose \((U_{n},u_{n})\to(U,u)\), and take any sequence \(u^{\prime}_{n}\in U_{n}\) with \(u^{\prime}_{n}\to u^{\prime}\)._
1. _If_ \(\operatorname{dist}_{U_{n}}(u_{n},u^{\prime}_{n})\) _is bounded, after passing to a subsequence we have_ \((U_{n},u^{\prime}_{n})\to(U,u^{\prime})\) _(in particular,_ \(u^{\prime}\in U\)_)._
2. _If_ \(\operatorname{dist}_{U_{n}}(u_{n},u^{\prime}_{n})\to+\infty\) _and_ \(\operatorname{dist}(u^{\prime}_{n},\partial U_{n})\asymp 1\)_, after passing to a subsequence we have_ \((U_{n},u^{\prime}_{n})\to(V,u^{\prime})\)_, where_ \(V\cap U=\emptyset\) _(in particular,_ \(u^{\prime}\notin U\)_)._
Fig. 5 illustrates the two cases.
Proof.: As usual, consider the normalized Riemann maps \(f_{n}:(\mathbb{D},0)\to(U_{n},u_{n})\) and \(f:(\mathbb{D},0)\to(U,u)\).
(i) The normalized Riemann map \(g_{n}:(\mathbb{D},0)\to(U_{n},u^{\prime}_{n})\) is of the form \(f_{n}\circ\psi_{n}\), where \(\psi_{n}\in\operatorname{Aut}(\mathbb{D})\) sends \(0\) to \(w_{n}\coloneqq f_{n}^{-1}(u^{\prime}_{n})\). We have \(\sup_{n}|w_{n}|<1\) since by the hypothesis \(\operatorname{dist}_{\mathbb{D}}(0,w_{n})=\operatorname{dist}_{U_{n}}(u_{n},u^{ \prime}_{n})\) is bounded. It follows that the \(\psi_{n}\) lie in a compact subset of \(\operatorname{Aut}(\mathbb{D})\). So, after passing to a subsequence, \(\psi_{n}\) converges to some \(\psi\in\operatorname{Aut}(\mathbb{D})\). Thus, \(g_{n}\to f\circ\psi\) and therefore \((U_{n},u^{\prime}_{n})\longrightarrow(U,u^{\prime})\), with \(u^{\prime}=f(\psi(0))\).
(ii) By Lemma 2.6, the assumption \(\operatorname{dist}(u^{\prime}_{n},\partial U_{n})\asymp 1\) guarantees that after passing to a subsequence \((U_{n},u^{\prime}_{n})\longrightarrow(V,u^{\prime})\) for some disk \(V\). Suppose \(V\cap U\neq\emptyset\) and pick some \(\zeta\in V\cap U\). Then \(\zeta\in U_{n}\) for all large \(n\) and we have \(f_{n}^{-1}(\zeta)\to f^{-1}(\zeta)\). Hence \(\operatorname{dist}_{U_{n}}(u_{n},\zeta)=\operatorname{dist}_{\mathbb{D}}(0,f_ {n}^{-1}(\zeta))\) is bounded. Similarly, if \(g_{n}:(\mathbb{D},0)\to(U_{n},u^{\prime}_{n})\) and \(g:(\mathbb{D},0)\to(V,u^{\prime})\) denote the corresponding normalized Riemann maps, then \(g_{n}^{-1}(\zeta)\to g^{-1}(\zeta)\), so \(\operatorname{dist}_{U_{n}}(u^{\prime}_{n},\zeta)=\operatorname{dist}_{ \mathbb{D}}(0,g_{n}^{-1}(\zeta))\) is bounded. The two bounds together imply \(\operatorname{dist}_{U_{n}}(u_{n},u^{\prime}_{n})\) being bounded, which is a contradiction.
**Lemma 2.8**.: _Suppose \((U_{n},u_{n})\longrightarrow(U,u)\). Let \(V_{n}\) be a proper subdisk of \(U_{n}\) containing \(u_{n}\) and \(r_{n}\) be the radius of the largest hyperbolic ball in \(U_{n}\) centered at \(u_{n}\) that is contained in \(V_{n}\). If \(r_{n}\to+\infty\), then \((V_{n},u_{n})\longrightarrow(U,u)\)._
Proof.: Take the normalized Riemann maps \(f_{n}:(\mathbb{D},0)\to(U_{n},u_{n})\) and \(g_{n}:(\mathbb{D},0)\to(V_{n},u_{n})\). By the assumption \(f_{n}\) converges locally uniformly to the normalized Riemann map \(f:(\mathbb{D},0)\to(U,u)\). The domain \(V^{\prime}_{n}:=f_{n}^{-1}(V_{n})\) contains the round disk centered at \(0\) of radius \((\operatorname{e}^{r_{n}}-1)/(\operatorname{e}^{r_{n}}+1)\to 1\). It follows from the Schwarz lemma that \(f_{n}^{-1}\circ g_{n}:(\mathbb{D},0)\to(V^{\prime}_{n},0)\) tends to the identity map, and we conclude that \(g_{n}\to f\) locally uniformly in \(\mathbb{D}\).
Figure 5. Illustration of the two cases of Caratheodory convergence in Theorem 2.7.
It is well known that in a given disk \(U\subset\mathbb{C}\) the Euclidean diameter of the hyperbolic ball of fixed radius \(r\) tends to \(0\) as the center of the ball converges to \(\partial U\). The next corollary gives a uniform version of this statement, and is an easy consequence of the following bounds (see [Po, Corollary 1.5] for a slightly modified form): If \(U\subset\mathbb{C}\) is a disk, \(u,u^{\prime}\in U\), and \(r:=\operatorname{dist}_{U}(u,u^{\prime})\), then
\[\frac{1}{4}\tanh\left(\frac{r}{2}\right)\leq\frac{|u-u^{\prime}|}{\operatorname {dist}(u,\partial U)}\leq 4\exp(2r).\]
**Corollary 2.9**.: _Fix \(r>0\). Take any sequence of pointed disks \((U_{n},u_{n})\) and let \(\delta_{n}\) be the Euclidean diameter of the hyperbolic \(r\)-ball in \(U_{n}\) centered at \(u_{n}\). Then, \(\delta_{n}\to 0\) if and only if \(\operatorname{dist}(u_{n},\partial U_{n})\to 0\) as \(n\to\infty\)._
Caratheodory convergence can of course be defined for pointed disks in the Riemann sphere \(\hat{\mathbb{C}}\) in the same manner. For our purposes the main examples of such disks are basins of infinity of polynomials with connected Julia sets:
**Example 2.10**.: Suppose \(P_{n}\to P\) in \(\mathcal{C}(d)\). Then \(\beta_{P_{n}}^{-1}\to\beta_{P}^{-1}\) locally uniformly in \(\hat{\mathbb{C}}\smallsetminus\overline{\mathbb{D}}\), hence \((\Omega_{P_{n}},\infty)\rightharpoonup(\Omega_{P},\infty)\).
All the lemmas in this subsection remain valid for the Caratheodory convergence of disks containing \(\infty\) except that the Euclidean condition \(\operatorname{dist}(u_{n},\partial U_{n})\asymp 1\) needed for compactness must be modified in terms of the spherical metric. For example, we could require that both the spherical distance between \(u_{n}\) and \(\partial U_{n}\) and the spherical diameter of \(\partial U_{n}\) be bounded away from \(0\). However, in our main examples where \(U_{n}\) is the basin of infinity of a polynomial \(P_{n}\in\mathcal{C}(d)\) and \(u_{n}\in U_{n}\) converges in \(\mathbb{C}\), this spherical condition is actually equivalent to \(\operatorname{dist}(u_{n},U_{n})\asymp 1\), so we will use the simpler Euclidean formulation without further warning.
## 3. \(\mathcal{L}\)-arcs and their basic properties
Recall that we are considering a convergent sequence \(P_{n}\to P\) in \(\mathcal{C}(d)\), \(\theta\in\mathbb{R}/\mathbb{Z}\) is a given angle with \(d^{q}\theta=\theta\ (\operatorname{mod}\mathbb{Z})\), \(\zeta_{n}\) and \(\zeta\) are the landing points of the external rays \(R_{n}:=R_{P_{n},\theta}\) and \(R:=R_{P,\theta}\), \(\zeta_{n}\to\zeta_{\infty}\) and \(\overline{R}_{n}:=R_{n}\cup\{\zeta_{n},\infty\}\to\mathcal{L}\) in the Hausdorff metric.
**Convention**.: In what follows we will use unsubscripted symbols for objects associated with \(P\) and symbols with a subscript \(n\) for those associated with \(P_{n}\).
It is clear that \(\mathcal{L}\) is a compact connected subset of \(\hat{\mathbb{C}}\) that satisfies \(P^{\circ q}(\mathcal{L})=\mathcal{L}\). Moreover, by Corollary 2.1 for every potential \(s_{0}>0\) we have \(R_{n}(s)\to R(s)\) uniformly on \([s_{0},+\infty[\), which shows the ray segment \(\{R(s):s\geq s_{0}\}\) is contained in \(\mathcal{L}\). Since this is true for every \(s_{0}\), it follows that \(\mathcal{L}\) contains \(R\cup\{\zeta,\zeta_{\infty},\infty\}\). Thus, if \(\mathcal{L}=\overline{R}=R\cup\{\zeta,\infty\}\), then \(\zeta_{\infty}\) must coincide with \(\zeta\) (this is the "tame case" in Theorem A). Take any \(u\in\mathcal{L}\smallsetminus K\) in \(\mathbb{C}\) and any sequence \(u_{n}\in R_{n}\) such that \(u_{n}\to u\). By the
joint continuity of the Green's function (see SS2.1) we have \(G_{n}(u_{n})\to G(u)>0\). By Corollary 2.1\(u_{n}=R_{n}(G_{n}(u_{n}))\to R(G(u))\), which shows \(u=R(G(u))\). It follows that \(\mathcal{L}\setminus K=R\cup\{\infty\}\).
### Proof of the Basic Structure Lemma
First suppose \(u\in\mathcal{L}\cap J\). Take any sequence \(u_{n}\in R_{n}\) such that \(u_{n}\to u\). Then \(s_{n}:=G_{n}(u_{n})\to 0\). By the lower semicontinuity of \(P\mapsto J_{P}\) (Theorem 2.3), \(\operatorname{dist}(u_{n},J_{n})\to 0\). Since
\[\operatorname{dist}_{\Omega_{n}}(u_{n},P_{n}^{\circ q}(u_{n}))\leq\operatorname {dist}_{\mathbb{C}\setminus\bar{\mathbb{D}}}(\mathrm{e}^{s_{n}+2\pi\mathrm{i} \theta},\mathrm{e}^{d^{q}s_{n}+2\pi\mathrm{i}\theta})=q\log d,\]
it follows from Corollary 2.9 that \(|u_{n}-P_{n}^{\circ q}(u_{n})|\to 0\), implying \(P^{\circ q}(u)=u\). This proves part (i) of the Basic Structure Lemma.
Now suppose \(u\in\mathcal{L}\cap\hat{K}\). It suffices to prove the claims in part (ii) of the Lemma under the additional hypothesis \(P^{\circ q}(u)\neq u\). Once this is accomplished, this additional hypothesis can be removed easily. In fact, by connectivity of \(\mathcal{L}\) we can always find some \(u^{\prime}\in\mathcal{L}\cap\hat{K}\) arbitrarily close to \(u\) for which \(P^{\circ q}(u^{\prime})\neq u^{\prime}\) and conclude that \(u^{\prime}\) belongs to a parabolic basin. This proves that \(u\) must be in a parabolic basin and therefore \(P^{\circ q}(u)\neq u\).
So let us assume \(u\in\mathcal{L}\cap\hat{K}\) and \(P^{\circ q}(u)\neq u\). By an argument similar to above, no subsequence of \(\operatorname{dist}(u_{n},J_{n})\) can tend to \(0\). In other words, \(\operatorname{dist}(u_{n},J_{n})\asymp 1\). On the other hand, \((\Omega_{n},\infty)\longrightarrow(\Omega,\infty)\) (Example 2.10), and
\[\operatorname{dist}_{\Omega_{n}}(u_{n},\infty)=\log\left(\frac{\mathrm{e}^{s_{ n}}+1}{\mathrm{e}^{s_{n}}-1}\right)\to+\infty\]
since \(s_{n}\to G(u)=0\). Hence, by Theorem 2.7(ii), after passing to a subsequence we may assume \((\Omega_{n},u_{n})\longrightarrow(V,u)\) for some disk \(V\subset K\). It is not hard to see that \(P^{\circ q}|_{V}:V\to V\) is a proper map (see for example [1, Theorem 5.6]). Since \(P_{n}\in\mathcal{C}(d)\), the basin of infinity \(\Omega_{n}\) does not contain any critical point of \(P_{n}\), so \(V\) does not contain any critical point of \(P\). Thus, \(P^{\circ q}|_{V}:V\to V\) is a conformal isomorphism.
Let "\(\log\)" denote the branch of logarithm which maps the slit-plane \(\mathbb{C}\setminus\)\(-\infty,0]\) conformally onto the strip \(\{z\in\mathbb{C}:|\operatorname{Im}(z)|<\pi\}\). The map
\[\psi_{n}(z):=\frac{1}{s_{n}}\log\left(\mathrm{e}^{-2\pi\mathrm{i}\theta}\beta_ {n}(z)\right)\]
carries the slit-basin \(V_{n}:=\Omega_{n}\setminus R_{n}(\theta+1/2)\) conformally onto the horizontal half-strip
\[S_{n}:=\left\{z\in\mathbb{C}:\operatorname{Re}(z)>0,|\operatorname{Im}(z)|< \frac{\pi}{s_{n}}\right\},\]
with \(\psi_{n}(u_{n})=1\). The subdomain \(V_{n}^{\prime}\subset V_{n}\) defined by
\[V_{n}^{\prime}:=\beta_{n}^{\,-1}\left\{r\mathrm{e}^{2\pi\mathrm{i}t}:r>1,|t- \theta|<\frac{1}{2d^{q}}\right\}\]
maps conformally under \(\psi_{n}\) onto the half-strip
\[S^{\prime}_{n}:=\left\{z:\operatorname{Re}(z)>0,|\operatorname{Im}(z)|<\frac{\pi} {d^{q}s_{n}}\right\}.\]
Moreover, if \(z\in V^{\prime}_{n}\),
\[\psi_{n}(P^{\circ q}_{n}(z)) =\frac{1}{s_{n}}\log\left(\operatorname{e}^{-2\pi\mathrm{i}\theta }\beta_{n}(P^{\circ q}_{n}(z))\right)=\frac{1}{s_{n}}\log\left(\operatorname{ e}^{-2\pi\mathrm{i}\theta}(\beta_{n}(z))^{d^{q}}\right)\] \[=\frac{1}{s_{n}}\log\left(\left(\operatorname{e}^{-2\pi\mathrm{i} \theta}\beta_{n}(z)\right)^{d^{q}}\right)\qquad\qquad(\text{since }d^{q}\theta=\theta\ ( \operatorname{mod}\mathbb{Z}))\] \[=\frac{d^{q}}{s_{n}}\log\left(\operatorname{e}^{-2\pi\mathrm{i} \theta}\beta_{n}(z)\right)=d^{q}\psi_{n}(z).\]
This gives the commutative diagram
(3.1)
**Lemma 3.1**.: _There is a subsequence of \(\{\psi_{n}\}\) which converges locally uniformly to a conformal isomorphism \(\psi:V\to\mathbb{H}^{r}\) normalized by \(\psi(u)=1\), and the following diagram commutes:_
(3.2)
Proof.: It is easy to see that \((S_{n},1)\to\left(\mathbb{H}^{r},1\right)\) and \((S^{\prime}_{n},1)\to\left(\mathbb{H}^{r},1\right)\). By Lemma 2.8, \((V_{n},u_{n})\to\left(V,u\right)\) and \((V^{\prime}_{n},u_{n})\to\left(V,u\right)\). It follows from Lemma 2.5 that some subsequence of \(\{\psi_{n}\}\) converges locally uniformly in \(V\) to a conformal isomorphism \(\psi:(V,u)\to(\mathbb{H}^{r},1)\). Taking the limit of (3.1) then shows that \(\psi\) satisfies (3.2).
Define \(\gamma:]0,+\infty[\to V\) by \(\gamma(t):=\psi^{-1}(t)\). Evidently \(\gamma\) satisfies \(P^{\circ q}(\gamma(t))=\gamma(d^{q}t)\). For simplicity we use \(\gamma\) both for the map and the arc \(\gamma(]0,+\infty[)\subset\mathbb{C}\).
**Lemma 3.2**.: \(\gamma=\mathcal{L}\cap V\)_._
Proof.: Let \(\psi_{n}:V_{n}\to S_{n}\) and \(\psi:V\to\mathbb{H}^{r}\) be as in Lemma 3.1. First suppose \(z\in\mathcal{L}\cap V\). Take a compact neighborhood \(E\) of \(z\) such that \(E\subset V\) and therefore \(E\subset V_{n}\) for all large \(n\). Take a sequence \(z_{n}\in R_{n}\) such that \(z_{n}\to z\), so \(z_{n}\in E\) for all large \(n\). The uniform convergence \(\psi_{n}\to\psi\) on \(E\) implies \(\psi_{n}(z_{n})\to\psi(z)\). Since \(\psi_{n}(z_{n})\in\mathbb{R}\) for all \(n\), we conclude that \(\psi(z)\in\mathbb{R}\), so \(z\in\gamma\).
For the reverse inclusion, take \(z=\gamma(t)\) for a given \(t>0\). Since \(z\in V\) we have \(z\in V_{n}\) for all large \(n\) and \(\zeta_{n}:=\psi_{n}(z)\to\psi(z)=t\). It follows that \(t_{n}:=\operatorname{Re}(\zeta_{n})\to t\)
The point \(z_{n}:=\psi_{n}^{-1}(t_{n})\) is in \(V_{n}\cap R_{n}\) and
\[\operatorname{dist}_{V_{n}}(z_{n},z)=\operatorname{dist}_{S_{n}}(t_{n},\zeta_{n}) \to 0\]
It follows that \(z_{n}\to z\) in the Euclidean metric. Since \(z_{n}\in R_{n}\), we conclude that \(z\in\mathcal{L}\).
A standard hyperbolic geometry argument (see e.g. [**M**, Lemma 5.5]) shows that both limits \(w^{+}:=\lim_{t\to+\infty}\gamma(t)\) and \(w^{-}:=\lim_{t\to 0}\gamma(t)\) exist and are fixed under \(P^{\circ q}\). By Lemma 3.2, \(w^{\pm}\in\mathcal{L}\). By the Snail Lemma ([**M**, Lemma 16.2]) \(w^{+}\) is either attracting or parabolic with multiplier \(1\). The first case cannot happen: If \(w^{+}\) were attracting, a full neighborhood of \(w^{+}\) would be contained in an attracting basin of \(P_{n}\) for all large \(n\)[**Do**, Lemma 6.3]. Clearly this is impossible since \(w^{+}\in\mathcal{L}\) must be accumulated by the rays \(R_{n}\). Thus, \(w^{+}\) is parabolic with multiplier \(1\). This proves that \(V\) and therefore \(\gamma(]0,+\infty[)\) is contained in a parabolic basin \(B\) of \(w^{+}\) and \(P^{\circ q}:B\to B\) is a proper map of some degree \(k\). If \(\phi:B\to\mathbb{D}\) is any conformal isomorphism, the induced map \(f:=\phi\circ P^{\circ q}\circ\phi^{-1}:\mathbb{D}\to\mathbb{D}\) is a Blaschke product of degree \(k\). If \(w^{+}\neq w^{-}\), then \(\varphi(w^{+})\) and \(\varphi(w^{-})\) are distinct fixed points of \(f\), with the former necessarily parabolic with multiplier \(1\) and multiplicity \(3\) since by symmetry it has two attracting basins (namely \(\mathbb{D}\) and \(\mathbb{C}\smallsetminus\overline{\mathbb{D}}\)). It follows that \(f\) has at least \(4\) fixed points on \(\partial\mathbb{D}\) counting multiplicities. Since the total number of fixed points of \(f\) is \(k+1\), we obtain \(k+1\geq 4\), or \(k\geq 3\). It follows that \(B\) contains \(k-1\geq 2\) critical points of \(P^{\circ q}\). This completes the proof of the Basic Structure Lemma.
_Remark 3.3_.: The statement that \(P^{\circ q}:B\to B\) has at least two critical points when \(w^{+}\neq w^{-}\) will be sharpened in SS3.3, where we show that each of the two components of \(B\setminus\gamma\) contains at least one critical point. By contrast, when \(w^{+}=w^{-}\) the Jordan domain \(\Delta\) bounded by \(\overline{\gamma}\) is a component of \(B\setminus\gamma\) free from critical points and the restriction \(P^{\circ q}:\Delta\to\Delta\) is a conformal isomorphism (see Lemma 3.6).
_Remark 3.4_.: Here is a byproduct of the proof of Lemma 3.1. For each \(n\) the conformal map \(\psi_{n}\) sends \(R_{n}(s)\), the point on the ray \(R_{n}\) at potential \(s\), to the point \(s/s_{n}\) on the real line. The convergence of \(\psi_{n}:V_{n}\to S_{n}\) to \(\psi:V\to\mathbb{H}^{r}\) shows that the sequence of inverse maps \(\psi_{n}^{-1}:S_{n}\to V_{n}\) converges to \(\psi^{-1}:\mathbb{H}^{r}\to V\) uniformly on compact subsets of \(\mathbb{H}^{r}\). It follows that for each \(t>0\), \(R_{n}(ts_{n})=\psi_{n}^{-1}(t)\to\psi^{-1}(t)=\gamma(t)\), and this convergence is uniform on compact subsets of \(]0,+\infty[\). Thus, _every compact subarc of \(\gamma\) can be approximated in \(C^{\infty}\)-topology by a suitable sequence of compact subarcs of the rays \(R_{n}\)_.
### Heteroclinic and homoclinic arcs in \(\mathcal{L}\)
We have shown that \(\mathcal{L}\cap\check{K}\) is a disjoint union of \(P^{\circ q}\)-invariant real-analytic open arcs contained in parabolic basins. Each such arc comes equipped with a natural parametrization \(\gamma:]0,+\infty[\to\mathcal{L}\cap\check{K}\) which satisfies \(\gamma(d^{q}t)=P^{\circ q}(\gamma(t))\). For simplicity each such \(\gamma\) will be called an \(\mathcal{L}\)_-arc_. The initial point \(w^{-}=w^{-}(\gamma):=\lim_{t\to 0}\gamma(t)\) and the end point \(w^{+}=w^{+}(\gamma):=\lim_{t\to+\infty}\gamma(t)\) are fixed
under \(P^{\circ q}\), with \(w^{+}\) always parabolic of multiplier \(1\) under \(P^{\circ q}\). Note that \(\gamma\) has a well-defined tangent direction at \(w^{+}\), namely the attracting direction of \(w^{+}\) corresponding to the basin that contains \(\gamma\). More precisely, \((w^{+}-\gamma(t))/|w^{+}-\gamma(t)|\to v\) as \(t\to+\infty\), where \(v\) is the unit vector in the given attracting direction. Similarly, if \(w^{-}\) is parabolic, then \(\gamma\) has a well-defined tangent direction at \(w^{-}\), namely a repelling direction of \(w^{-}\). More precisely, \((\gamma(t)-w^{-})/|\gamma(t)-w^{-}|\to v\) as \(t\to 0\), where \(v\) is the unit vector in the given repelling direction. Note however that \(\gamma\) may not have a \(C^{1}\) extension at its extremities, i.e., \(\gamma^{\prime}(t)/|\gamma^{\prime}(t)|\) may fail to have a limit as \(t\to 0\) or \(+\infty\).
It follows from the above remarks that if \(w^{-}=w^{+}=w\), the Jordan curve \(\overline{\gamma}=\gamma\cup\{w\}\) has the well-defined angle \(\pi\not{p}/(\nu q)\) at \(w\), where \(\not{p}\) is the period of \(w\) and \(\nu\geq 1\) is the degeneracy order of \(w\) as a fixed point of \(P^{\circ\not{p}}\) (see SS2.2). In particular, this angle is \(\pi\) if \(w\) is non-degenerate of period \(q\).
An \(\mathcal{L}\)-arc \(\gamma\) is called _heteroclinic_ if \(w^{-}(\gamma)\neq w^{+}(\gamma)\) and _homoclinic_ if \(w^{-}(\gamma)=w^{+}(\gamma)\). Any pair of heteroclinic arcs \(\gamma,\eta\) with common initial and end points must be contained in the same parabolic basin since by the maximum principle the topological disk bounded by \(\overline{\gamma}\cup\overline{\eta}\) is contained in \(\mathring{K}\). On the other hand, two homoclinic arcs that join the same \(w\) to itself can be contained in different parabolic basins of \(w\).
The Basic Structure Lemma shows that every \(\mathcal{L}\)-arc \(\gamma\) is contained in a topological "strip" \(V_{\gamma}\) in which the action of \(P^{\circ q}\) is conformally conjugate to \(z\mapsto d^{q}z\) in \(\mathbb{H}^{r}\), or equivalently, to the translation \(z\mapsto z+q\log d\) in the Euclidean strip \(\{z:|\operatorname{Im}(z)|<\pi/2\}\). We remark that if \(\gamma,\eta\) are distinct \(\mathcal{L}\)-arcs, then \(V_{\gamma}\cap V_{\eta}=\emptyset\). To see this, take sequences \(u_{n}\in R_{n}\) converging to \(u\in\gamma\) and \(v_{n}\in R_{n}\) converging to \(v\in\eta\). Then, after passing to subsequences, \((\Omega_{n},u_{n})\to(V_{\gamma},u)\) and \((\Omega_{n},v_{n})\to(V_{\eta},v)\). If \(\operatorname{dist}_{\Omega_{n}}(u_{n},v_{n})\) has a bounded subsequence, then \(V_{\gamma}=V_{\eta}\) by Theorem 2.7, which is impossible by Lemma 3.2. Thus \(\operatorname{dist}_{\Omega_{n}}(u_{n},v_{n})\to+\infty\) and another application of Theorem 2.7 shows that \(V_{\gamma}\cap V_{\eta}=\emptyset\). If follows from this disjointness property of strips that _there are at most countably many \(\mathcal{L}\)-arcs._
**Convention**. It will be convenient to also regard the external ray \(R\) as an \(\mathcal{L}\)-arc with \(w^{-}(R)=\zeta\) and \(w^{+}(R)=\infty\). We consider \(R\) as neither a heteroclinic nor a homoclinic arc. This special \(\mathcal{L}\)-arc also comes equipped with a parametrization \(\gamma:]0,+\infty[\to R\) which sends \(t=1\) to any designated point \(R(s)\) and satisfies \(\gamma(d^{q}t)=P^{\circ q}(\gamma(t))\). Simply set \(\gamma(t):=\beta^{-1}(\operatorname{e}^{si+2\pi\mathrm{i}\theta})\), where \(\beta\) is the Bottcher coordinate of \(P\).
### \(\mathcal{L}\)-arcs in a given parabolic basin
Let \(B\) be a parabolic basin that is invariant under \(P^{\circ q}\). Take a Fatou coordinate \(\Phi:B\to\mathbb{C}\) which satisfies \(\Phi\circ P^{\circ q}=T\circ\Phi\), where \(T:z\mapsto z+1\) is the unit translation. We normalize \(\Phi\) so that it maps a maximal attracting petal \(W\subset B\) biholomorphically onto the right half-plane \(\mathbb{H}^{r}\), sending some critical point \(c\in\partial W\cap B\) of \(P^{\circ q}\) to \(\Phi(c)=0\). One can check that \(\Phi\) maps the closure \(\overline{W}\) homeomorphically onto \(\overline{\mathbb{H}^{r}}=\{z:\operatorname{Re}(z)\geq 0\}\). The quotient \(W/P^{\circ q}\) is conformally isomorphic to the cylinder \(\mathbb{H}^{r}/T=\mathbb{C}/T\). The critical points of \(\Phi\) are the critical
points of \(P^{\circ q}\) and their preimages in \(B\), so the critical values of \(\Phi\) form finitely many backward orbits of \(T\) in \(\mathbb{C}\). It is not hard to show that \(\Phi\) is an infinite-degree ramified covering from \(B\) onto \(\mathbb{C}\) and as such it has no finite asymptotic value. It follows from the monodromy theorem that any simply connected domain in \(\mathbb{C}\) which avoids the critical values of \(\Phi\) can be lifted univalently under \(\Phi\).
**Lemma 3.5**.: _Let \(\gamma\) be an \(\mathcal{L}\)-arc in a parabolic basin \(B=P^{\circ q}(B)\)._
1. _The Fatou coordinate_ \(\Phi:B\to\mathbb{C}\) _normalized as above maps_ \(V_{\gamma}\) _biholomorphically onto a_ \(T\)_-invariant topological strip_ \(\bar{V}_{\gamma}\) _which avoids the critical value_ \(\Phi(c)=0\)_. The image_ \(\tilde{\gamma}:=\Phi(\gamma)\) _is a_ \(T\)_-invariant arc in_ \(\bar{V}_{\gamma}\)_._
2. _The annulus_ \(A_{\gamma}:=\bar{V}_{\gamma}/T\) _is essentially embedded in the cylinder_ \(\mathbb{C}/T\) _and has the projection_ \(\tilde{\gamma}/T\) _as its core geodesic. Moreover,_ \(\operatorname{mod}(A_{\gamma})=\pi/(q\log d)\)_._
3. _If_ \(\eta\) _is another_ \(\mathcal{L}\)_-arc in_ \(B\) _distinct from_ \(\gamma\)_, then_ \(\bar{V}_{\gamma}\cap\bar{V}_{\eta}=\emptyset\)_, hence_ \(A_{\gamma}\cap A_{\eta}=\emptyset\)_._
Proof.: For (i), consider the maximal attracting petal \(W\subset B\) having \(c\) on its boundary. Then \(\Phi\) maps \(V_{\gamma}\cap W\) biholomorphically onto an open set in \(\mathbb{H}^{r}\) which is forward invariant under \(T\). The relation \(\Phi=T^{-n}\circ\Phi\circ P^{\circ nq}\) together with the fact that \(P^{\circ q}:V_{\gamma}\to V_{\gamma}\) is a conformal isomorphism shows that \(\Phi\) is a biholomorphism between \(V_{\gamma}\) and a topological strip \(\bar{V}_{\gamma}\) in \(\mathbb{C}\) that is fully invariant under \(T\). Moreover, \(\bar{V}_{\gamma}\) avoids the critical value \(\Phi(c)=0\) since \(V_{\gamma}\) avoids \(c\) and \(\Phi:\overline{W}\to\overline{\mathbb{H}^{r}}\) is a homeomorphism (see Fig. 6).
Part (ii) is an easy exercise since a combination of (i) and the Basic Structure Lemma shows that \(A_{\gamma}\cong V_{\gamma}/P^{\circ q}\) is isomorphic to the quotient of \(\mathbb{H}^{r}\) by the action of the automorphism \(z\mapsto d^{q}z.\) Statement (iii) follows from the disjointness \(V_{\gamma}\cap V_{\eta}=\emptyset\) proved at the end of SS3.2 since \(\bar{V}_{\gamma}\cap\bar{V}_{\eta}\cap\mathbb{H}^{r}=\Phi(V_{\gamma}\cap V_{ \eta}\cap W)=\emptyset\), so \(\bar{V}_{\gamma}\cap\bar{V}_{\eta}=\emptyset\) by \(T\)-invariance.
Take an \(\mathcal{L}\)-arc \(\gamma\) in \(B\) and its \(T\)-invariant image \(\tilde{\gamma}=\Phi(\gamma)\) as above. Denote the upper and lower components of \(\mathbb{C}\setminus\tilde{\gamma}\) by \(\bar{U}_{\gamma}^{+}\) and \(\bar{U}_{\gamma}^{-}\), respectively. Let \(U_{\gamma}^{\pm}\) be the unique component of \(\Phi^{-1}(\bar{U}_{\gamma}^{\pm})\) having \(\gamma\) on its boundary. Notice that each of the two components of \(B\setminus\gamma\) contains one of \(U_{\gamma}^{\pm}\). Every point of \(\partial U_{\gamma}^{\pm}\) either belongs to the basin boundary \(\partial B\) at which \(\Phi\) is undefined, or it belongs to an iterated \(P^{\circ q}\)-preimage of \(\gamma\) in \(B\) which maps under \(\Phi\) to a point of \(\tilde{\gamma}\). It is easy to check that \(U_{\gamma}^{\pm}\) is simply connected, the map \(P^{\circ q}:U_{\gamma}^{\pm}\to U_{\gamma}^{\pm}\) is proper, and the following diagram commutes:
(3.3)
Recall that each \(\mathcal{L}\)-arc has a natural dynamical orientation. We call a homoclinic arc \(\gamma\) positively or negatively oriented according as the dynamical orientation of the Jordan curve \(\overline{\gamma}\) is counterclockwise or clockwise. The Jordan domain bounded by \(\overline{\gamma}\) will be denoted by \(\Delta_{\gamma}\).
**Lemma 3.6** (Characterization of homoclinic arcs).: _The following conditions on an \(\mathcal{L}\)-arc \(\gamma\subset B\) are equivalent:_
1. \(\gamma\) _is a positively oriented homoclinic arc._
2. \(U_{\gamma}^{+}\) _is one of the two components of_ \(B\smallsetminus\gamma\)_._
3. \(P^{\circ q}:U_{\gamma}^{+}\to U_{\gamma}^{+}\) _is a conformal isomorphism._
4. \(\Phi:U_{\gamma}^{+}\to\tilde{U}_{\gamma}^{+}\) _is a conformal isomorphism._
5. \(U_{\gamma}^{+}\) _contains no critical point of_ \(P^{\circ q}\)_._
_Under these conditions \(U_{\gamma}^{+}=\Delta_{\gamma}\). A similar statement is true if we change \(\gamma\) in (i) to negatively oriented and \(U_{\gamma}^{+},\tilde{U}_{\gamma}^{+}\) everywhere to \(U_{\gamma}^{-},\tilde{U}_{\gamma}^{-}\)._
Proof.: For simplicity we will drop the subscript \(\gamma\) from our notation.
(i) \(\Longrightarrow\) (ii): The Jordan domain \(\Delta\) bounded by \(\overline{\gamma}\) is a component of \(B\setminus\gamma\). We claim that \(U^{+}=\Delta\). In fact, \(U^{+}\subset\Delta\) since \(\gamma\) is positively oriented. If this inclusion were strict, we would have \(\partial U^{+}\cap\Delta\neq\emptyset\) and any point in this intersection would eventually map to \(\gamma\) under the iterations of \(P^{\circ q}\). This is impossible since \(P^{\circ q}(\overline{\gamma})=\overline{\gamma}\) together with the maximum principle implies \(P^{\circ q}(\Delta)=\Delta\).
(ii) \(\Longrightarrow\) (iii): The restriction \(P^{\circ q}:U^{+}\to U^{+}\) is proper of some degree \(k\geq 1\). The assumption on \(U^{+}\) implies \(P^{-q}(\gamma)\cap\partial U^{+}=\gamma\). Since \(P^{\circ q}\) acts homeomorphically on \(\gamma\), every point of \(\gamma\) has a unique \(P^{\circ q}\)-preimage on \(\partial U^{+}\), so \(k=1\).
(iii) \(\Longrightarrow\) (v): Trivial.
(v) \(\Longrightarrow\) (iv): By the hypothesis the restriction \(\Phi:U^{+}\to\bar{U}^{+}\) is a ramified covering without critical points, so it is a regular covering map. As \(\bar{U}^{+}\) is simply connected, this covering map must be a conformal isomorphism.
(iv) \(\Longrightarrow\) (i): By (3.3), \(P^{\circ q}:U^{+}\to U^{+}\) is a conformal isomorphism. Since \(P^{\circ q}\) acts homeomorphically on \(\gamma\), it follows that \(P^{-q}(\gamma)\cap\partial U^{+}=\gamma\). This, in turn, implies \(\Phi^{-1}(\bar{\gamma})\cap\partial U^{+}=\bigcup_{n\geq 0}P^{-nq}(\gamma) \cap\partial U^{+}=\gamma\). Now for each \(z_{0}\in U^{+}\) the arc \(\bar{\gamma}\) has full harmonic measure in \(\partial\bar{U}^{+}\) as seen from \(\Phi(z_{0})\in\bar{U}^{+}\). Since \(\Phi:U^{+}\to\bar{U}^{+}\) is a conformal isomorphism and \(\Phi^{-1}(\bar{\gamma})\cap\partial U^{+}=\gamma\), it follows that \(\gamma\) has full harmonic measure in \(\partial U^{+}\) as seen from \(z_{0}\). By elementary conformal mapping theory, this implies \(\gamma\) being homoclinic. In fact, if \(\gamma\) were heteroclinic, its end points on \(\partial U^{+}\) would be distinct so we could find distinct accessible points \(\alpha,\beta\in\partial U^{+}\setminus\gamma\). Under any conformal isomorphism \((U^{+},z_{0})\to(\mathbb{D},0)\) these points would correspond to distinct points \(\alpha^{\prime},\beta^{\prime}\in\partial\mathbb{D}\) and the image of \(\gamma\) would be contained in one of the two components of \(\partial\mathbb{D}\setminus\{\alpha^{\prime},\beta^{\prime}\}\), forcing the harmonic measure of \(\gamma\) to be \(<1\).
Given two homoclinic arcs \(\gamma,\eta\) based at the same parabolic point, we say that \(\gamma\) is _inside_\(\eta\), or \(\eta\) is _outside_\(\gamma\), if \(\Delta_{\gamma}\subset\Delta_{\eta}\). Any maximal linearly ordered set of homoclinics with respect to this order will be called an _earring_. Notice that all homoclinic arcs in the same earring must have the same (positive or negative) dynamical orientation.
**Theorem 3.7**.: _Suppose \(B=P^{\circ q}(B)\) is a parabolic basin._
1. \(B\) _contains at most finitely many heteroclinic_ \(\mathcal{L}\)_-arcs._
2. \(B\) _contains at most two earrings of homoclinic_ \(\mathcal{L}\)_-arcs, and each earring has an outermost element._
Later we will sharpen this result by replacing "at most finitely many" in (i) and "at most two" in (ii) with "at most one" (see Corollaries 4.6 and 5.5).
Proof.: (i) Suppose there are infinitely many heteroclinics \(\gamma_{0},\gamma_{1},\gamma_{2},\dots\) in \(B\). After relabeling we may assume \(\bar{U}^{+}_{\gamma_{j+1}}\subset\bar{U}^{+}_{\gamma_{j}}\) for all \(j\geq 0\) (the case where \(\bar{U}^{+}_{\gamma_{j+1}}\supset\bar{U}^{+}_{\gamma_{j}}\) for all \(j\) is similar). By Lemma 3.5 the annuli \(A_{\gamma_{j}}=\bar{V}_{\gamma_{j}}/T\) are mutually disjoint and
essentially embedded in \(\mathbb{C}/T\), all having the same modulus \(\pi/(q\log d)\). It follows from the Grotzsch inequality ([M, Corollary B.6]) that the annulus \(X_{j}\) bounded by the core geodesics of \(A_{\gamma_{1}}\) and \(A_{\gamma_{j}}\) has modulus \(\geq(j-1)\pi/(q\log d)\), so \(\lim_{j\to\infty}\operatorname{mod}(X_{j})=+\infty\). On the other hand, Lemma 3.6 shows that both topological half-planes \(\tilde{U}_{\gamma_{j}}^{\pm}\) contain critical values of \(\Phi\). Since the critical values of \(\Phi\) lie in finitely many backward \(T\)-orbits, there should be distinct critical values \(a,b\) such that \(a\in\tilde{U}_{\gamma_{j}}^{+}\) and \(b\in\tilde{U}_{\gamma_{j}}^{-}\)_for every \(j\geq 0\)_. As \(\tilde{V}_{\gamma_{j}}\subset\tilde{U}_{\gamma_{j-1}}^{+}\cap\tilde{U}_{ \gamma_{j+1}}^{-}\), the points \(a\) and \(b\) belong to different components of \(\mathbb{C}\smallsetminus\tilde{V}_{\gamma_{j}}\) for all \(j\geq 1\). It follows that \(A_{\gamma_{j}}\) and therefore \(X_{j}\) separates the images of \(a\) and \(b\) in the quotient cylinder \(\mathbb{C}/T\) for all \(j\geq 1\). This is a contradiction since there is a bound on the moduli of essentially embedded annuli in \(\mathbb{C}/T\) that separate two given points.
(ii) First we show that every earring in \(B\) has an outermost homoclinic. Suppose to the contrary that there is an infinite sequence \(\{\gamma_{j}\}\) of distinct homoclinics in \(B\) such that \(\gamma_{j}\) is inside \(\gamma_{j+1}\) for all \(j\). Without loss of generality take every \(\gamma_{j}\) to be positively oriented. By Lemma 3.6, \(U_{\gamma_{j}}^{+}=\Delta_{\gamma_{j}}\), hence \(U_{\gamma_{j}}^{+}\subset U_{\gamma_{j+1}}^{+}\) for all \(j\). It follows that \(\tilde{U}_{\gamma_{j}}^{+}\subset\tilde{U}_{\gamma_{j+1}}^{+}\) and in particular \(\tilde{V}_{\gamma_{j}}\subset\tilde{U}_{\gamma_{j+1}}^{+}\cap\tilde{U}_{ \gamma_{j-1}}^{-}\) for all \(j\). Moreover, every \(\tilde{U}_{\gamma_{j}}^{+}\) avoids the critical value \(\Phi(\varepsilon)=0\) since \(U_{\gamma_{j}}^{+}\) avoids \(c\) and \(\Phi:\overline{W}\to\overline{\mathbb{H}^{r}}\) is a homeomorphism. Fixing some point \(a\in\tilde{U}_{\gamma_{1}}^{+}\smallsetminus\tilde{V}_{\gamma_{1}}\), we see that \(0\) and \(a\) belong to different components of \(\mathbb{C}\smallsetminus\tilde{V}_{\gamma_{j}}\) and therefore every \(A_{\gamma_{j}}\) separates the images of \(0\) and \(a\) in the quotient \(\mathbb{C}/T\). This leads to a contradiction by applying Grotzsch inequality as in (i).
We have shown that each earring of homoclinics in \(B\) has an outermost element \(\gamma\). The image \(\Phi(\Delta_{\gamma})\) is one of the topological half-planes \(\tilde{U}_{\gamma}^{\pm}\) depending on the orientation of \(\gamma\), so the quotient \(\Phi(\Delta_{\gamma})/T\) is a punctured neighborhood of one end of the cylinder \(\mathbb{C}/T\). If \(\eta\) is the outermost element of another earring in \(B\), then \(\Phi(\Delta_{\eta})\) is disjoint from \(\Phi(\Delta_{\gamma})\) since
\[\Phi(\Delta_{\gamma})\cap\Phi(\Delta_{\eta})\cap\mathbb{H}^{r}=\Phi(\Delta_{ \gamma}\cap\Delta_{\eta}\cap W)=\emptyset.\]
It follows that \(\Phi(\Delta_{\eta})/T\) is another punctured neighborhood of an end of \(\mathbb{C}/T\) disjoint from \(\Phi(\Delta_{\gamma})/T\). As this cylinder has only two ends, we conclude that there are at most two earrings of homoclinics in \(B\).
One trivial consequence of the above proof: _There are at most finitely many homoclinic arcs between a given pair of homoclinics in an earring_. In fact, if \(\gamma,\xi,\eta\) are distinct homoclinics with \(\Delta_{\gamma}\subset\Delta_{\xi}\subset\Delta_{\eta}\), then the annulus \(A_{\xi}\) is essentially embedded in the annulus bounded by the core geodesics of \(A_{\gamma}\) and \(A_{\eta}\), and there can be at most finitely many such annuli since they are pairwise disjoint and have the same modulus.
_Remark 3.8_.: It follows from the theory of parabolic implosions (specifically, the existence of the so-called Lavaurs maps) that every earring consists of either one or infinitely many homoclinic arcs.
### Arcwise-connectivity in \(\mathcal{L}\)
For any subset \(E\subset\mathcal{L}\) let \(E^{*}\) denote the set of points in \(E\) that do not lie on any homoclinic arc:
\[E^{*}:=E\setminus(\text{union of all homoclinic arcs}).\]
**Theorem 3.9**.: _For every connected set \(E\subset\mathcal{L}\), both \(E\) and \(E^{*}\) are arcwise-connected. The arc in \(E^{*}\) joining a given pair of points in \(E\cap J\) is unique up to homotopy in \(K\) rel \(E\cap J\)._
We will eventually see that \(\mathcal{L}^{*}\) is homeomorphic to the interval \([0,+\infty]\) so the arc joining any pair in \(E^{*}\) is in fact unique (see SS4.2).
Proof.: By Theorem 3.7\(\mathcal{L}\) contains at most finitely many heteroclinics and finitely many earrings of homoclinics. The outermost homoclinic \(\gamma\) in any earring has the property that \(\Delta_{\gamma}\cup V_{\gamma}\) contains all homoclinics in that earring and \(\mathcal{L}\setminus(\Delta_{\gamma}\cup V_{\gamma})\) is compact and connected. Removing all the finitely many such \(\Delta_{\gamma}\cup V_{\gamma}\) from \(\mathcal{L}\), we conclude that \(\mathcal{L}^{*}\) is a compact connected subset of \(\hat{\mathbb{C}}\). It follows that \(\mathcal{L}^{*}\) is a finite connected graph which has the points of \(\mathcal{L}\cap J\) and \(\infty\) as its vertices and the heteroclinic arcs and \(R\) as its edges.
Suppose now that \(E\subset\mathcal{L}\) is connected. For any homoclinic \(\gamma\) that meets \(E\), either \(E\subset\gamma\) (in which case \(E\) is trivially arcwise-connected and \(E^{*}=\emptyset\)), or \(E\cap\overline{\gamma}\) is an arc (open, closed, half-open) containing \(w^{-}(\gamma)=w^{+}(\gamma)\). It is now easy to see that \(E^{*}\) is connected, and that arcwise-connectivity of \(E\) is equivalent to that of \(E^{*}\) (provided that \(E^{*}\neq\emptyset\)). The latter is trivial since \(E^{*}\) is a connected subset of the finite graph \(\mathcal{L}^{*}\).
Finally, suppose \(\xi\neq\xi^{\prime}\) are two arcs in \(E^{*}\) that join a given pair in \(E\cap J\). Evidently each of these arcs is the closure of a finite union of heteroclinics. Let \(U\) be a bounded component of \(\mathbb{C}\setminus(\xi\cup\xi^{\prime})\). Then \(U\) is a Jordan domain with \(\partial U\subset K\). By the maximum principle \(U\subset\mathring{K}\), from which it follows that \(U\) is contained in a parabolic basin which also contains all heteroclinics in \(\partial U\). This implies that \(U\) is bounded by exactly two heteroclinics with the same initial and end points \(E\cap J\), which are clearly homotopic in \(K\) rel \(E\cap J\). Repeating this for the finitely many bounded components of \(\mathbb{C}\setminus(\xi\cup\xi^{\prime})\), we conclude that \(\xi,\xi^{\prime}\) must be homotopic in \(K\) rel \(E\cap J\).
## 4. Proof of Theorem A
### The intrinsic potential order
Every \(u\in\mathcal{L}\) is the limit of a sequence \(u_{n}=R_{n}(s_{n})\). In this case the full sequence of potentials \(s_{n}=G_{n}(u_{n})\) must have a well-defined limit. In fact, \(s_{n}\to s>0\) if and only if \(u=R(s)\in\mathcal{L}\setminus K\), and \(s_{n}\to 0\) if and only if \(u\in\mathcal{L}\cap K\).
Our goal is to show that the union of \(\mathcal{L}\)-arcs inherits a natural _linear_ order from the potentials of all possible sequences of approximating points on the rays \(R_{n}\). We begin with the following
**Lemma 4.1**.: _Suppose \(u_{n}=R_{n}(s_{n})\to u\in\mathcal{L}\setminus(J\cup\{\infty\})\). Take a sequence \(\{s_{n}^{\prime}\}\) of potentials and set \(u_{n}^{\prime}=R_{n}(s_{n}^{\prime})\). Then, the following conditions on a sequence \(n_{i}\to\infty\) are equivalent:_
1. \(u^{\prime}_{n_{t}}\to u\) _as_ \(i\to\infty\)_._
2. \(s_{n_{t}}/s^{\prime}_{n_{t}}\to 1\) _as_ \(i\to\infty\)_._
_In particular, \(u^{\prime}_{n}\to u\) if and only if \(s_{n}/s^{\prime}_{n}\to 1\)._
Proof.: The result follows from the joint continuity of the Green's function if \(u\in R\), so let us assume \(u\in\mathcal{L}\cap\hat{K}\). We will make use of the computation
\[\operatorname{dist}_{\Omega_{\ast}}(u_{n},u^{\prime}_{n})=\operatorname{dist} _{\mathbb{C}\smallsetminus\overline{\mathbb{D}}}(\mathrm{e}^{s_{n}+2\pi\mathrm{i }\theta},\mathrm{e}^{s^{\prime}_{n}+2\pi\mathrm{i}\theta})=\left|\log\left( \frac{s_{n}}{s^{\prime}_{n}}\right)\right|\]
which shows that the conditions \(s_{n}/s^{\prime}_{n}\to 1\) and \(\operatorname{dist}_{\Omega_{\ast}}(u_{n},u^{\prime}_{n})\to 0\) along any given subsequence are equivalent.
First suppose \(u^{\prime}_{n_{t}}\to u\) but \(\operatorname{dist}_{\Omega_{\ast_{t}}}(u_{n_{t}},u^{\prime}_{n_{t}})\not\to 0\). Take a subsequence of \(\{n_{t}\}\) along which \(\operatorname{dist}_{\Omega_{\ast_{t}}}(u_{n_{t}},u^{\prime}_{n_{t}})\) remains bounded away from \(0\). As in the proof of the Basic Structure Lemma, there is a sub-subsequence of \(\{n_{t}\}\) along which \((\Omega_{n_{t}},u_{n_{t}})\to\)\((V,u)\). By Lemma 2.4, \(\operatorname{dist}_{\Omega_{\ast_{t}}}(u_{n_{t}},u^{\prime}_{n_{t}})\to 0\) along this sub-subsequence of \(\{n_{t}\}\), which is a contradiction.
Now suppose \(\operatorname{dist}_{\Omega_{\ast_{t}}}(u_{n_{t}},u^{\prime}_{n_{t}})\to 0\) but \(u^{\prime}_{n_{t}}\not\to u\). Take a subsequence of \(\{n_{t}\}\) along which \(u^{\prime}_{n_{t}}\) remains bounded away from \(u\). Take a sub-subsequence of \(\{n_{t}\}\) along which \((\Omega_{n_{t}},u_{n_{t}})\to\)\((V,u)\). By Lemma 2.4, \(u^{\prime}_{n_{t}}\to u\) along this sub-subsequence of \(\{n_{t}\}\), which is a contradiction.
Let us call a sequence \(\{s_{n}\}\) of potentials _admissible_ if \(\lim_{n\to\infty}R_{n}(s_{n})\) exists and belongs to \(\mathcal{L}\smallsetminus(J\cup\{\infty\})\). Two admissible sequences \(\{s_{n}\},\{s^{\prime}_{n}\}\) are _equivalent_ if \(s_{n}/s^{\prime}_{n}\to 1\). The equivalence class of \(\{s_{n}\}\) is denoted by \(\langle s_{n}\rangle\). We denote by \(\mathcal{S}\) the space of all equivalence classes of admissible sequences. By Lemma 4.1 there is a well-defined bijection \(\Pi:\mathcal{S}\to\mathcal{L}\smallsetminus(J\cup\{\infty\})\) given by \(\Pi(\langle s_{n}\rangle):=\lim_{n\to\infty}R_{n}(s_{n})\). We topologize \(\mathcal{S}\) so that \(\Pi\) is continuous, in which case \(\mathcal{S}\) is homeomorphic to a disjoint union of at most countably many open intervals.
For each \(u=\Pi(\langle s_{n}\rangle)\) take the \(\mathcal{L}\)-arc \(\gamma\) through \(u\) and the special parametrization \(\gamma:]0,+\infty[\to\mathcal{L}\) which satisfies \(\gamma(1)=u\) and \(\gamma(d^{q}t)=P^{\circ g}(\gamma(t))\), given by the Basic Structure Lemma if \(\gamma\not=R\) or the Bottcher coordinate if \(\gamma=R\). Then \(\gamma(t)=\Pi(\langle ts_{n}\rangle)\) for every \(t>0\) (see Remark 3.4). Thus, we can think of \(\Pi^{-1}(\gamma)\) as the interval \(\{\langle ts_{n}\rangle:t>0\}\) in \(\mathcal{S}\) homeomorphic to \(]0,+\infty[\).
**Lemma 4.2**.: _For any pair \(\langle s_{n}\rangle,\langle s^{\prime}_{n}\rangle\) in \(\mathcal{S}\) the limit \(c:=\lim_{n\to\infty}s_{n}/s^{\prime}_{n}\in[0,+\infty]\) exists. More precisely, if \(u=\Pi(\langle s_{n}\rangle)\) and \(u^{\prime}=\Pi(\langle s^{\prime}_{n}\rangle)\) belong to the same \(\mathcal{L}\)-arc, then \(0<c<+\infty\), while if \(u,u^{\prime}\) belong to different \(\mathcal{L}\)-arcs, then \(c=0\) or \(+\infty\)._
Proof.: We have already seen that if \(u,u^{\prime}\) belong to the same \(\mathcal{L}\)-arc, then \(u=\Pi(\langle cs^{\prime}_{n}\rangle)\) for some \(0<c<+\infty\). This gives \(\langle cs^{\prime}_{n}\rangle=\langle s_{n}\rangle\), which shows \(\lim s_{n}/s^{\prime}_{n}=c\), as required. We claim that if \(u,u^{\prime}\) do not belong to the same \(\mathcal{L}\)-arc, then one of the relations
\(\lim s_{n}/s_{n}^{\prime}=0\) or \(\lim s_{n}/s_{n}^{\prime}=+\infty\) must hold. If not, take a subsequence \(s_{n_{l}}/s_{n_{t}}^{\prime}\) which tends to some limit \(c\in\,]0,+\infty[\) as \(i\to\infty\). By Lemma 4.1, \(R_{n_{t}}(cs_{n_{l}}^{\prime})\to u\). On the other hand, we know that the whole sequence \(\{R_{n}(cs_{n}^{\prime})\}\) tends to a point on the same \(\mathcal{L}\)-arc as \(u^{\prime}\). This contradicts our assumption that \(u,u^{\prime}\) are not on the same \(\mathcal{L}\)-arc.
Lemma 4.2 shows that we can define a linear order on \(\mathcal{S}\) by declaring
\[\langle s_{n}\rangle<\langle s_{n}^{\prime}\rangle\qquad\text{if and only if}\qquad\lim_{n\to\infty}\frac{s_{n}}{s_{n}^{\prime}}<1.\]
Pushing forward this order by the bijection \(\Pi\), we obtain a corresponding linear order on \(\mathcal{L}\smallsetminus(J\cup\{\infty\})\) called the _intrinsic potential order_, that is, we define \(u<u^{\prime}\) if and only if \(\Pi^{-1}(u)<\Pi^{-1}(u^{\prime})\). Observe that the restriction of this order to each \(\mathcal{L}\)-arc is compatible with the dynamical orientation: if \(u,u^{\prime}\in\gamma\) with \(u<u^{\prime}\), then in going from \(w^{-}(\gamma)\) to \(w^{+}(\gamma)\) we visit \(u\) before \(u^{\prime}\).
We write \(u\leq u^{\prime}\) in the usual sense that \(u<u^{\prime}\) or \(u=u^{\prime}\).
Evidently if \(\gamma,\eta\) are distinct \(\mathcal{L}\)-arcs such that \(u<u^{\prime}\) for some \(u\in\gamma,u^{\prime}\in\eta\), then \(u<u^{\prime}\) for every \(u\in\gamma,u^{\prime}\in\eta\). In this case we write \(\gamma<\eta\). This defines a linear order on the collection of \(\mathcal{L}\)-arcs. Explicitly, \(\gamma<\eta\) if and only if for every \(\Pi(\langle s_{n}\rangle)\in\gamma\) and \(\Pi(\langle s_{n}^{\prime}\rangle)\in\eta\) we have \(\lim_{n\to\infty}s_{n}/s_{n}^{\prime}=0\). Note that in this order the ray \(R\) is the largest \(\mathcal{L}\)-arc.
_Remark 4.3_.: There is a completion \(\hat{\mathcal{S}}\) homeomorphic to \([0,+\infty]\) such that \(\Pi\) extends to a continuous surjection \(\hat{\Pi}:\hat{\mathcal{S}}\to\mathcal{L}\). Moreover, for each \(w\in\mathcal{L}\cap J\) the cardinality of \(\hat{\Pi}^{-1}(w)\) is one more than the number of homoclinics based at \(w\) (possibly infinite). The proof is based on Theorem B and will not be given.
The following lemma will be used frequently in the next sections:
**Lemma 4.4**.: _Consider positive sequences \(\{s_{n}\},\{s_{n}^{\prime}\}\) such that \(s_{n}<s_{n}^{\prime}\) for all \(n\). Let \(E\subset\mathcal{L}\) be any subsequential Hausdorff limit of the ray segments \(R_{n}([s_{n},s_{n}^{\prime}]):=\{R_{n}(s):s_{n}\leq s\leq s_{n}^{\prime}\}\), and let \(z\in E\smallsetminus(J\cup\{\infty\})\)._
1. _If_ \(\langle s_{n}\rangle\in\mathcal{S}\) _and_ \(u=\Pi(\langle s_{n}\rangle)\)_, then_ \(u\leq z\)_._
2. _If_ \(\langle s_{n}^{\prime}\rangle\in\mathcal{S}\) _and_ \(u^{\prime}=\Pi(\langle s_{n}^{\prime}\rangle)\)_, then_ \(z\leq u^{\prime}\)_._
Proof.: We prove (i), the proof of (ii) being similar. Take an increasing sequence \(\{n_{i}\}\) of integers such that \(R_{n_{t}}([s_{n_{t}},s_{n_{t}}^{\prime}])\to E\) in the Hausdorff metric and choose \(r_{i}\in[s_{n_{t}},s_{n_{t}}^{\prime}]\) such that \(R_{n_{t}}(r_{i})\to z\). Let \(\langle t_{n}\rangle:=\Pi^{-1}(z)\), so \(R_{n}(t_{n})\to z\). By Lemma 4.1, \(r_{i}/t_{n_{t}}\to 1\). It follows from
\[\frac{s_{n_{t}}}{t_{n_{t}}}=\frac{s_{n_{t}}}{r_{i}}\cdot\frac{r_{i}}{t_{n_{t}}}\]
that \(\lim\sup_{i\to\infty}s_{n_{t}}/t_{n_{t}}\leq 1\) and therefore \(\lim_{n\to\infty}s_{n}/t_{n}\leq 1\) by Lemma 4.2. This implies \(u\leq z\), as required.
### The order and structure of heteroclinic arcs
**Lemma 4.5**.: _Let \(\gamma\) be a heteroclinic arc and \(\eta\) be any \(\mathcal{L}\)-arc._
1. _Suppose_ \(w^{+}(\gamma)=w^{+}(\eta)=w\)_. If_ \(\eta<\gamma\)_, there is a heteroclinic arc_ \(\xi\) _with_ \(w^{+}(\xi)=w\) _such that_ \(\eta<\xi<\gamma\)_._
2. _Suppose_ \(w^{-}(\gamma)=w^{-}(\eta)=w\)_. If_ \(\gamma<\eta\)_, there is a heteroclinic arc_ \(\xi\) _with_ \(w^{-}(\xi)=w\) _such that_ \(\gamma<\xi<\eta\)_._
Since by Theorem 3.7 there are only finitely many heteroclinic arcs in \(\mathcal{L}\), we immediately obtain the following
**Corollary 4.6**.:
1. _For every_ \(w\in\mathcal{L}\cap J\) _there is at most one heteroclinic arc_ \(\gamma\) _with_ \(w=w^{-}(\gamma)\) _or with_ \(w=w^{+}(\gamma)\)_. In particular, every parabolic basin_ \(B=P^{\circ q}(B)\) _contains at most one heteroclinic arc._
2. _Let_ \(\gamma\) _be a heteroclinic and_ \(\eta\) _be a homoclinic arc. If_ \(w^{+}(\gamma)=w^{+}(\eta)\)_, then_ \(\gamma<\eta\)_. If_ \(w^{-}(\gamma)=w^{-}(\eta)\)_, then_ \(\eta<\gamma\)_._
Proof of Lemma 4.5.: We only prove (i), as the proof of (ii) is similar. Fix \(u=\Pi(\langle s_{n}\rangle)\in\eta\) and \(u^{\prime}=\Pi(\langle s^{\prime}_{n}\rangle)\in\gamma\), so \(\lim_{n\to\infty}s_{n}/s^{\prime}_{n}=0\). Let \(E\) be a subsequential Hausdorff limit of the ray segments \(R_{n}([s_{n},s^{\prime}_{n}])\). Then \(E\) is a compact connected subset of \(\mathcal{L}\) containing \(u,u^{\prime}\). In fact, it is easy to see that \(E\) contains the segment of \(\eta\) between \(u\) and \(w\) and the segment of \(\gamma\) between \(w^{-}(\gamma)\) and \(u^{\prime}\). Observe that by Lemma 4.4, every \(z\in E\smallsetminus J\) other than \(u,u^{\prime}\) satisfies \(u<z<u^{\prime}\).
By Theorem 3.9 we can find an arc \(\xi\) in \(E^{*}\) joining \(w\) and \(w^{-}(\gamma)\). By the above observation, this arc cannot meet the open segment of \(\eta\) between \(w^{-}(\eta)\) and \(u\) or the open segment of \(\gamma\) between \(u^{\prime}\) and \(w\). Hence it must be altogether disjoint from \(\gamma,\eta\). Another application of Theorem 3.9 then shows that \(\xi\) is homotopic to \(\gamma\) rel \(E\cap J\) and therefore must be a heteroclinic arc in the same basin as \(\gamma\). Evidently \(w^{+}(\xi)=w^{+}(\gamma)=w\) and \(\eta<\xi<\gamma\).
The next lemma shows that the dynamical orientation of adjacent heteroclinic arcs is compatible with their intrinsic potential order.
**Lemma 4.7**.: _If \(\gamma,\eta\) are heteroclinic arcs with \(w^{+}(\gamma)=w^{-}(\eta)=w\), then \(\gamma<\eta\)._
Proof.: (i) First note that \(w^{+}(\eta)\neq w^{-}(\gamma)\); otherwise the union \(\gamma\cup\eta\cup\{w^{\pm}(\gamma)\}\) would bound a topological disk in \(\mathring{K}\), which would imply \(\gamma,\eta\) are contained in the same parabolic basin \(B\). This leads to a contradiction, either by invoking Corollary 4.6(i) or by simply observing that all orbits of \(P^{\circ q}\) in \(B\) must converge to a unique boundary point.
Assume by way of contradiction that \(\eta<\gamma\). As in the proof of Lemma 4.5 fix \(u=\Pi(\langle s_{n}\rangle)\in\eta\) and \(u^{\prime}=\Pi(\langle s^{\prime}_{n}\rangle)\in\gamma\), consider a subsequential Hausdorff limit \(E\subset\mathcal{L}\)
of the ray segments \(R_{n}([s_{n},s^{\prime}_{n}])\), and find an arc \(\xi\) in \(E^{*}\) joining \(w^{+}(\eta)\) and \(w^{-}(\gamma)\) which must be disjoint from \(\gamma,\eta\). By Theorem 3.9, \(\xi\) is homotopic to the arc \(\gamma\) followed by \(\eta\) rel \(E\cap J\). In particular, \(w\in\xi\). It follows that the segment of \(\xi\) between \(w^{+}(\eta)\) and \(w\) is a heteroclinic in the same basin as \(\eta\) and the segment of \(\xi\) between \(w\) and \(w^{-}(\gamma)\) is a heteroclinic in the same basin as \(\gamma\). This contradicts Corollary 4.6(i).
**Lemma 4.8**.: _Suppose \(\gamma\) is a heteroclinic arc and \(u=\Pi(\langle s_{n}\rangle)\in\gamma\). Take any sequence of potentials \(\{t_{n}\}\) with \(R_{n}(t_{n})\to w\in\mathcal{L}\cap J\)._
1. _If_ \(w=w^{+}(\gamma)\)_, then_ \(s_{n}/t_{n}\to 0\)_._
2. _If_ \(w=w^{-}(\gamma)\)_, then_ \(s_{n}/t_{n}\to+\infty\)_._
Proof.: We prove (i), the proof of (ii) being similar. Assume by way of contradiction that \(\limsup_{n\to\infty}s_{n}/t_{n}>0\). If there were a subsequence \(s_{n_{t}}/t_{n_{t}}\to\epsilon\in]0,+\infty[\), then \(\lim_{i\to\infty}R_{n_{t}}(t_{n_{t}})=\lim_{n\to\infty}R_{n}(\epsilon^{-1}s_{ n})\in\gamma\) by Lemma 4.1, which would lead to the conclusion \(w\in\gamma\). Thus, we must have \(\lim_{n\to\infty}s_{n}/t_{n}=+\infty\). Let \(E\) be a subsequential Hausdorff limit of the segments \(R_{n}([t_{n},s_{n}])\). Then \(E\) is a compact connected subset of \(\mathcal{L}\) containing \(w,u\). Moreover, by Lemma 4.4, if \(z\in E\smallsetminus J\) and \(z\neq u\), then \(z<u\). In particular, \(E\) is disjoint from the open subarc \(\xi\subset\gamma\) between \(u\) and \(w\). Now Theorem 3.9 shows that there is an arc in \(E\) joining \(w\) and \(u\). But by Corollary 4.6(i) the only arc in \(\mathcal{L}\) joining \(w\) and \(u\) is \(\xi\). The contradiction proves \(\lim_{n\to\infty}s_{n}/t_{n}=0\).
We now have all the ingredients we need to prove Theorem A:
Proof of Theorem A.: Most of the claims of the theorem have already been verified. Theorem 3.9 showed that the spine \(\mathcal{L}^{*}\) is a finite connected graph embedded in \(\hat{\mathbb{C}}\) which has \((\mathcal{L}\cap J)\cup\{\infty\}\) as its vertices and all non-homoclinic \(\mathcal{L}\)-arcs as its edges. By Corollary 4.6(i), every vertex of this graph has degree \(1\) or \(2\). Since \(\infty\) is surely a vertex of degree \(1\), it follows that this graph is a tree with exactly two vertices of degree \(1\) and the remaining vertices (if any) of degree \(2\). In particular, this tree is homeomorphic to a closed arc. If there are no vertices of degree \(2\), then \(\mathcal{L}^{*}=\overline{R}\) and therefore \(\mathcal{L}\cap J=\{\zeta\}=\{\zeta_{\infty}\}\), and depending on whether \(\mathcal{L}\) contains any homoclinics or not, we are in the semi-wild or tame case, respectively. On the other hand, if \(\mathcal{L}^{*}\) does have vertices of degree \(2\), Lemma 4.7 shows that we can sort the heteroclinic arcs in \(\mathcal{L}\) as \(\gamma_{N}<\dots<\gamma_{1}<R\) so that \(w^{+}(\gamma_{j})=w_{j-1}\) and \(w^{-}(\gamma_{j})=w_{j}\) for all \(1\leq j\leq N\), and \(w^{-}(R)=w_{0}\) (compare Fig. 7).
To finish the proof, it remains to show that he limit \(\zeta_{\infty}=\lim_{n\to\infty}\zeta_{n}\) of the landing points of the rays \(R_{n}\) is \(w_{N}\). Suppose \(\zeta_{\infty}=w_{j}\) for some \(0\leq j\leq N-1\). Fix some \(u=\Pi(\langle s_{n}\rangle)\in\gamma_{j+1}\). On the one hand, since \(\zeta_{n}\to\zeta_{\infty}\), we can find a sequence \(\{t_{n}\}\) of potentials converging to \(0\) so fast that \(0<t_{n}<s_{n}\) and \(R_{n}(t_{n})\to\zeta_{\infty}\). On the other hand, Lemma 4.8(i) implies \(s_{n}/t_{n}\to 0\). This is a contradiction.
## 5. Proof of Theorem B
So far we have shown that \(\mathcal{L}\) is the union of the spine \(\mathcal{L}^{*}\) together with a finite collection (possibly empty) of earrings attached to the points of \(\mathcal{L}\cap J\). For the proof of Theorem B, we need to show that these earrings cannot occur in the basins that meet the spine, that the same basin cannot contain two distinct earrings, and that every earring with more than one homoclinic arc must be based at the point \(w_{N}=\zeta_{\infty}\). These statements require a better understanding of the structure of homoclinic arcs and will be addressed in SS5-4.
### Good disks
Suppose \(\gamma\) is an \(\mathcal{L}\)-arc with \(w^{+}(\gamma)=w\). By real analyticity, there are at most countably many radii \(\varepsilon>0\) for which the boundary of the disk \(D:=\mathbb{D}(w,\varepsilon)\) meets \(\gamma\) tangentially. In other words, for all but countably many choices of \(\varepsilon>0\) the circle \(\partial D\) meets \(\gamma\) transversally at finitely many points. Of course a similar description holds when \(w^{-}(\gamma)=w\). It follows that for any finite collection \(\mathcal{C}\) of \(\mathcal{L}\)-arcs with either the initial or end point at \(w\), there are arbitrarily small \(\varepsilon>0\) for which \(\partial D\) meets every \(\gamma\in\mathcal{C}\) transversally at finitely many points. We call such \(D\) a _good disk_ centered at \(w\) for the collection \(\mathcal{C}\).
Suppose \(D=\mathbb{D}(w,\varepsilon)\) is a good disk for a finite collection \(\mathcal{C}\), where \(w\in\mathcal{L}\cap J\) is parabolic. Let \(\gamma\in\mathcal{C}\) be a heteroclinic with \(w^{-}(\gamma)=w\), so \(\gamma\) is asymptotic to a repelling direction at \(w\). If \(a^{-}\) is the point where \(\partial D\) meets the radial line at \(w\) in this repelling direction, then every \(z\in\gamma\cap\partial D\) satisfies \(|z-a^{-}|=o(\varepsilon)\) as \(\varepsilon\to 0\) (see SS3.2). The greatest point on \(\gamma\cap\partial D\) (in the intrinsic potential order) is denoted by \(w^{-}(\gamma,D)\). Thus, \(w^{-}(\gamma,D)\) is characterized as the point on \(\gamma\cap\partial D\) such that \(z\in\gamma\) and \(w^{-}(\gamma,D)<z\) imply \(z\notin\overline{D}\). Similarly, suppose \(\gamma\in\mathcal{C}\) is a heteroclinic with \(w^{+}(\gamma)=w\), so \(\gamma\) is asymptotic to an attracting direction at \(w\). If \(a^{+}\) is the point where \(\partial D\) meets the radial line at \(w\) in this attracting direction, then every \(z\in\gamma\cap\partial D\) satisfies \(|z-a^{+}|=o(\varepsilon)\) as \(\varepsilon\to 0\). The least point on \(\gamma\cap\partial D\) (in the intrinsic potential order) is denoted by \(w^{+}(\gamma,D)\). Thus, \(w^{+}(\gamma,D)\) is characterized as the point on \(\gamma\cap\partial D\) such that \(z\in\gamma\) and \(z<w^{+}(\gamma,D)\) imply \(z\notin\overline{D}\).
Now suppose \(\gamma\) is a homoclinic in \(\mathcal{C}\) so \(\gamma\) is asymptotic to a pair of repelling and attracting directions at \(w\). If \(a^{-},a^{+}\) are the points where \(\partial D\) meets the radial line at \(w\) in these repelling and attracting directions, then \(|a^{-}-a^{+}|\asymp\varepsilon\) but every \(z\in\gamma\cap\partial D\) satisfies \(|z-a^{-}|=o(\varepsilon)\) or \(|z-a^{+}|=o(\varepsilon)\) depending on which end of \(\gamma\) the point \(z\) is close to. In other words, the finite set \(\gamma\cap\partial D\) is partitioned into two subsets, one near \(a^{-}\) and the other near \(a^{+}\), unambiguously separated for \(\varepsilon\) sufficiently small. By definition, the greatest point of \(\gamma\cap\partial D\) in the first subset is denoted by \(w^{-}(\gamma,D)\) and the least point of \(\gamma\cap\partial D\) in the second subset is denoted by \(w^{+}(\gamma,D)\). Notice that by this definition \(w^{-}(\gamma,D)<w^{+}(\gamma,D)\) and the segment of \(\gamma\) between \(w^{\pm}(\gamma,D)\) is outside \(\overline{D}\). Recall that \(\Delta_{\gamma}\) denotes the Jordan domain bounded by \(\overline{\gamma}=\gamma\cup\{w\}\). We define \(I_{\gamma}\) to be the closed arc of the circle \(\partial D\) bounded by \(w^{\pm}(\gamma,D)\) which is nearly contained in \(\Delta_{\gamma}\) in the sense that the length of \(I_{\gamma}\setminus\Delta_{\gamma}\) is \(o(\varepsilon)\) (see Fig. 8).
The following basic properties will be used in the next section and can be easily verified. Suppose, as above, that \(\gamma\in\mathcal{C}\) is a homoclinic.
1. If \(\eta\in\mathcal{C}\) is a homoclinic in the same earring as \(\gamma\), then \(I_{\eta}\subset I_{\gamma}\) or \(I_{\gamma}\subset I_{\eta}\).
2. If \(\eta\in\mathcal{C}\) is _not_ a homoclinic in the same earring as \(\gamma\), then neither of \(w^{\pm}(\eta,D)\) (when defined) can be in \(I_{\gamma}\).
### Good transversals
We now turn to another construction that will be useful for our purposes. Let us work with the compact set \(\mathcal{L}_{\leq 1}:=\mathcal{L}\setminus(R(]1,+\infty[)\cup\{\infty\})\), i.e., the result of truncating \(\mathcal{L}\) beyond Green's potential \(1\). It will be convenient to use the term _chain_ to describe a finite sequence of adjacent homoclinics in the same earring starting with the outermost. In other words, the homoclinics \(\eta_{1},\ldots,\eta_{n}\) form a chain if \(\eta_{1}\) is the outermost in its earring and \(\overline{\Delta}_{\eta_{j}}\supset\overline{\Delta}_{\eta_{j+1}}\) and \((\Delta_{\eta_{j}}\setminus\overline{\Delta}_{\eta_{j+1}})\cap\mathcal{L}=\emptyset\) for all \(1\leq j\leq n-1\). We refer to a chain of length \(n\) as an _\(n\)-chain_.
Figure 8. A good disk \(D\) centered at \(w\), a homoclinic arc \(\gamma\), and the points \(w^{-}(\gamma,D)\) and \(w^{+}(\gamma,D)\). The closed arc \(I_{\gamma}\) bounded by \(w^{\pm}(\gamma,D)\) and nearly contained in \(\Delta_{\gamma}\) is highlighted in blue.
**Definition 5.1**.: A smooth embedded arc \(\Sigma:[0,1[\to\mathbb{C}\) with \(\lim_{t\to 1}\Sigma(t)=\infty\) is called a _good transversal_ for \(\mathcal{L}\) if \(\Sigma\) intersects \(\mathcal{L}_{\leq 1}\) transversally at finitely many points \(z_{1},\ldots,z_{n}\) such that
* either \(n=1\) and \(z_{1}\) belongs to a heteroclinic arc or \(R\),
* or \(z_{1},\ldots,z_{n}\) belong to an \(n\)-chain of homoclinics \(\eta_{1},\ldots,\eta_{n}\), respectively.
**Lemma 5.2** (Existence of good transversals).:
1. _Suppose_ \(z\) _belongs to a heteroclinic or_ \(R(]0,1[)\)_. Then there is a good transversal_ \(\Sigma\) _with_ \(\Sigma\cap\mathcal{L}_{\leq 1}=\{z\}\)_._
2. _Suppose_ \(z_{1},\ldots,z_{n}\) _belong to an n-chain of homoclinics_ \(\eta_{1},\ldots,\eta_{n}\)_, respectively. Then there is a good transversal_ \(\Sigma\) _with_ \(\Sigma\cap\mathcal{L}_{\leq 1}=\{z_{1},\ldots,z_{n}\}\)_._
3. _Given finitely many distinct points of type (i) and collections of points of type (ii) in different earrings, we can choose corresponding good transversals that are pairwise disjoint._
Proof.: Let \(\tau_{1},\ldots,\tau_{k}\) denote the outermost homoclinics of all the earrings in \(\mathcal{L}\). Consider the union \(\hat{\mathcal{L}}\) of the closed disks \(\overline{\Delta}_{\tau_{1}},\ldots,\overline{\Delta}_{\tau_{k}}\) together with all heteroclinics, the ray segment \(R(]0,1])\), and all points in \(\mathcal{L}\cap\mathcal{J}\). In other words, \(\hat{\mathcal{L}}\) is the "filled in" \(\mathcal{L}_{\leq 1}\). Evidently \(\hat{\mathcal{L}}\) is a full compact subset of \(\mathbb{C}\) containing \(\mathcal{L}_{\leq 1}\) with piecewise analytic boundary. Using the non-dynamical "external rays" of the uniformization \((\hat{\mathbb{C}}\smallsetminus\overline{\mathbb{D}},\infty)\stackrel{{ \cong}}{{=}}(\hat{\mathbb{C}}\smallsetminus\hat{\mathcal{L}},\infty)\) we see that every \(z\in\partial\hat{\mathcal{L}}\) is the landing point of at least one ray in \(\mathbb{C}\smallsetminus\hat{\mathcal{L}}\). If \(z\in\partial\hat{\mathcal{L}}\smallsetminus(J\cup R(1))\), each ray landing at \(z\) meets the \(\mathcal{L}\)-arc through \(z\) orthogonally, so it can be extended ever so slightly past its \(z\)-end to become a good transversal with \(\Sigma\cap\mathcal{L}_{\leq 1}=\{z\}\). This proves part (i) and part (ii) for \(1\)-chains.
If \(z_{1},\ldots,z_{n}\) lie on an \(n\)-chain \(\eta_{1},\ldots,\eta_{n}\) for \(n\geq 2\), take a good transversal \(\Sigma\) with \(\Sigma\cap\mathcal{L}_{\leq 1}=\{z_{1}\}\) as above. It is then easy to extend \(\Sigma\) smoothly all the way inside \(\Delta_{\eta_{n}}\), crossing \(\eta_{j}\) once transversally at \(z_{j}\) for \(2\leq j\leq n\). This proves part (ii) for \(n\geq 2\).
Part (iii) follows from the fact that distinct "external rays" are disjoint.
### Linked and unlinked pairs
Let \(D\) be a round disk in \(\mathbb{C}\). Pairs \((a_{1},a_{2}),(b_{1},b_{2})\) of distinct points on the boundary circle \(\partial D\) are said to be _linked_ if \(b_{1}\) and \(b_{2}\) lie in different connected components of \(\partial D\smallsetminus\{a_{1},a_{2}\}\). Otherwise, \((a_{1},a_{2}),(b_{1},b_{2})\) are called _unlinked_. A collection of pairs on \(\partial D\) is unlinked if every two pairs in the collection are unlinked. It is customary to represent a pair \((a_{1},a_{2})\in\partial D\) by the hyperbolic geodesic in \(D\) with endpoints at \(a_{1}\) and \(a_{2}\). An unlinked collection is then visualized as one whose representative geodesics are pairwise disjoint.
If \((a_{1},a_{2}),(b_{1},b_{2})\) on \(\partial D\) are linked, any two paths in \(D\) that connect \(a_{1}\) to \(a_{2}\) and \(b_{1}\) to \(b_{2}\) must intersect. This is an easy consequence of the Jordan curve theorem. The following lemma is a generalization of this fact. Recall that a point \(z\) on the boundary
of a simply connected domain \(U\subsetneq\mathbb{C}\) is _uniaccessible_ if \(\partial U\smallsetminus\{z\}\) is connected. Equivalently, if for any base point \(z_{0}\in U\) there is a unique up to homotopy arc in \(U\) that connects \(z_{0}\) to \(z\).
**Lemma 5.3**.: _Suppose \((a_{1},a_{2}),(b_{1},b_{2})\) on \(\partial D\) are linked. Let \(U\) be any simply connected domain with \(D\subset U\subset\mathbb{C}\) such that \(a_{1},a_{2},b_{1},b_{2}\) are uniaccessible points of \(\partial U\). Then any two paths in \(U\) that connect \(a_{1}\) to \(a_{2}\) and \(b_{1}\) to \(b_{2}\) must intersect._
The assumptions that \(D\subset U\) and \(a_{1},a_{2},b_{1},b_{2}\) are uniaccessible are both necessary; compare Fig. 9.
Proof.: Let \(w\) be the center of \(D\) and take a conformal isomorphism \(\phi:(U,w)\stackrel{{\cong}}{{\longrightarrow}}(\mathbb{D},0)\). By elementary conformal mapping theory the four radial lines in \(U\) starting at \(w\) and landing on \(a_{1},a_{2},b_{1},b_{2}\) map under \(\phi\) to four disjoint paths in \(\mathbb{D}\) starting at \(0\) and landing at distinct points \(a_{1}^{\prime},a_{2}^{\prime},b_{1}^{\prime},b_{2}^{\prime}\). The pairs \((a_{1}^{\prime},a_{2}^{\prime}),(b_{1}^{\prime},b_{2}^{\prime})\) on the unit circle \(\partial\mathbb{D}\) are linked because \(\phi\) preserves the cyclic order of the radial lines near \(w\). Now since \(a_{1},a_{2},b_{1},b_{2}\) are uniaccessible, any two paths in \(U\) that connect \(a_{1}\) to \(a_{2}\) and \(b_{1}\) to \(b_{2}\) map under \(\phi\) to two paths in \(\mathbb{D}\) that connect the same pairs \((a_{1}^{\prime},a_{2}^{\prime})\) and \((b_{1}^{\prime},b_{2}^{\prime})\), and the result follows.
### The order and structure of homoclinic arcs
The proof of Theorem B will be based on a series of statements about the order of the homoclinics that are based at a given point in \(\mathcal{L}\cap J\) (Corollaries 5.5-5.7). We derive these statements from the following topological result which will also be used repeatedly in SS6:
**Theorem 5.4**.: _Fix \(w\in\mathcal{L}\cap J\) and \(h\geq 1\). Let \(\eta_{0}<\cdots<\eta_{h+1}\) be any collection of \(\mathcal{L}\)-arcs such that \(\eta_{1},\ldots,\eta_{h}\) are homoclinics based at \(w\) and \(w^{+}(\eta_{0})=w^{-}(\eta_{h+1})=w\) (so \(\eta_{0},\eta_{h+1}\) may or may not be homoclinics). Assume further that the homoclinics in this collection form a union of chains. Take a sufficiently small good disk \(D\) centered at \(w\) for the collection \(\{\eta_{0},\ldots,\eta_{h+1}\}\) and let_
\[v_{j}:=w^{+}(\eta_{j},D)\quad\text{and}\quad u_{j+1}:=w^{-}(\eta_{j+1},D)\qquad (0\leq j\leq h),\]
Figure 9. Examples of simply connected domains \(U\) that violate the conclusion of Lemma 5.3. In both cases there are non-intersecting paths that connect linked pairs on \(\partial D\cap\partial U\).
_so \(v_{0}<u_{1}<v_{1}<\dots<u_{h}<v_{h}<u_{h+1}\). Then the set of pairs_
\[\{(v_{0},u_{1}),\ (v_{1},u_{2}),\ \dots,\ (v_{h},u_{h+1})\}\]
_on \(\partial D\) is unlinked._
_Proof_. For each \(0\leq j\leq h+1\) choose a point \(z_{j}\in\eta_{j}\), with \(z_{h+1}<R(1)\). By Lemma 5.2 we can take pairwise disjoint good transversals \(\Sigma_{1},\dots,\Sigma_{r}\) so that \(\bigcup_{i=1}^{r}\Sigma_{i}\cap\mathcal{L}_{\leq 1}=\{z_{0},\dots,z_{h+1}\}\). By choosing the good disk \(D\) sufficiently small we can guarantee that the \(\Sigma_{i}\) are disjoint from \(\overline{D}\) and that
\[z_{0}<v_{0}<u_{1}<z_{1}<v_{1}<\dots<u_{h}<z_{h}<v_{h}<u_{h+1}<z_{h+1}<R(1). \tag{5.1}\]
By transversality, for all large \(n\) the ray \(R_{n}\) meets \(\partial D\) at nearby points
\[v_{n,j}:=R_{n}(t_{n,j})\quad\text{and}\quad u_{n,j+1}:=R_{n}(s_{n,j+1})\qquad( 0\leq j\leq h).\]
Here
\[\langle t_{n,0}\rangle<\langle s_{n,1}\rangle<\langle t_{n,1}\rangle<\dots< \langle s_{n,h}\rangle<\langle t_{n,h}\rangle<\langle s_{n,h+1}\rangle\]
are the respective preimages of \(v_{0},u_{1},v_{1},\dots,u_{h},v_{h},u_{h+1}\) under the homeomorphism \(\Pi:S\to\mathcal{L}\smallsetminus(J\cup\{\infty\})\) of SS4.1. Complete the construction at the two ends by choosing \(u_{0}:=\Pi(\langle s_{n,0}\rangle)\in\eta_{0}\) and \(v_{h+1}:=\Pi(\langle t_{n,h+1}\rangle)\in\eta_{h+1}\) such that \(u_{0}<z_{0}\) and \(z_{h+1}<v_{h+1}<R(1)\), and set \(u_{n,0}:=R_{n}(s_{n,0}),v_{n,h+1}:=R_{n}(t_{n,h+1})\) (see Fig. 10).
Figure 10. Illustration of the proof of Theorem 5.4. Left: The subarcs \(\tilde{\eta}_{j}\) of \(\eta_{j}\) are shown in red and the good transversals \(\Sigma_{i}\) in black. The union of red and black is the set \(E\). Right: The arcs in blue are segments of the external ray \(R_{n}\) for a large \(n\) that uniformly approximate the \(\tilde{\eta}_{j}\). The union of blue and black is the set \(E_{n}\). Both \(\mathbb{C}\smallsetminus E\) and \(\mathbb{C}\smallsetminus E_{n}\) are simply connected.
For \(0\leq j\leq h+1\) let \(\tilde{\eta}_{j}\) denote the subarc of \(\eta_{j}\) that joins \(u_{j}\) to \(v_{j}\). The closed set
\[E:=\bigcup_{i=1}^{r}\Sigma_{i}\cup\bigcup_{j=0}^{h+1}\tilde{\eta}_{j}\]
has simply connected complement in \(\mathbb{C}\) (this is the reason why we introduced the good transversals \(\Sigma_{i}\)). For each \(0\leq j\leq h+1\) the ray segment \(R_{n}([s_{n,j},t_{n,j}])\) that joins \(u_{n,j}\) to \(v_{n,j}\) tends to the subarc \(\tilde{\eta}_{j}\) in \(C^{\infty}\)-topology as \(n\to\infty\). Thus, for large \(n\) the closed set
\[E_{n}:=\bigcup_{i=1}^{r}\Sigma_{i}\cup\bigcup_{j=0}^{h+1}R_{n}([s_{n,j},t_{n, j}])\]
is \(C^{\infty}\)-close to \(E\). Since all the intersections in \(E\) are transversal, it follows that the complement \(\mathbb{C}\setminus E_{n}\) is also simply connected (see Fig. 10).
Now suppose the pairs \((v_{j},u_{j+1})\) and \((v_{k},u_{k+1})\) are linked for some \(0\leq j<k\leq h\). Then for all large \(n\) the pairs \((v_{n,j},u_{n,j+1})\) and \((v_{n,k},u_{n,k+1})\) are linked as well. At least one of the open ray segments \(R_{n}(]t_{n,j},s_{n,j+1}[)\) between \(v_{n,j}\) and \(u_{n,j+1}\) or \(R_{n}(]t_{n,k},s_{n,k+1}[)\) between \(v_{n,k}\) and \(u_{n,j+k}\) must meet \(E_{n}\); otherwise by Lemma 5.3 these ray segments would have to intersect, which is impossible. Since these ray segments are clearly disjoint from the ray segments in \(E_{n}\), one of them must intersect \(\bigcup_{i=1}^{r}\Sigma_{i}\). In other words, for all large \(n\) the ray \(R_{n}\) intersects \(\bigcup_{i=1}^{r}\Sigma_{i}\) at some point \(R_{n}(\lambda_{n})\), where
\[t_{n,j}<\lambda_{n}<s_{n,j+1}\qquad\text{or}\qquad t_{n,k}<\lambda_{n}<s_{n,k +1}. \tag{5.2}\]
Any accumulation point \(\tilde{z}\) of the sequence \(\{R_{n}(\lambda_{n})\}\) must then belong to \(\bigcup_{i=1}^{r}\Sigma_{i}\cap\mathcal{L}_{\leq 1}=\{z_{0},\ldots,z_{h+1}\}\). But then Lemma 4.4 together with (5.2) implies that \(v_{j}\leq\tilde{z}\leq u_{j+1}\) or \(v_{k}\leq\tilde{z}\leq u_{k+1}\), contradicting (5.1).
We now gather several corollaries of Theorem 5.4.
**Corollary 5.5**.: _Two distinct \(\mathcal{L}\)-arcs in a given parabolic basin must be homoclinics belonging to the same earring. In particular, every parabolic basin contains at most one earring of homoclinics._
Proof.: We already know from Corollary 4.6(i) that a parabolic basin \(B=P^{\circ q}(B)\) contains at most one heteroclinic. Thus, we must rule out a homoclinic/heteroclinic pair or a homoclinic/homoclinic pair in different earrings in \(B\). Assume by way of contradiction that \(B\) contains a homoclinic \(\eta\) and an \(\mathcal{L}\)-arc \(\gamma\) not in the earring of \(\eta\). Without loss of generality we can take \(\eta\), and \(\gamma\) if it is also a homoclinic, to be the outermost elements in their respective earrings. Set \(w:=w^{+}(\gamma)=w^{+}(\eta)\) and let \(\xi\) be the unique heteroclinic with \(w^{-}(\xi)=w\), or \(\xi=R\) if \(w=w_{0}\). By Corollary 4.6(ii), \(\gamma<\eta<\xi\) if \(\gamma\) is a heteroclinic. We may assume the same order even if \(\gamma\) is a homoclinic (simply swap \(\eta\) and \(\gamma\) if necessary).
Take a small good disk \(D\) centered at \(w\) for the collection \(\{\gamma,\eta,\xi\}\) and set
\[v_{0}:=w^{+}(\gamma,D),\quad u_{1}:=w^{-}(\eta,D),\quad v_{1}:=w^{+}(\eta,D), \quad u_{2}:=w^{-}(\xi,D).\]
Since \(\gamma\) and \(\xi\) are not in the earring of \(\eta\), neither of the points \(v_{0},u_{2}\) belongs to the arc \(I_{\eta}\subset\partial D\) bounded by \(u_{1},v_{1}\) (property (P2) of good disks in SS5.1). Moreover, \(\gamma,\eta\) are in the same basin so \(v_{0},v_{1}\) are asymptotically close to the same attracting direction, while \(u_{2}\) is asymptotically close to a repelling direction. Thus, the pairs \((v_{0},u_{1}),(v_{1},u_{2})\) must be linked. This contradicts Theorem 5.4.
The next corollary shows that the intrinsic potential order on the set of homoclinics in an earring is compatible with the order coming from embedding in the plane:
**Corollary 5.6**.: _Suppose \(\gamma,\eta\) are homoclinics in the same earring, with \(\Delta_{\eta}\subset\Delta_{\gamma}\). Then, \(\eta<\gamma\)._
Proof.: Label the homoclinics in the earring \(\eta_{1},\eta_{2},\eta_{3},\ldots\) so that \(\Delta_{\eta_{1}}\supset\Delta_{\eta_{2}}\supset\Delta_{\eta_{3}}\supset\cdots\). It suffices to show that \(\eta_{j}<\eta_{j-1}\) for all \(j\). Suppose there is a least index \(n\geq 2\) such that the opposite order \(\eta_{n-1}<\eta_{n}\) holds. Then \(\eta_{n}\) has an immediate predecessor in the \(n\)-chain \(\{\eta_{1},\ldots,\eta_{n}\}\) with respect to \(<\), i.e., there is a unique \(1\leq k\leq n-1\) such that \(\eta_{k}<\eta_{n}\) and no other \(\eta_{j}\) comes between \(\eta_{k},\eta_{n}\). We consider two cases:
_Case 1._\(\eta_{n}\) has an immediate successor in this \(n\)-chain, i.e., there is a unique \(1\leq\ell\leq n-1\) such that \(\eta_{n}<\eta_{\ell}\) and no other \(\eta_{j}\) comes between \(\eta_{n},\eta_{\ell}\). Note that \(k>\ell\) by minimality of \(n\). Let \(w\) be the point where the earring is based at. Take a sufficiently small good disk \(D\) centered at \(w\) for the collection \(\{\eta_{1},\ldots,\eta_{n}\}\) and set
\[v_{0}:=w^{+}(\eta_{k},D),\quad u_{1}:=w^{-}(\eta_{n},D),\quad v_{1}:=w^{+}( \eta_{n},D),\quad u_{2}:=w^{-}(\eta_{\ell},D).\]
Since \(\Delta_{\eta_{n}}\subset\Delta_{\eta_{k}}\subset\Delta_{\eta_{\ell}}\), we have \(I_{\eta_{n}}\subset I_{\eta_{k}}\subset I_{\eta_{\ell}}\), which shows that the pairs \((v_{0},u_{1}),(v_{1},u_{2})\) must be linked. This contradicts Theorem 5.4.
_Case 2._\(\eta_{n}\) has no successor in this \(n\)-chain. Let \(\xi\) be the unique heteroclinic with \(w^{-}(\xi)=w\), or \(\xi=R\) if \(w=w_{0}\). By Corollary 4.6(ii), \(\eta_{j}<\xi\) for all \(1\leq j\leq n\). Take a sufficiently small good disk \(D\) centered at \(w\) for the collection \(\{\eta_{1},\ldots,\eta_{n},\xi\}\) and set
\[v_{0}:=w^{+}(\eta_{k},D),\quad u_{1}:=w^{-}(\eta_{n},D),\quad v_{1}:=w^{+}( \eta_{n},D),\quad u_{2}:=w^{-}(\xi,D).\]
Since \(\Delta_{\eta_{n}}\subset\Delta_{\eta_{k}}\), we have \(I_{\eta_{n}}\subset I_{\eta_{k}}\) but \(u_{2}\notin I_{\eta_{k}}\) (properties (P1) and (P2) of good disks in SS5.1). It follows that the pairs \((v_{0},u_{1}),(v_{1},u_{2})\) are linked, which again contradicts Theorem 5.4.
**Corollary 5.7**.: _Suppose there is an earring based at \(w\in\mathcal{L}\cap J\) containing at least two distinct homoclinics. Then there can be no heteroclinic \(\gamma\) with \(w^{+}(\gamma)=w\)._
Proof.: Suppose such \(\gamma\) exists. Let \(\xi\) be the outermost homoclinic in the given earring and \(\eta\) be the adjacent homoclinic inside \(\gamma\). A combination of Corollaries 4.6(ii) and
\(5.6\) then shows that \(\gamma<\eta<\xi\). Take a sufficiently small good disk \(D\) centered at \(w\) for the collection \(\{\gamma,\eta,\xi\}\) and set
\[v_{0}:=w^{+}(\gamma,D),\ \ u_{1}:=w^{-}(\eta,D),\ \ v_{1}:=w^{+}(\eta,D),\ \ u_{2}:=w^{-}(\xi,D),\ \ v_{2}:=w^{+}(\xi,D).\]
We have \(I_{\eta}\subset I_{\xi}\) but \(v_{1}\notin I_{\xi}\). It follows that the pairs \((v_{0},u_{1}),(v_{1},u_{2})\) are linked, contradicting Theorem 5.4.
Proof of Theorem B.: By Theorem A the complement \(\mathcal{L}\smallsetminus\mathcal{L}^{*}\) is either empty or consists of finitely many earrings attached to the points of \(\mathcal{L}\cap J\). By Corollary 5.5 none of these earrings can share its basin with another earring or heteroclinic arc. By Corollary 5.7 every earring with at least two homoclinics must be based at \(w_{N}=\zeta_{\infty}\).
### Comment on the order of homoclinic arcs
As the final word of this section, let us comment on the relative order of the homoclinics in two earrings based at the same point. Suppose \(\{\eta_{j}\}\) and \(\{\xi_{j}\}\) are distinct earrings based at \(w_{N}=\zeta_{\infty}\), labeled so that \(\eta_{j+1}\) is inside \(\eta_{j}\) and \(\xi_{j+1}\) is inside \(\xi_{j}\) for all \(j\). By Corollary 5.6, \(\eta_{j+1}<\eta_{j}\) and \(\xi_{j+1}<\xi_{j}\) for all \(j\). Without loss of generality assume \(\xi_{1}<\eta_{1}\). Then an inductive argument using Theorem 5.4, which we shall omit, shows that the two earrings must _order-interlace_:
\[\cdots<\xi_{3}<\eta_{3}<\xi_{2}<\eta_{2}<\xi_{1}<\eta_{1}.\]
An explicit example of this phenomenon, communicated to us by H. Inou, is provided by suitable perturbations of the cubic \(P(z)=z+z^{3}\) of the form \(P_{n}(z)=\lambda_{n}z+z^{3}\), where \(|\lambda_{n}|>1\) and \(\lambda_{n}\to 1\) tangentially, as illustrated in Fig. 11. For large \(n\) the fixed rays \(R_{P_{n},0}\) follow a double spiral towards their landing point at \(0\). The Hausdorff limit \(\mathcal{L}=\lim_{n\to\infty}\overline{R_{P_{n},0}}\) consists of \(\overline{R_{P,0}}\) together with two order-interlacing earrings, each contained in one of the invariant basins of the parabolic fixed point at \(0\).
## 6. Proof of Theorem C
It is easy to see that the period of a point in \(\mathcal{L}\cap J\) can be a proper divisor of the ray period \(q\). As the simplest example, suppose \(\zeta_{0}\) is a repelling fixed point of \(P\) of combinatorial rotation number \(\neq 0\) and \(R_{P,\rho}\) is a periodic \(q\) ray landing at \(\zeta_{0}\). Then any sequence \(P_{n}\to P\) will produce a tame convergence \(\overline{R_{P_{n},\rho}}\to\overline{R_{P,\rho}}\) (Theorem 2.2). The same holds if \(\zeta_{0}\) is a non-degenerate parabolic point and the sequence of perturbations is chosen such that their multiplier at \(\zeta_{0}\) tends to the corresponding root of unity nontangentially (work in progress; see the introduction). Figure 12 illustrates a more subtle example of this drop in period involving a wild convergence.
### The iterated images of \(\mathcal{L}\)
Since \(P^{\circ q}\) acts bijectively on \(\mathcal{L}\), each restriction \(P^{\circ i}:\mathcal{L}\to P^{\circ i}(\mathcal{L})\) must be bijective. Here \(P^{\circ i}(\mathcal{L})\) coincides with the Hausdorff limit of the sequence of periodic rays \(P^{\circ i}_{n}(R_{n})\). Let \(\mathcal{L}\cap J=\{w_{0}=\zeta,\ldots,w_{N}=\zeta_{\infty}\}\) and \(\gamma_{1},\ldots,\gamma_{N}\) be the heteroclinic arcs in \(\mathcal{L}\), with \(\gamma_{j}\) joining \(w_{j}\) to \(w_{j-1}\) (we adopt the usual convention that if there are no heteroclinics then \(N=0\) so \(w_{0}=w_{N}\)). Then \(P^{\circ i}(\mathcal{L})\cap J\)
consists of the points \(P^{\circ i}(w_{j})\), and the arcs \(P^{\circ i}(\gamma_{j})\) are the heteroclinics in \(P^{\circ i}(\mathcal{L})\). Moreover, if \(\eta\) is a homoclinic arc of \(\mathcal{L}\) based at \(w_{j}\), then \(P^{\circ i}(\eta)\) is a homoclinic arc of \(P^{\circ i}(\mathcal{L})\) based at \(P^{\circ i}(w_{j})\). This shows that the Hausdorff limits \(P^{\circ i}(\mathcal{L})\) have the same number and combinatorial structure of heteroclinics and earrings as \(\mathcal{L}\).
Even though the \(q\) rays \(R,P(R),\ldots,P^{\circ q-1}(R)\) are always disjoint, the Hausdorff limits \(\mathcal{L},P(\mathcal{L}),\ldots,P^{\circ q-1}(\mathcal{L})\) may indeed intersect, as the example in Fig. 12 illustrates. We will show in Corollary 6.4 that such intersection can only occur at a unique point of the Julia set. But even without this result, one thing is clear from the \(P^{\circ q}\)-invariance of these sets:
\((\dagger)\) **Simple observation.** If \(z\in P^{\circ i}(\mathcal{L})\cap P^{\circ j}(\mathcal{L})\) is not in the Julia set, and if \(\gamma\) and \(\eta\) are the \(P^{\circ i}(\mathcal{L})\)-arc and \(P^{\circ j}(\mathcal{L})\)-arc through \(z\), then \(w^{+}(\gamma)=w^{+}(\eta)\in P^{\circ i}(\mathcal{L})\cap P^{\circ j}(\mathcal{ L})\).
From this observation it is easy to conclude that \(\mathcal{L},P(\mathcal{L}),\ldots,P^{\circ q-1}(\mathcal{L})\) are disjoint if and only if \(w_{0},\ldots,w_{N}\) have exact period \(q\).
**Lemma 6.1**.: _Either all \(w_{0},\ldots,w_{N}\) have period \(q\), or there is a unique \(0\leq\ell\leq N\) such that the period of \(w_{\ell}\) is a proper divisor of \(q\)._
Proof.: Let \(\ell\) be the smallest index for which \(w_{\ell}\) has period \(p=q/k\) with \(k>1\). If \(\ell=N\) we are done, so let us assume \(N\geq 1\) and \(0\leq\ell\leq N-1\). Split the spine \(\mathcal{L}^{*}\) into two arcs: \(\Gamma\) from \(w_{\ell}\) to \(\infty\) and \(\Lambda\) from \(w_{\ell}\) to \(w_{N}\), so \(\Gamma\cap\Lambda=\{w_{\ell}\}\). For \(0\leq i\leq k-1\) define \(\Gamma^{i}:=P^{\circ i}p(\Gamma),\Lambda^{i}:=P^{\circ i}p(\Lambda)\). The minimality of \(\ell\) implies that the
Figure 11. Perturbations of the cubic \(P(z)=z+z^{3}\) with a degenerate parabolic fixed point at \(0\). The Hausdorff limit \(\mathcal{L}\) of the closed ray at angle \(0\) contains two order-interlacing earrings based at the same point. The Hausdorff limit of the closed ray at angle \(1/2\) contains a similar pair of earrings (not shown).
\(k\) arcs \(\Gamma=\Gamma^{0},\Gamma^{1},\ldots,\Gamma^{k-1}\) are pairwise disjoint except at their end point \(w_{\ell}\); otherwise \(\Gamma^{0}\) would intersect some \(\Gamma^{i}\) at a point other than \(w_{\ell}\), which by the simple observation \((\dagger)\) would imply some \(w_{j}\) with \(j<\ell\) being in \(\Gamma^{0}\cap\Gamma^{i}\), hence having period \(<q\). The arcs \(\Gamma^{0},\Gamma^{1},\ldots,\Gamma^{k-1}\) are permuted cyclically under \(P^{\circ\not{p}}\) in the manner determined by the combinatorial rotation number of \(w_{\ell}\), which is necessarily of the form \(r/k\) with \((r,k)=1\) since \(\Gamma^{0}\) contains \(R\) which under the action of \(P^{\circ\not{p}}\) has period \(k\). In particular, \((P^{\circ\not{p}})^{\prime}(w_{\ell})=\mathrm{e}^{2\pi\mathrm{i}r/k}\).
By the assumption \(\ell\leq N-1\) there is a unique heteroclinic \(\gamma_{\ell}\subset\Lambda=\Lambda^{0}\) with \(w^{+}(\gamma_{\ell})=w_{\ell}\). Setting \(\gamma^{i}:=P^{\circ\not{p}}(\gamma_{\ell})\subset\Lambda^{i}\), it follows that \(\gamma_{\ell}=\gamma^{0},\gamma^{1},\ldots,\gamma^{k-1}\) have a common end point \(w_{\ell}\). By the same reasoning as above, none of these heteroclinics can intersect \(\Gamma^{0},\Gamma^{1},\ldots,\Gamma^{k-1}\) anywhere other than \(w_{\ell}\). As \(P^{\circ\not{p}}\) permutes \(\gamma^{0},\gamma^{1},\ldots,\gamma^{k-1}\) near \(w_{\ell}\) cyclically with the same combinatorial rotation number \(r/k\), each of the \(k\) sectors of \(\mathbb{C}\setminus(\Gamma^{0}\cup\Gamma^{1}\cup\ldots\cup\Gamma^{k-1})\) must contain exactly one of these heteroclinics. In particular, \(\gamma^{0},\gamma^{1},\ldots,\gamma^{k-1}\) are pairwise disjoint except at their common end point \(w_{\ell}\). It follows that each arc \(\Lambda^{i}\) is contained in the same sector of \(\mathbb{C}\setminus(\Gamma^{0}\cup\Gamma^{1}\cup\ldots\cup\Gamma^{k-1})\) as \(\gamma^{i}\), hence \(\Lambda^{0},\Lambda^{1},\ldots,\Lambda^{k-1}\) are also pairwise disjoint except at \(w_{\ell}\). Thus, the points \(w_{\ell+1},\ldots,w_{N}\in\Lambda^{0}\) have period \(k\) under \(P^{\circ\not{p}}\), i.e., period \(q\) under \(P\).
Figure 12. Perturbations of the cubic \(P(z)=-z+0.6\mathrm{i}\,z^{2}+z^{3}\) with a parabolic fixed point at \(0\). Here the Hausdorff limit \(\mathcal{L}\) of the closed ray at angle \(1/4\) (shown in green) contains a unique heteroclinic connecting the fixed point \(w_{0}=\zeta=0\) to the repelling period \(2\) point \(w_{1}=\zeta_{\infty}\). The Hausdorff limit of the closed ray at angle \(3/4\) is the image \(P(\mathcal{L})\) (shown in white). Notice the tame behavior of the period \(2\) rays at angle \(1/8,3/8\) (shown in blue).
**Lemma 6.2**.: _Let \(B=P^{\circ q}(B)\) be a parabolic basin of \(P\) that meets \(\mathcal{L}\). Then \(B\) has period \(q\). In particular, every \(\mathcal{L}\)-arc has period \(q\)._
Proof.: The result is clear if \(B\) is a basin of some \(w_{j}\in\mathcal{L}\cap J\) with period \(q\). Let us then assume there is a unique \(w_{\ell}\) with period \(p=q/k<q\) and \(B\) is a basin of \(w_{\ell}\). In this case, by the proof of Lemma 6.1, \(B\) must be contained in one of the \(k\) sectors of \(\mathbb{C}\setminus(\Gamma^{0}\cup\Gamma^{1}\cup\ldots\cup\Gamma^{k-1})\), so the action of \(P^{\circ p}\) on \(B\) must have the same combinatorial rotation number \(r/k\) as \(w_{\ell}\). In particular, the period of \(B\) under \(P^{\circ p}\) must be \(k\).
Here is a sharper statement about the period of \(\mathcal{L}\)-arcs:
**Lemma 6.3**.: _If \(\gamma\) is an \(\mathcal{L}\)-arc, none of the iterated images \(P(\gamma),\ldots,P^{\circ q-1}(\gamma)\) can be an \(\mathcal{L}\)-arc._
Proof.: Suppose \(\gamma^{\prime}=P^{\circ j}(\gamma)\) is an \(\mathcal{L}\)-arc for some \(1\leq j\leq q-1\). Take \(u=\Pi(\langle s_{n}\rangle)\in\gamma\) and let \(u^{\prime}=P^{\circ j}(u)=\Pi(\langle s^{\prime}_{n}\rangle)\in\gamma^{\prime}\). Then \(P^{\circ j}_{n}(R_{n}(s_{n}))\to u^{\prime}\) and \(R_{n}(s^{\prime}_{n})\to u^{\prime}\). By Lemma 2.4, the hyperbolic distance in the basin of infinity \(\Omega_{n}\) between \(P^{\circ j}_{n}(R_{n}(s_{n}))\) and \(R_{n}(s^{\prime}_{n})\) must tend to \(0\) as \(n\to\infty\). But this distance is
\[\operatorname{dist}_{\mathbb{C}\setminus\overline{\mathbb{D}}}(\mathrm{e}^{d ^{j}s_{n}+2\pi\mathrm{i}d^{j}\theta},\mathrm{e}^{s^{\prime}_{n}+2\pi\mathrm{i }\theta}),\]
which clearly tends to \(\infty\) since \(d^{j}\theta\neq\theta\pmod{\mathbb{Z}}\).
**Corollary 6.4** (Intersections of the images of \(\mathcal{L}\)).: _Let \(0\leq i<j\leq q-1\)._
1. _If all_ \(w_{0},\ldots,w_{N}\) _have period_ \(q\)_, then_ \(P^{\circ i}(\mathcal{L})\) _and_ \(P^{\circ j}(\mathcal{L})\) _are disjoint._
2. _If there is a unique_ \(w_{\ell}\) _with period_ \(p<q\)_, then_ \(P^{\circ i}(\mathcal{L})\) _and_ \(P^{\circ j}(\mathcal{L})\) _are disjoint unless_ \(i=j\pmod{p}\) _in which case_ \(P^{\circ i}(\mathcal{L})\cap P^{\circ j}(\mathcal{L})\) _is the single point_ \(P^{\circ i}(w_{\ell})\)_._
Proof.: We only need to treat case (ii) and rule out the possibility that \(P^{\circ i}(\mathcal{L})\) and \(P^{\circ j}(\mathcal{L})\) with \(i=j\pmod{p}\) might share a homoclinic arc \(\gamma\) based at \(P^{\circ i}(w_{\ell})\). But for any such \(\gamma\) both \(P^{\circ q-i}(\gamma)\) and \(P^{\circ q-j}(\gamma)\) would be \(\mathcal{L}\)-arcs, contrary to Lemma 6.3.
### \(\mathcal{L}\)-arcs in the same cycle of basins
Suppose \(B=P^{\circ q}(B)\) is a parabolic basin of \(w\in\mathcal{L}\cap J\) that meets \(\mathcal{L}\). If \(w\) has period \(q\), Corollary 6.4 shows that none of the iterated images \(P(B),\ldots,P^{\circ q-1}(B)\) can meet \(\mathcal{L}\). However, if \(w\) has period \(p<q\), then the union \(P^{\circ p}(B)\cup\cdots\cup P^{\circ(q-p)}(B)\) can _a priori_ meet \(\mathcal{L}\). Below we investigate this possibility in preparation for the proof of Theorem C.
**Standing assumptions**. For the remainder of this section up to the proof of Theorem C, we work under the following hypotheses:
\(\bullet\)\(N\geq 1\) and there is a unique \(0\leq\ell\leq N-1\) for which \(w_{\ell}\) has period \(p=q/k<q\), so \(w_{\ell}\neq w_{N}=\zeta_{\infty}\). Since the multiplier \((P^{\circ p})^{\prime}(w_{\ell})\) is a primitive \(k\)-th root of unity,
there is a unique integer \(1\leq j\leq k-1\) for which \((P^{\circ j\not{p}})^{\prime}(w_{\ell})=\mathrm{e}^{2\pi\mathrm{i}/k}\). We set
\[Q \coloneqq P^{\circ j\not{p}}, R^{i} \coloneqq Q^{\circ i}(R), \mathcal{L}^{i} \coloneqq Q^{\circ i}(\mathcal{L}),\] \[Q_{n} \coloneqq P^{\circ j\not{p}}_{n}, R^{i} \coloneqq Q^{\circ i}_{n}(R_{n}).\]
\(\bullet\) There is a cycle \(B,Q(B),\dots,Q^{\circ k-1}(B)\) of parabolic basins at \(w_{\ell}\) containing at least one homoclinic in \(\mathcal{L}\).
Sort all the homoclinic \(\mathcal{L}\)-arcs in the cycle \(B,Q(B),\dots,Q^{\circ k-1}(B)\) as \(\eta_{1}<\dots<\eta_{h}\). Note that by Theorem B and the assumption \(\ell\neq N\), each \(\eta_{j}\) is the sole homoclinic arc in its earring. By Corollary 5.5, \(\eta_{1},\dots,\eta_{h}\) belong to different parabolic basins in this cycle and in particular \(1\leq h\leq k\). Let \(\eta_{0}\) be the unique heteroclinic in \(\mathcal{L}\) such that \(w^{+}(\eta_{0})=w_{\ell}\), and \(\eta_{h+1}\) be the unique heteroclinic in \(\mathcal{L}\) such that \(w^{-}(\eta_{h+1})=w_{\ell}\), or \(\eta_{h+1}=R\) if \(\ell=0\). Then \(\eta_{0}<\eta_{1}<\dots<\eta_{h}<\eta_{h+1}\). This puts us in the situation of Theorem 5.4: If \(D\) is a sufficiently small good disk centered at \(w_{\ell}\) for the collection \(\{\eta_{0},\dots,\eta_{h+1}\}\), and if
\[v_{j} \coloneqq w^{+}(\eta_{j},D)\quad\text{and}\quad u_{j+1} \coloneqq w^{-}(\eta_{j+1},D)\qquad(0\leq j\leq h),\]
then the set of pairs
\[\Theta^{0}\coloneqq\{(v_{0},u_{1}),\ (v_{1},u_{2}),\ \dots,\ (v_{h},u_{h+1})\}\]
on \(\partial D\) is unlinked. More generally, for each \(0\leq i\leq k-1\) we can consider the \(\mathcal{L}^{i}\)-arcs \(\eta^{i}_{j}\coloneqq Q^{\circ i}(\eta_{j})\) in the same cycle \(B,Q(B),\dots,Q^{\circ k-1}(B)\), and the points
\[v^{i}_{j} \coloneqq w^{+}(\eta^{i}_{j},D)\quad\text{and}\quad u^{i}_{j+1} \coloneqq w^{-}(\eta^{i}_{j+1},D)\qquad(0\leq j\leq h)\]
(we may arrange the same \(D\) to be a good disk for \(\{\eta^{i}_{0},\dots,\eta^{i}_{h+1}\}\) for every \(i\)). We then form the set
\[\Theta^{i}\coloneqq\{(v^{i}_{0},u^{i}_{1}),\ (v^{i}_{1},u^{i}_{2}),\ \dots,\ (v^{i}_{h},u^{i}_{h+1})\}\]
of \(h+1\) unlinked pairs. Observe that since \(w_{\ell}\) is a fixed point of \(Q\) with multiplier \(\rho\coloneqq Q^{\prime}(w_{\ell})=\mathrm{e}^{2\pi\mathrm{i}/k}\), each \(\Theta^{i+1}\) is approximately the rotation \(\rho\Theta^{i}\) with an error of the order of \(o(\varepsilon)\), where \(\varepsilon>0\) is the radius of \(D\). Note also that by Lemma 6.3 all the arcs \(\eta^{i}_{j}\) are disjoint, hence all the points \(u^{i}_{j},v^{i}_{j}\) are distinct.
The following is a generalization of Theorem 5.4:
**Theorem 6.5**.: _The union \(\Theta^{0}\cup\Theta^{1}\cup\dots\cup\Theta^{k-1}\) is unlinked._
Proof.: It suffices to prove that for every \(1\leq i\leq k-1\) the union \(\Theta^{0}\cup\Theta^{i}\) is unlinked. The argument is similar Theorem 5.4, so we will be brief on the identical details. To ease the notation a bit, we will denote all the objects corresponding to \(\mathcal{L}=\mathcal{L}^{0}\) without a superscript \(0\) and those corresponding to \(\mathcal{L}^{i}\) with a superscript \(*\). Choose points
\(z_{j}\in\eta_{j}\) and \(z_{j}^{*}\in\eta_{j}^{*}\) for \(0\leq j\leq h+1\), with \(z_{h+1}<R(1)\) and \(z_{h+1}^{*}<R^{*}(1)\). We can find pairwise disjoint good transversals \(\Sigma_{1},\ldots,\Sigma_{r}\) for \(\mathcal{L}\cup\mathcal{L}^{*}\) such that
\[\bigcup_{j=1}^{r}\Sigma_{j}\cap(\mathcal{L}_{\leq 1}\cup\mathcal{L}_{\leq 1}^{*})= \{z_{0},\ldots,z_{h+1},z_{0}^{*},\ldots,z_{h+1}^{*}\}. \tag{6.1}\]
In fact, the union \(\hat{\mathcal{L}}\cup\hat{\mathcal{L}}^{*}\) is connected and full, so in the construction of good transversals in the proof of Lemma 5.2 we can use the "external rays" of the uniformization \((\hat{\mathbb{C}}\smallsetminus\overline{\mathbb{D}},\infty)\stackrel{{ \cong}}{{\longrightarrow}}(\hat{\mathbb{C}}\smallsetminus(\hat{ \mathcal{L}}\cup\hat{\mathcal{L}}^{*}),\infty)\). Choosing \(D\) sufficiently small guarantees that these transversals are disjoint from \(\overline{D}\) and that the relations
\[z_{0}<v_{0}<u_{1}<z_{1}<v_{1}<\cdots<u_{h}<z_{h}<v_{h}<u_{h+1}<z_{h+1}<R(1)\] \[z_{0}^{*}<v_{0}^{*}<u_{1}^{*}<z_{1}^{*}<v_{1}^{*}<\cdots<u_{h}^{* }<z_{h}^{*}<v_{h}^{*}<u_{h+1}^{*}<z_{h+1}^{*}<R^{*}(1) \tag{6.2}\]
hold. By transversality, we can find the approximating sequences
\[v_{n,j} :=R_{n}(t_{n,j})\to v_{j} u_{n,j+1}:=R_{n}(s_{n,j+1})\to u_{j+1}\] \[v_{n,j}^{*} :=R_{n}^{*}(t_{n,j}^{*})\to v_{j} u_{n,j+1}^{*}:=R_{n}^{*}(s_{n,j+1}^{*})\to u_{j+1}^{*}\]
on \(\partial D\) for \(0\leq j\leq h\). Choose \(u_{0}:=\Pi(\langle s_{n,0}\rangle)\in\eta_{0}\) and \(v_{h+1}:=\Pi(\langle t_{n,h+1}\rangle)\in\eta_{h+1}\) such that \(u_{0}<z_{0}\) and \(z_{h+1}<v_{h+1}<R(1)\), and set \(u_{n,0}:=R_{n}(s_{n,0}),v_{n,h+1}:=R_{n}(t_{n,h+1})\). Similarly, choose \(u_{0}^{*}:=\Pi(\langle s_{n,0}^{*}\rangle)\in\eta_{0}^{*}\) and \(v_{h+1}^{*}:=\Pi(\langle t_{n,h+1}^{*}\rangle)\in\eta_{h+1}^{*}\) such that \(u_{0}^{*}<z_{0}^{*}\) and \(z_{h+1}^{*}<v_{h+1}^{*}<R^{*}(1)\), and set \(u_{n,0}^{*}:=R_{n}^{*}(s_{n,0}^{*}),v_{n,h+1}^{*}:=R_{n}^{*}(t_{n,h+1}^{*})\).
For \(0\leq j\leq h+1\) let \(\tilde{\eta}_{j}\) denote the subarc of \(\eta_{j}\) that joins \(u_{j}\) to \(v_{j}\). Define the subarc \(\tilde{\eta}_{j}^{*}\) of \(\eta_{j}^{*}\) analogously. The closed set
\[E:=\bigcup_{j=1}^{r}\Sigma_{j}\cup\bigcup_{j=0}^{h+1}\left(\tilde{\eta}_{j} \cup\tilde{\eta}_{j}^{*}\right)\]
has simply connected complement in \(\mathbb{C}\). For each \(0\leq j\leq h+1\) we have \(R_{n}([s_{n,j},t_{n,j}])\to\tilde{\eta}_{j}\) and \(R_{n}^{*}([s_{n,j}^{*},t_{n,j}^{*}])\to\tilde{\eta}_{j}^{*}\) in \(C^{\infty}\)-topology as \(n\to\infty\). Thus, for large \(n\) the closed set
\[E_{n}:=\bigcup_{j=1}^{r}\Sigma_{j}\cup\bigcup_{j=0}^{h+1}\left(R_{n}([s_{n,j}, t_{n,j}])\cup R_{n}^{*}([s_{n,j}^{*},t_{n,j}^{*}])\right)\]
is \(C^{\infty}\)-close to \(E\). Since all the intersections in \(E\) are transversal, the complement \(\mathbb{C}\smallsetminus E_{n}\) must also be simply connected.
Now suppose there is a pair \((v_{a},u_{a+1})\in\Theta\) that is linked with a pair \((v_{b}^{*},u_{b+1}^{*})\in\Theta^{*}\). Then for all large \(n\) the pairs \((v_{n,a},u_{n,a+1})\) and \((v_{n,b}^{*},u_{n,b+1}^{*})\) are linked as well. It follows that at least one of the open ray segments \(R_{n}(]t_{n,a},s_{n,a+1}[)\) or \(R_{n}^{*}(]t_{n,b}^{*},s_{n,i+b}^{*}[)\) must meet \(E_{n}\), for otherwise by Lemma 5.3 these ray segments would have to intersect, which is impossible since \(R_{n}\) and \(R_{n}^{*}\) are disjoint. We conclude that either
\(R_{n}(]t_{n,a},s_{n,a+1}[)\) or \(R_{n}^{*}(]t_{n,b}^{*},s_{n,b+1}^{*}[)\) must intersect \(\bigcup_{j=1}^{r}\Sigma_{j}\) for infinitely many values of \(n\). In the former case we obtain an accumulation point \(\tilde{z}\in\bigcup_{j=1}^{r}\Sigma_{j}\cap\mathcal{L}_{\leq 1}\) of a sequence \(\{R_{n}(\lambda_{n})\}\), where \(t_{n,a}<\lambda_{n}<s_{n,a+1}\). By (6.1), \(\tilde{z}\in\{z_{0},\ldots,z_{h+1}\}\) while by Lemma 4.4, \(v_{a}\leq\tilde{z}\leq u_{a+1}\). This contradicts (6.2). In the latter case we obtain an accumulation point \(\tilde{z}\in\bigcup_{j=1}^{r}\Sigma_{j}\cap\mathcal{L}_{\leq 1}^{*}\) of a sequence \(\{R_{n}^{*}(\lambda_{n})\}\), where \(t_{n,b}^{*}<\lambda_{n}<s_{n,b+1}^{*}\). By (6.1), \(\tilde{z}\in\{z_{0}^{*},\ldots,z_{h+1}^{*}\}\) while by Lemma 4.4, \(v_{b}^{*}\leq\tilde{z}\leq u_{b+1}^{*}\). This, again, contradicts (6.2).
Recall that \(\varepsilon>0\) is the radius of the good disk \(D\) centered at \(w_{\ell}\) used to define the sets \(\Theta^{0},\ldots,\Theta^{k-1}\). It will be convenient to represent \(\partial D\) in the additive model of the unit circle by identifying \(w_{\ell}+\varepsilon\mathrm{e}^{2\pi it}\in\partial D\) with \(t\in\mathbb{T}:=\mathbb{R}/\mathbb{Z}\). This identification involves rescaling by a factor of \(1/\varepsilon\), so it turns every \(o(\varepsilon)\) estimate on \(\partial D\) to an \(o(1)\) estimate in the additive model \(\mathbb{T}\) as \(\varepsilon\to 0\). To simplify the formulas that will follow, we write
\[x\stackrel{{\circ}}{{=}}y\qquad\text{whenever}\qquad x=y+o(1).\]
By the _distance_\(\delta(a,b)\) between \(a,b\in\mathbb{T}\) we mean the normalized Lebesgue measure of the shorter arc of \(\mathbb{T}\) between \(a,b.\) Choosing suitable representatives, it is clear that \(\delta(a,b)=|a-b|\leq 1/2\).
Let \(\nu\geq 1\) be the degeneracy order of \(w_{\ell}\) as a fixed point of \(Q\). There are \(\nu k\) attracting and \(\nu k\) repelling directions of \(w_{\ell}\) which intersect \(\partial D\cong\mathbb{T}\) at \(2\nu k\) equally spaced alternating points that we mark as \(\oplus\) for attracting and \(\ominus\) for repelling. Thus, every \(v_{j}^{i}\) is \(o(1)\)-close to a \(\oplus\) and every \(u_{j}^{i}\) is \(o(1)\)-close to a \(\ominus\). This yields
\[\delta(u_{j}^{i},v_{j}^{i})\stackrel{{\circ}}{{=}}\frac{1}{2\nu k} \tag{6.3}\]
\[\delta(v_{j}^{i},u_{j+1}^{i})\stackrel{{\circ}}{{=}}\text{an odd multiple of }\frac{1}{2\nu k}. \tag{6.4}\]
Recalling that \(\Theta^{i+1}\) is \(o(1)\)-close to \(\rho\Theta^{i}\), where \(\rho=Q^{\prime}(w_{\ell})=\mathrm{e}^{2\pi\mathrm{i}/k}\), we also have
\[u_{j}^{i+1}\stackrel{{\circ}}{{=}}u_{j}^{i}+\frac{1}{k}\qquad\text {and}\qquad v_{j}^{i+1}\stackrel{{\circ}}{{=}}v_{j}^{i}+\frac{1}{ k}. \tag{6.5}\]
**Lemma 6.6**.: _For any \(0\leq j\leq h\) the pair \((v_{j}^{i},u_{j+1}^{i})\in\Theta^{i}\) satisfies_
\[\delta(v_{j}^{i},u_{j+1}^{i})\leq\frac{1}{k}-\frac{1}{2\nu k}+o(1). \tag{6.6}\]
_If \(1\leq j\leq h-1\), or if \(j=0\) and \(\eta_{0}^{i}\) belongs to the cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\), then_
\[\delta(v_{j}^{i},u_{j+1}^{i})\stackrel{{\circ}}{{=}}\frac{1}{k}- \frac{1}{2\nu k}. \tag{6.7}\]
Proof.: The proof of (6.6) is based on the simple observation that a pair of distance \(>1/k\) and its rotated image by \(1/k\) of a turn must be linked. There is nothing to prove if \(k=2\), so suppose (6.6) is false and \(k\geq 3\). Then, by (6.4), we would have the lower
bound \(\delta(v^{i}_{j},u^{i}_{j+1})\geq 1/k+1/(2\nu k)+o(1)\). In view of (6.5), this lower bound would imply that \((v^{i}_{j},u^{i}_{j+1})\) and \((v^{i+1}_{j},u^{i+1}_{j+1})\) are linked, contradicting Theorem 6.5.
To prove (6.7), recall that the attracting directions corresponding to the cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\) form \(k\) equally spaced \(\oplus\) points on \(\mathbb{T}\). If \(1\leq j\leq h-1\), or if \(j=0\) and \(\eta^{i}_{0}\) belongs to the cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\), then \(\eta^{i}_{j}\) and \(\eta^{i}_{j+1}\) belong to different basins in this cycle, so \(\delta(v^{i}_{j},v^{i}_{j+1})\geq 1/k+o(1)\). In view of (6.3), this gives the lower bound \(\delta(v^{i}_{j},u^{i}_{j+1})\geq 1/k-1/(2\nu k)+o(1)\). Combining this with the upper bound (6.6), we obtain (6.7).
The following is the main technical result of this section:
**Theorem 6.7**.: _The cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\) meets only one \(\mathcal{L}\)-arc, which is necessarily the homoclinic \(\eta_{1}\) based at \(w_{\ell}\)._
Thus, the sequence of \(\mathcal{L}\)-arcs used to define \(\Theta^{0}\) reduces to \(\eta_{0}<\eta_{1}<\eta_{2}\) (so \(h=1\)), and the heteroclinic \(\eta_{0}\) is not in the cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\). As a result, each of the \(k\) basins in this cycle contains the homoclinic \(\eta^{i}_{1}\) for a unique \(0\leq i\leq k-1\). In particular, another cycle of basins at \(w_{\ell}\) is needed to accommodate the heteroclinic arcs \(\eta_{0},\eta^{1}_{0},\ldots,\eta^{k-1}_{0}\), so the degeneracy order \(\nu\) of \(w_{\ell}\) must be at least \(2\).
Proof.: We know that the cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\) contains the homoclinic \(\eta_{1}\). But there is a possibility that this cycle contains the heteroclinic \(\eta_{0}\) or a second homoclinic \(\eta_{2}\). Below we rule out these scenarios:
_Case 1._ Suppose the cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\) contains the heteroclinic \(\eta_{0}\). Without loss of generality assume \(\eta_{0}\subset B\). By Corollary 5.5\(\eta_{1}\) cannot be in \(B\), so \(\eta_{1}\subset Q^{\circ i}(B)\) for some \(1\leq i\leq k-1\). It follows that \(\eta_{1}\) is in the same parabolic basin as the heteroclinic \(\eta^{i}_{0}\), so \(\delta(v_{1},v^{i}_{0})\stackrel{{\circ}}{{=}}0\) and therefore \(\delta(u_{1},v^{i}_{0})\stackrel{{\circ}}{{=}}1/(2\nu k)\) by (6.3). Since \(\delta(v_{0},u_{1})\stackrel{{\circ}}{{=}}1/k-1/(2\nu k)\) by (6.7), it follows that \(\delta(v_{0},v^{i}_{0})\leq 1/k+o(1)\). On the other hand, (6.5) shows that up to an \(o(1)\) error the distance \(\delta(v_{0},v^{i}_{0})\) is a multiple of \(1/k\). In fact, \(\delta(v_{0},v^{i}_{0})\stackrel{{\circ}}{{=}}i/k\) if \(i\leq k/2\) and \(\delta(v_{0},v^{i}_{0})\stackrel{{\circ}}{{=}}(k-i)/k\) if \(i>k/2\). It follows that \(i=1\) or \(i=k-1\). Without loss of generality assume that \(i=1\) (the other case is completely similar). Then we have the following points in positive cyclic order on \(\partial\mathbb{D}\cong\mathbb{T}\):
\[v_{0}<u_{1}\stackrel{{\circ}}{{=}}v_{0}+\frac{1}{k}-\frac{1}{2 \nu k}<v_{1}\stackrel{{\circ}}{{=}}v_{0}+\frac{1}{k}<v^{1}_{0} \stackrel{{\circ}}{{=}}v_{0}+\frac{1}{k}.\]
Using (6.5), we obtain a similar order for all \(0\leq i\leq k-1\):
\[v^{i}_{0}<u^{i}_{1}\stackrel{{\circ}}{{=}}v^{i}_{0}+\frac{1}{k}- \frac{1}{2\nu k}<v^{i}_{1}\stackrel{{\circ}}{{=}}v^{i}_{0}+\frac{1} {k}<v^{i+1}_{0}\stackrel{{\circ}}{{=}}v^{i}_{0}+\frac{1}{k} \tag{6.8}\]
(see Fig. 13). Now consider the next pair \((v_{1},u_{2})\in\Theta^{0}\). By (6.4) and (6.6), we have \(u_{2}\stackrel{{\circ}}{{=}}v_{1}-(2j+1)/(2\nu k)\) or \(u_{2}\stackrel{{\circ}}{{=}}v_{1}+(2j+1)/(2\nu k)\) for some \(0\leq j\leq\nu-1\). In the first case
(6.8) shows that \((v_{1},u_{2})\) and \((v_{0},u_{1})\) would be linked, contradicting Theorem 6.5. In the second case (6.8) shows that \((v_{1},u_{2})\) and \((v_{0}^{1},u_{1}^{1})\) would be linked unless \(j=2\nu-1\). This leaves only one possibility for \(u_{2}\):
\[u_{2}\stackrel{{\circ}}{{=}}v_{1}+\frac{1}{k}-\frac{1}{2\nu k} \stackrel{{\circ}}{{=}}v_{0}^{1}+\frac{1}{k}-\frac{1}{2\nu k} \stackrel{{\circ}}{{=}}u_{1}^{1}.\]
Observe that since \((v_{1},u_{2})\) and \((v_{0}^{1},u_{1}^{1})\) are unlinked by Theorem 6.5, \(u_{2}\) belongs to the interval \(I_{\eta_{1}^{1}}\) bounded by \(u_{1}^{1}\) and \(v_{1}^{1}\), so \(\eta_{2}\) is contained in \(\Delta_{\eta_{1}^{1}}\). In particular, \(\eta_{2}\) must be a homoclinic, with \(v_{2}\in I_{\eta_{1}^{1}}\) and \(v_{2}\stackrel{{\circ}}{{=}}v_{1}^{1}\stackrel{{ \circ}}{{=}}v_{0}^{2}\).
Now repeat the above argument with the next pair \((v_{2},u_{3})\in\Theta^{0}\) to conclude that the only possibility is \(u_{3}\stackrel{{\circ}}{{=}}u_{1}^{2},v_{3}\stackrel{{ \circ}}{{=}}v_{1}^{2}\), with \(\eta_{3}\) contained in \(\Delta_{\eta_{1}^{2}}\), so \(\eta_{3}\) must be a homoclinic. Continuing this process inductively, we finally reach \(\eta_{h+1}\) which by the same argument must be contained in \(\Delta_{\eta_{1}^{h}}\). This is a contradiction since \(\eta_{h+1}\) is a heteroclinic.
_Case 2_. Suppose the cycle \(B,Q(B),\ldots,Q^{\circ k-1}(B)\) does not contain the heteroclinic \(\eta_{0}\) but contains \(\eta_{1}\) and a next homoclinic \(\eta_{2}\). By (6.4) we now have \(\delta(v_{0},u_{1})\stackrel{{\circ}}{{=}}(2j+1)/(2\nu k)\) for some \(0\leq j\leq\nu-2\) (in particular \(\nu\geq 2\)). Without loss of generality,
Figure 13. Illustration of _Case \(\tau\)_ in the proof of Theorem 6.7, with \(k=4\).
assume \(u_{1}\stackrel{{\circ}}{{=}}v_{0}+(2j+1)/(2\nu k)\). By (6.3), either \(v_{1}\stackrel{{\circ}}{{=}}u_{1}+1/(2\nu k)\) or \(v_{1}\stackrel{{\circ}}{{=}}u_{1}-1/(2\nu k)\). The latter is impossible, because it implies \(v_{0}<v_{1}<u_{1}\) which would force \((v_{0},u_{1})\) and \((v_{1},u_{2})\) be linked since \(\delta(v_{1},u_{2})\stackrel{{\circ}}{{=}}1/k-1/(2\nu k)\) is greater than \(\delta(v_{0},u_{1})\) by at least \(1/(2\nu k)+o(1)\). Thus, we must have the following points in positive cyclic order:
\[v_{0}<u_{1}\stackrel{{\circ}}{{=}}v_{0}+\frac{2j+1}{2\nu k}<v_{1 }\stackrel{{\circ}}{{=}}v_{0}+\frac{2j+2}{2\nu k}<v_{0}^{1} \stackrel{{\circ}}{{=}}v_{0}+\frac{1}{k}.\]
Using (6.5), we obtain a similar order for all \(0\leq i\leq k-1\):
\[v_{0}^{i}<u_{1}^{i}\stackrel{{\circ}}{{=}}v_{0}^{i}+\frac{2j+1}{2 \nu k}<v_{1}^{i}\stackrel{{\circ}}{{=}}v_{0}^{i}+\frac{2j+2}{2 \nu k}<v_{0}^{i+1}\stackrel{{\circ}}{{=}}v_{0}^{i}+\frac{1}{k}.\]
Now consider the next pair \((v_{1},u_{2})\in\Theta^{0}\). Since \(\delta(v_{1},u_{2})\stackrel{{\circ}}{{=}}1/k-1/(2\nu k)\) by (6.7), there are only two possibilities \(u_{2}\stackrel{{\circ}}{{=}}v_{1}\pm(1/k-1/(2\nu k))\). Let us address them separately:
_Case 2a._\(u_{2}\stackrel{{\circ}}{{=}}v_{1}+1/k-1/(2\nu k)\). Then, by (6.5) and (6.3),
\[u_{2}\stackrel{{\circ}}{{=}}v_{1}^{1}-\frac{1}{2\nu k}\stackrel{{ \circ}}{{=}}u_{1}^{1}.\]
Since \((v_{1},u_{2})\) and \((v_{0}^{1},u_{1}^{1})\) are unlinked, \(u_{2}\) belongs to the interval \(I_{\eta_{1}^{1}}\) bounded by \(u_{1}^{1}\) and \(v_{1}^{1}\), so \(\eta_{2}\) is contained in \(\Delta_{\eta_{1}^{1}}\) (see Fig. 14 top). Now repeat the argument with the next pair \((v_{2},u_{3})\in\Theta^{0}\) to conclude that the only possibility is \(u_{3}\stackrel{{\circ}}{{=}}u_{1}^{2},v_{3}\stackrel{{ \circ}}{{=}}v_{1}^{2}\), with \(\eta_{3}\) contained in \(\Delta_{\eta_{1}^{2}}\), and therefore \(\eta_{3}\) must be a homoclinic. Continuing this process inductively as in _Case 1_, we eventually arrive at the conclusion that \(\eta_{k+1}\) is contained in \(\Delta_{\eta_{1}^{k}}\), which is a contradiction since \(\eta_{k+1}\) is a heteroclinic.
_Case 2b._\(u_{2}\stackrel{{\circ}}{{=}}v_{1}-1/k+1/(2\nu k)\). Then, by (6.5) and (6.3),
\[u_{2}\stackrel{{\circ}}{{=}}v_{1}^{k-1}+\frac{1}{2\nu k}.\]
This means \(\eta_{2}\) and \(\eta_{1}^{k-1}\) are in the same basin but neither is inside the other (see Fig. 14 bottom). Now repeat the argument with the next pair \((v_{2},u_{3})\in\Theta^{0}\) to conclude that the only possibility is
\[u_{3}\stackrel{{\circ}}{{=}}v_{2}-\frac{1}{k}+\frac{1}{2\nu k} \stackrel{{\circ}}{{=}}v_{2}^{k-1}+\frac{1}{2\nu k}\stackrel{{ \circ}}{{=}}u_{2}^{k-1}.\]
Moreover, since \((v_{2},u_{3})\) and \((v_{1}^{k-1},u_{2}^{k-1})\) are unlinked, \(u_{3}\) belongs to the interval \(I_{\eta_{2}^{k-1}}\) bounded by \(u_{2}^{k-1}\) and \(v_{2}^{k-1}\), so \(\eta_{3}\) is contained in \(\Delta_{\eta_{2}^{k-1}}\). In particular, \(\eta_{3}\) must be a homoclinic. Continuing this process inductively, we finally reach \(\eta_{k+1}\) which by the same argument must be contained in \(\Delta_{\eta_{2}^{k-k+1}}\). This is a contradiction since \(\eta_{k+1}\) is a heteroclinic.
Proof of Theorem C.: Let us first treat the easier case \(N=0\) where there are no heteroclinic arcs. If \(w_{0}=w_{N}=\zeta_{\infty}\) is repelling, then \(M=M^{\#}=0\) and there is
Figure 14. Illustration of _Case 2a_ (top) and _Case 2b_ (bottom) in the proof of Theorem 6.7, with \(k=4\).
nothing to prove. Otherwise \(w_{0}\) is parabolic and \(M^{\#}\) is at most the number of cycles of parabolic basins at \(w_{0}\). By classical Fatou-Julia theory, every cycle of parabolic basins contains at least one critical point. Hence \(M^{\#}\leq d-1\), proving (1.1). To bound \(M\), suppose \(\not\!p=q/k\) is the period of \(w_{0}\) and \(\nu\) is the degeneracy order of \(w_{0}\) as a fixed point of \(P^{\circ\not\!p}\). Then there are \(\nu\) cycles of parabolic basins at \(w_{0}\), each of length \(k\). Since \(\nu\leq d-1\), we obtain \(M\leq k\nu\leq d-1+(k-1)\nu\), which proves (1.2).
Now suppose \(N\geq 1\). For \(0\leq j\leq N\), define
\[M_{j} :=\text{number of earrings in $\mathcal{L}$ based at $w_{j}$}\] \[M_{j}^{\#} :=\text{number of equivalence classes of earrings in $\mathcal{L}$ based at $w_{j}$},\]
so \(M=\sum_{j=0}^{N}M_{j}\) and \(M^{\#}=\sum_{j=0}^{N}M_{j}^{\#}\). Let \(\not\!p_{j}=q/k_{j}\) be the period of \(w_{j}\) (we know from Lemma 6.1 that \(\not\!p_{j}=q\Leftrightarrow k_{j}=1\) for all \(j\) with at most one exception). Let \(\nu_{j}\) be the degeneracy order of \(w_{j}\) as a fixed point of \(P^{\circ\not\!p_{j}}\), with the convention that \(\nu_{N}=0\) if \(w_{N}\) is repelling. Then there are \(\nu_{j}\) cycles of parabolic basins at \(w_{j}\), each of length \(k_{j}\). Let \(B=P^{\circ q}(B)\) be a parabolic basin at \(w_{j}\). By Theorem 6.7, if \(0\leq j\leq N-1\), the cycle \(B,P(B),\dots,P^{\circ q-1}(B)\) contains at most one heteroclinic or one earring in \(\mathcal{L}\), but not both. Moreover, if this cycle contains a heteroclinic, the Basic Structure Lemma shows that there are at least two critical points of \(P\) in the union \(B\cup P(B)\cup\dots\cup P^{\circ q-1}(B)\). This shows
\[M_{j} \leq\nu_{j}-1 M_{j}^{\#} =\ M_{j}\qquad\text{ if $0\leq j\leq N-1$},\] \[M_{N} \leq\ k_{N}\nu_{N} M_{N}^{\#} \leq\nu_{N},\]
and
\[\sum_{j=0}^{N-1}(\nu_{j}+1)+\nu_{N}\leq d-1,\qquad\text{so}\qquad\sum_{j=0}^{N -1}(\nu_{j}-1)+\nu_{N}\leq d-1-2N.\]
It follows that
\[2N+M^{\#}=2N+\sum_{j=0}^{N-1}M_{j}^{\#}+M_{N}^{\#}\leq 2N+\sum_{j=0}^{N-1}( \nu_{j}-1)+\nu_{N}\leq d-1,\]
which proves (1.1). Similarly,
\[2N+M=2N+\sum_{j=0}^{N-1}M_{j}+M_{N}\leq 2N+\sum_{j=0}^{N-1}(\nu_{j}-1)+k_{N} \nu_{N}\leq d-1+(k_{N}-1)\nu_{N},\]
which proves (1.2).
## 7. Proof of Theorem D
In this section we prove Theorem D, that is, we construct real monic polynomials \(P\) of any odd degree \(\geq 3\) which have the maximum number of heteroclinics allowed by Theorem C.
First assume we have constructed a real monic polynomial \(P\) of degree \(d=2N+1\) with fixed points \(w_{N}=0<\cdots<w_{1}<w_{0}\) such that \(0\) is repelling and \(w_{j}\) is parabolic with multiplier \(1\) and \(\operatorname{resit}(P,w_{j})<0\) for every \(0\leq j\leq N-1\). Since \(P\) has \(2N+1\) fixed points in \(\mathbb{C}\) counting multiplicities, the fixed point set of \(P\) is \(\{w_{0},w_{1},\ldots,w_{N}\}\) with \(w_{0},\ldots,w_{N-1}\) having multiplicity \(2\). Using the fact that \(P\) is monic, we can write
\[P_{\varepsilon}(z)=P(z)+\varepsilon=\varepsilon+z+z(z-w_{0})^{2}\cdots(z-w_{N -1})^{2},\]
which shows \(P_{\varepsilon}(x)\geq\varepsilon+x\) for \(x>0\). It follows that the unique repelling fixed point \(w_{N}(\varepsilon)\) near \(0\) is negative, while the two simple fixed points of \(P_{\varepsilon}\) bifurcating off of \(w_{j}\) are non-real, hence form a complex-conjugate pair \(w_{j}(\varepsilon),\overline{w_{j}(\varepsilon)}\) with multipliers \(\lambda_{j}(\varepsilon),\overline{\lambda_{j}(\varepsilon)}\). We have
\[\operatorname{resit}(P_{\varepsilon},w_{j}(\varepsilon))+ \operatorname{resit}(P_{\varepsilon},\overline{w_{j}(\varepsilon)}) =\frac{1}{2}-\frac{1}{1-\lambda_{j}(\varepsilon)}+\frac{1}{2}- \frac{1}{1-\lambda_{j}(\varepsilon)}\] \[=1-2\operatorname{Re}\left(\frac{1}{1-\lambda_{j}(\varepsilon) }\right).\]
As \(\varepsilon\to 0\), this quantity must converge to \(\operatorname{resit}(P,w_{j})\), which is negative by the assumption. Hence \(\operatorname{Re}(1/(1-\lambda_{j}(\varepsilon)))>1/2\) or \(|\lambda_{j}(\varepsilon)|<1\) for all sufficiently small \(\varepsilon>0\). (it is not hard to see that \(\lambda_{j}(\varepsilon)\) must tend to \(1\) horocyclically as \(\varepsilon\to 0\)). It follows that all the \(2N\) critical points of \(P_{\varepsilon}\) lie in the basins of attraction of \(w_{j}(\varepsilon),\overline{w_{j}(\varepsilon)}\) for \(0\leq j\leq N-1.\) In particular, \(K_{P_{\varepsilon}}\) and therefore \(K_{P}\) is connected.
Now each of the \(2N\) fixed rays of \(P_{\varepsilon}\) must land at a repelling or parabolic fixed point of \(P_{\varepsilon}\). Since the fixed points of \(P_{\varepsilon}\) other than \(w_{N}(\varepsilon)\) are all attracting, it follows that the fixed rays of \(P_{\varepsilon}\) (in particular \(R_{P_{\varepsilon},0}\)) all land at \(w_{N}(\varepsilon)\). Thus, as \(\varepsilon\to 0\), the closed ray \(\overline{R_{P_{\varepsilon},0}}=[w_{N}(\varepsilon),+\infty]\) converges in the Hausdorff metric to
\[[0,+\infty]=[w_{N},w_{N-1}]\cup\cdots\cup[w_{1},w_{0}]\cup[w_{0},+\infty],\]
with the last interval \([w_{0},+\infty]\) being the closed ray \(\overline{R_{P,0}}\).
It remains to construct a polynomial \(P\) with the aforementioned properties. It will be more convenient in the notations that follow to label the points
in increasing order by setting \(x_{j}:=w_{N-j}\). Let \(C>0\) and \(0<x_{1}<\dots<x_{N}\). Define
\[Q(z) :=\prod_{j=1}^{N}(z-x_{j})\] \[P(z) :=z+Cz(Q(z))^{2}.\]
Evidently \(P\) is a real polynomial with fixed points at \(0\) and the \(x_{j}\), and \(P^{\prime}(0)>1\) and \(P^{\prime}(x_{j})=1\). Our goal is to find suitable \(C,x_{1},\dots,x_{N}\) such that \(\operatorname{resit}(P,x_{j})<0\) for all \(1\leq j\leq N\). Once this is accomplished, we can conjugate \(P\) via a real dilation to a real monic polynomial which will have the desired properties since the residu iteratif is invariant under analytic change of coordinates.
Each \(x_{j}\) is a parabolic fixed point of multiplicity \(2\), so the formula (2.4) gives
\[\operatorname{resit}(P,x_{j})=1-\iota(P,x_{j})=1-\frac{2}{3}\frac{P^{\prime \prime\prime}(x_{j})}{(P^{\prime\prime}(x_{j}))^{2}}.\]
Thus, we need to arrange for the inequalities
\[P^{\prime\prime\prime}(x_{j})>\frac{3}{2}(P^{\prime\prime}(x_{j}))^{2}\qquad( 1\leq j\leq N).\]
A straightforward calculation shows
\[P^{\prime\prime}(x_{j}) =2C\;x_{j}\;(Q^{\prime}(x_{j}))^{2}\] \[P^{\prime\prime\prime}(x_{j}) =6C\;(Q^{\prime}(x_{j}))^{2}+6C\;x_{j}\;Q^{\prime}(x_{j})Q^{ \prime\prime}(x_{j}),\]
so the above inequalities translates to
\[(Q^{\prime}(x_{j}))^{2}+x_{j}\;Q^{\prime}(x_{j})Q^{\prime\prime}(x_{j})>C\;x_{ j}^{2}\;(Q^{\prime}(x_{j}))^{4}\qquad(1\leq j\leq N). \tag{7.1}\]
To make these inequalities more explicit, we notice that
\[\frac{Q^{\prime}(z)}{Q(z)} =\sum_{i=1}^{N}\frac{1}{z-x_{i}}\] \[\frac{Q^{\prime\prime}(z)}{Q(z)}-\left(\frac{Q^{\prime}(z)}{Q(z) }\right)^{2} =\sum_{i=1}^{N}\frac{-1}{(z-x_{i})^{2}}.\]
It follows, after routine algebra, that
\[Q^{\prime}(x_{j}) =\prod_{i\neq j}(x_{j}-x_{i})\] \[Q^{\prime\prime}(x_{j}) =2Q^{\prime}(x_{j})\sum_{i\neq j}\frac{1}{x_{j}-x_{i}}.\]
Setting
\[H_{j}:=\sum_{i\neq j}\frac{1}{x_{j}-x_{i}},\]
we can now write the desired inequalities (7.1) in the form
\[1+2x_{j}H_{j}>C\,x_{j}^{2}\,(Q^{\prime}(x_{j}))^{2}\qquad(1\leq j\leq N). \tag{7.2}\]
It suffices to find \(x_{1},\dots,x_{N}\) so that the weaker inequalities
\[1+2x_{j}H_{j}>0\qquad(1\leq j\leq N) \tag{7.3}\]
hold, for then (7.2) can be achieved by choosing \(C>0\) sufficiently small.
To obtain (7.3), define \(x_{1},\dots,x_{N}\) recursively by
\[x_{1} :=1,\] \[x_{j} :=x_{j-1}+2^{j}\qquad(2\leq j\leq N).\]
We have
\[H_{1}=-\frac{1}{2^{2}}-\frac{1}{2^{2}+2^{3}}-\dots-\frac{1}{2^{2}+2^{3}+\dots+ 2^{N}}>-\frac{1}{2},\]
so \(1+2x_{1}H_{1}>0\) and (7.3) holds for \(j=1\). We claim that \(H_{j}>0\) for \(2\leq j\leq N\), so (7.3) holds for these values of \(j\) as well. In fact, \(H_{N}>0\) trivially since every term in its defining sum is positive. On the other hand, if \(2\leq j\leq N-1\), then
\[\sum_{i<j}\frac{1}{x_{j}-x_{i}}>\frac{1}{x_{j}-x_{j-1}}=\frac{1}{2^{j}}\]
while
\[\sum_{i>j}\frac{1}{x_{j}-x_{i}}=-\frac{1}{2^{j+1}}-\dots-\frac{1}{2^{j+1}+ \dots+2^{N}}>-\frac{1}{2^{j}}.\]
Adding the two inequalities, we obtain \(H_{j}>0\). This completes the construction of \(P\) and the proof of Theorem D.
|
2310.12886 | On wave-driven propulsion | A theory is presented for wave-driven propulsion of floating bodies driven
into oscillation at the fluid interface. By coupling the equations of motion of
the body to a quasi-potential flow model of the fluid, we derive expressions
for the drift speed and propulsive thrust of the body which in turn are shown
to be consistent with global momentum conservation. We explore the efficacy of
our model in describing the motion of SurferBot [Rhee et al., Bioinspir.
Biomim. 17 (5), 2022], demonstrating close agreement with the experimentally
determined drift speed and oscillatory dynamics. The efficiency of wave-driven
propulsion is then computed as a function of driving oscillation frequency and
the forcing location, revealing optimal values for both of these parameters
which await confirmation in experiments. A comparison to other modes of
locomotion and applications of our model to competitive water-sports are
discussed in conclusion. | GP Benham, O Devauchelle, SJ Thomson | 2023-10-19T16:37:45Z | http://arxiv.org/abs/2310.12886v1 | # On wave-driven propulsion
###### Abstract
A theory is presented for wave-driven propulsion of floating bodies driven into oscillation at the fluid interface. By coupling the equations of motion of the body to a quasi-potential flow model of the fluid, we derive expressions for the drift speed and propulsive thrust of the body which in turn are shown to be consistent with global momentum conservation. We explore the efficacy of our model in describing the motion of _SurferBot_[Rhee _et al.__Bioinspir. Biomim._**17** (5), 2022], demonstrating close agreement with the experimentally determined drift speed and oscillatory dynamics. The efficiency of wave-driven propulsion is then computed as a function of driving oscillation frequency and the forcing location, revealing optimal values for both of these parameters which await confirmation in experiments. A comparison to other modes of locomotion and applications of our model to competitive water-sports are discussed in conclusion.
1120012008/11/11200120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120201201201201201200120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120120012012012012012012012012020120120120201201201201202012012012020012012020120120201201202012012020120012012020120201202012020120202012020201202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202202020202022020202202020202020202202020202020202020202020202020202020202202202020202202022020202022020202020202020202020202022020202022020220202022020202022020220202202202022020220220202020220220202202202202202220220220202202202220220222022220 |
2307.11798 | Observational Signatures of Modified Bardeen Black Hole: Shadow and
Strong Gravitational Lensing | This paper is devoted to studying the observational signatures modified by
Bardeen black hole via shadow and strong lensing observations. Influence of the
modified Bardeen black hole parameters q, g, and the parameter $\mu$ on the
shadow radius of the black hole have been investigated numerically and
graphically. Recently, EHT collaboration observed the image and shadow of
supermassive black holes $M87^*$ and $SgrA^*$ where the shadow angular diameter
$\theta_d=42\pm3$ for $M87^*$ and $\theta_d=51.8\pm2.3$ for $SgrA^*$. The
modified black hole parameters q and $\mu$ for the fixed value of g have been
constrained by the EHT collaboration data for the angular shadow diameter of
$M87^*$ and $SgrA^*$. It has been observed that the constrain ranges of the
parameters $\mu$ and $q$ of modified Bardeen black hole as $-0.89\leq \mu/8M^2
\leq 0.4$ and $0\leq |q|\leq 0.185$ for $M87^*$; and $-1.38\leq \mu/8M^2 \leq
0.1$ and $0\leq |q|\leq 0.058$ for $SgrA^*$, keeping the fixed value
$g/2M=0.2$. Modified Bardeen black holes with the additional parameters
$\mu$,$g$ and $q$ besides the mass M of the black hole as the supermassive
black holes $M87^*$ and $SgrA^*$; and it is observed that to be a viable
astrophysical black hole candidate. Furthermore, Gravitational lensing in the
strong field limit for modified Bardeen black hole has been investigated
numerically as well as graphically and compared to the other ordinary
astrophysical black hole such as Schwarzschild ($\mu=\&q=0$) and regular
Bardeen ($\mu=0$) black hole. | Niyaz Uddin Molla, Amna Ali, Ujjal Debnath | 2023-07-21T09:46:23Z | http://arxiv.org/abs/2307.11798v2 | # Observational Signatures of Modified Bardeen Black Hole: Shadow and Strong Gravitational Lensing
###### Abstract
This paper is devoted to studying the observational signatures modified by Bardeen black hole via shadow and strong lensing observations. Influence of the modified Bardeen black hole parameters q, g, and the parameter \(\mu\) on the shadow radius of the black hole have been investigated numerically and graphically. Recently, the Event Horizon Telescope (EHT) collaboration observed the image and shadow of supermassive black holes \(M87^{\ast}\) and \(SgrA^{\ast}\) where the shadow angular diameter \(\theta_{d}=42\pm 3\) for \(M87^{\ast}\) and \(\theta_{d}=51.8\pm 2.3\) for \(SgrA^{\ast}\). The modified black hole parameters q and \(\mu\) for the fixed value of g have been constrained by the EHT collaboration data for the angular shadow diameter of \(M87^{\ast}\) and \(SgrA^{\ast}\). It has been observed that the constrain ranges of the parameters \(\mu\) and \(q\) of modified Bardeen black hole as \(-0.89\leq\mu/8M^{2}\leq 0.4\) and \(0\leq|q|\leq 0.185\) for \(M87^{\ast}\); and \(-1.38\leq\mu/8M^{2}\leq 0.1\) and \(0\leq|q|\leq 0.058\) for \(SgrA^{\ast}\), keeping the fixed value \(g/2M=0.2\). Modified Bardeen black holes with the additional parameters \(\mu\)\(g\) and \(q\) besides the mass M of the black hole as the supermassive black holes \(M87^{\ast}\) and \(SgrA^{\ast}\); and it is observed that to be a viable astrophysical black hole candidate, the EHT result constrains the (\(\mu\), \(q\)) parameter space. Furthermore, Gravitational lensing in the strong field limit for modified Bardeen black hole has been investigated numerically as well as graphically and compared to the other ordinary astrophysical black hole such as Schwarzschild (\(\mu=\&q=0\)) and regular Bardeen (\(\mu=0\)) black hole. We prove how the modified Bardeen black hole parameters would affect the various strong lensing observables. The astrophysical consequences via strong gravitational lensing have been explored by considering the example of supermassive black holes in various galaxies, and the findings show that the modified Bardeen black hole can be quantitatively distinguished from the other astrophysical black hole such as Schwarzschild and regular Bardeen black holes. The findings via astrophysical consequences provide a potential way to distinguish the modified black hole from its counterpart in the general theory of relativity.
**Keywords:** Shadow, Gravitational lensing, Null geodesics, Bardeen black hole.
## I Introduction
Black holes, which were postulated several decades ago, represent a crucial prediction in the general theory of relativity and remain enigmatic, compact entities within our universe. These objects have currently garnered significant attention in the fields of astronomy, astrophysics, and high-energy physics. Remarkably significant discoveries about thermodynamics, quantum effects, and gravitational interactions within curved spacetime have emerged through the study of black holes[1; 2]. In recent years, researchers have primarily been driven by various theories regarding black holes. However, significant advancements in observational and experimental investigations of black holes have been witnessed over the past decade. Several observational characteristics have provided strong evidence for the existence of black holes, including the measurement of black hole spin in X-ray binaries, the detection of gravitational wave signals emitted during binary black hole mergers by LIGO[3; 4; 5]' the first-ever image of a black hole at the center of the galaxy M87 captured by the Event Horizon Telescope (EHT) collaboration[6], and the discovery of a wide star-black hole binary system through radial velocity measurements[7]. These achievements have collectively reinforced the understanding of black holes.
In addition to its significance in astronomy and astrophysics, black holes have also been the subject of investigation in various other branches of physics. One noteworthy endeavor in this regard is the study of the Bardeen black hole. The modified Bardeen black hole represents a modified version of the original Bardeen black hole solution, which was initially proposed by John Bardeen in the publication year [8]. This particular line of research explores alternative aspects and characteristics of black holes beyond traditional understanding.
The original Bardeen black hole solution was proposed as a regular alternative to the classical Schwarzschild black hole, addressing the singularity problem. Subsequently, variations such as the Bardeen-anti-de Sitter and Bardeen-de Sitter black holes have been examined [9; 10]. In recent years, researchers have introduced modifications to the Bardeen black hole solution to incorporate various physical effects and tackle theoretical challenges
[11; 12]. These modifications involve introducing additional fields, such as scalar or electromagnetic fields, or considering alterations to the theory of gravity itself, including higher-order curvature terms.
A notable example is the modified Bardeen black hole in 4D Einstein-Gauss-Bonnet (EGB) gravity ([13]). The EGB theory extends general relativity to higher dimensions and incorporates quadratic curvature terms. The modified Bardeen black hole in EGB gravity offers fresh insights into regular black hole solutions and their astrophysical implications. Additionally, rotating versions of the modified Bardeen black hole have been proposed by Pourhassan and Debnath [11].
Various astronomical and astrophysical aspects, such as black hole parameter estimation, shadows, gravitational lensing, quasinormal modes, time delay, and particles' motion around black holes have been investigated for different types of black holes. These include Schwarzschild black holes [14], modified regular black holes, regular Bardeen black holes [15; 16], Bardeen black holes in cloud string [17; 18; 19], Bardeen-Kiselev black holes [20], modified Ads Bardeen black holes [12], asymptotic magnetically-charged non-singular black holes [21], regular Bardeen black holes in 4D Einstein Gauss-Bonnet gravity [13; 22], and others [23; 24; 25; 26; 27; 28; 29; 30] studied over the past few decades.
In the current paper, we aim to expand on previous analyses conducted on Schwarzschild, modified regular, and regular Bardeen black holes (explored in the literature by [13; 14; 16; 31; 32; 33; 34] and apply them to the case of the modified Bardeen black hole through observations of shadows and strong gravitational lensing. Specifically, we discuss the astrophysical consequences of the modified Bardeen black hole using examples of supermassive black holes located near the center of galaxies, comparing them to ordinary astrophysical black holes like Schwarzschild and regular Bardeen black holes.
The black hole shadow is one of the most fascinating and important astrophysical features observed through strong gravitational lensing. The ultra-high-resolution images of \(M87^{*}\) and \(SgrA^{*}\) released by the Event Horizon Telescope (EHT) collaboration provide crucial evidence for the existence of black holes [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. Within these images, a faint region at the center of the black hole, known as the black hole shadow can be observed. It is generally understood that light rays passing in the vicinity of a black hole are deflected due to the gravitational lensing effect [46; 47; 48; 14], resulting in the observation of a sharp-edge boundary region of brightness on the distant image plane. The black hole shadow is deeply connected to the spacetime geometry and serves as a robust tool for estimating black hole parameters [49; 50; 51; 52], investigating general relativity and its alternatives [53; 54; 55; 56; 57; 58; 59].
Gravitational lensing, an important aspect of general relativity, is a robust astrophysical tool that involves the deflection of light rays (photons) due to gravity. The object causing this deflection is known as a gravitational lens. This phenomenon was predicted in 1919 by Eddington et al. as a successful demonstration of Einstein's general theory of relativity [60]. Gravitational lensing is widely used in physics, astronomy, and cosmology to understand various properties of spacetime, such as the distribution of matter in the universe on both small and large scales, including galaxy clusters and haloes [61; 62; 63].
Furthermore, gravitational lensing serves as a valuable tool for detecting and studying dark matter and dark energy [64; 65; 66; 67; 68; 69], estimating the Hubble parameter [70; 71], observing gravitational waves ([72; 73], and testing general relativity and alternative theories of gravity [74; 75]. Overall, gravitational lensing plays a significant role in our understanding of the universe, allowing us to probe the nature of gravity, explore the distribution of matter, and investigate phenomena related to dark matter, dark energy, and cosmological parameters.
Gravitational lensing, as an important application of general relativity, can be categorized into two regimes: weak lensing and strong lensing. In weak lensing, the gravitational lens is not strong enough to produce multiple or highly magnified images. This regime is particularly useful for studying galaxy clusters. On the other hand, strong lensing occurs when a compact object, such as a black hole, with a strong gravitational field or when the source is very close to the black hole, leading to the appearance of multiple images, arcs, and rings of the source. In this work, we focus specifically on the phenomena of strong gravitational lensing.
The study of strong gravitational lensing has garnered significant interest among modern researchers due to its ability to provide valuable information about the properties of black hole spacetime. While relativistic images cannot be easily separated due to their small separation and low magnification, advancements in technology, such as the new generation Event Horizon Telescope (EHT), offer the potential to distinguish between relativistic images and different types of black holes. Therefore, gravitational lensing in the strong field limit provides a useful tool for testing general relativity and alternative theories of gravity.
In [76], Bozza et al. introduced a useful method for obtaining the deflection angle in the strong gravitational field and found that the deflection angle diverges logarithmically for the Schwarzschild black hole spacetime. They also proposed that this method could be applied to any general spherically symmetric black hole [76; 31]. Since then, gravitational lensing in the strong field limit has been studied for various types of black holes [77; 78; 79; 80; 81] and naked singularities [82; 83; 84; 85; 86; 87], as well as wormholes [88; 89; 90]. Virbhadra and Ellis [14] and Frittelli et al. [91] proposed the definition of the exact lens equation in the context of spacetime geometry, providing an exact lens equation for the Schwarzschild black hole spacetime. Virbhadra and Ellis [92] also numerically studied gravitational lensing by naked singularities.
Several studies have explicitly discussed the astrophysical consequences of different black hole spacetimes, such as the angular position, angular separation, relative mag
nification, Einstein ring, and time delays of relativistic images. These investigations have quantitatively examined gravitational lensing by rotating black holes as well as non-rotating black holes, focusing on the observable signatures [93; 94; 95; 96; 97; 98; 99; 100; 27].
Observationally, gravitational lensing phenomena by black holes have gained significant attention from researchers in recent years [103; 104; 105; 106; 107; 108]. Our work aims to study the observational signatures of the modified Bardeen black hole through shadow and strong lensing observations. We investigate various astrophysical consequences, such as the black hole shadow, angular position, separation, Einstein ring, and time delays of relativistic images, within the context of the modified Bardeen black hole. We compare these results with other astrophysical black holes, including the Schwarzschild black hole and the ordinary regular Bardeen black hole.
The structure of this paper is organized as follows: In Section \(\mathbf{II}\), we provide a brief review of the modified Bardeen black hole and analyze its null geodesics along the equatorial plane. Section \(\mathbf{III}\) is devoted to studying the shadow of the modified Bardeen black hole and constraining its observables using observational data from \(M87^{*}\) and \(SgrA^{*}\)[38; 39; 40; 41]. In Section \(\mathbf{IV}\), we investigate strong gravitational lensing by the modified Bardeen black hole and explore its observables, such as the angular image position, angular separation, relative magnification, Einstein ring, and time delays of the relativistic images. We also provide a brief overview of the astrophysical consequences of the modified Bardeen black hole. Section \(\mathbf{V}\) focuses on estimating the strong lensing observables for the supermassive black hole BH \(NGC4649\), with a mass of \(M=4.3\times 10^{6}\) and a distance of \(D_{OL}=0.0083\) Mpc [109], and comparing the results with those obtained for the Schwarzschild black hole and the ordinary regular Bardeen black hole. Finally, in Section \(\mathbf{VI}\), we discuss and summarize our findings.
## II Modified Bardeen black holes and null geodesics
The modified Bardeen black hole discussed in this paper was proposed in [11] where the modified rotating version of the Bardeen black hole was discussed as particle accelerators. Here, We have discussed the static version of the modified Bardeen black hole from ref. [29] by considering \(a=0\). The static, spherically symmetric spacetime of a modified Bardeen black hole is described by the following form:[12]
\[ds^{2}=-f(r)dt^{2}+\frac{1}{h(r)}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi ^{2}) \tag{1}\]
where
\[f(r)=\left(1-\frac{2Mr^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right)\left(1- \frac{\mu M}{\left(g^{2}+r^{2}\right)^{3/2}}\right) \tag{2}\]
\[h(r)=\left(1-\frac{2Mr^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right) \tag{3}\]
This metric is parametrized by the magnetic charge \(q\), mass parameter \(M\) whereas the parameters \(\mu\) and \(g\) are used to modify the modified Bardeen black hole spacetimes The above metric maintains the following conditions : i) The metric preserves Schwarzschild-like behaviour at large r; ii) It incorporates the 1-loop quantum correction; iii) It allows for a finite time dilation between the center and infinity. In the absence of parameter \(\mu\), the metric (1) reduces to ordinary regular Bardeen black hole. Further \(\mu=0\) and \(q=0\) yield Schwarzschild black hole.
The motion of photon around the modified Bardeen black hole is described by the Lagrangian formalism \(\mathcal{L}=-\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}\). Without loss of generality, we confined the photon trajectory around the modified Bardeen black hole on the equatorial plane \(\theta=\frac{\pi}{2}\).For the modified Bardeen spacetime metric (1) The Lagrangian equation for the motion of photons around the black hole is given by
\[\begin{split}&\mathcal{L}=-\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu} \dot{x}^{\nu}\\ &=f(r)dt^{2}-\frac{1}{h(r)}dr^{2}-r^{2}(d\theta^{2}+\sin^{2} \theta d\phi^{2})=\delta\end{split} \tag{4}\]
where \(\dot{x}^{\mu}\) denotes the four-velocity of photon, dot represents the differentiation w.r.t the affine paramer \(\tau\). where \(\delta=-1,0,1\) indicates the spacelike, null and timelike geodesics respectively. The photon travels around the modified Bardeen black hole along the null geodesic, which means \(\delta=0\). The null geodesics obtained from the equation (4) are as follows:
\[\dot{t}=\frac{dt}{d\tau}=\frac{E}{\left(1-\frac{2Mr^{2}}{\left(q^{2}+r^{2} \right)^{3/2}}\right)\left(1-\frac{\mu M}{\left(g^{2}+r^{2}\right)^{3/2}}\right)} \tag{5}\]
\[\dot{\phi}=\frac{d\phi}{d\tau}=\frac{L}{r^{2}} \tag{6}\]
\[\dot{r}=\frac{dr}{d\tau}=\pm\sqrt{h(r)\bigg{(}\frac{E^{2}}{f(r)}-\frac{L^{2}} {r^{2}}\bigg{)}} \tag{7}\]
Where '+' corresponds to clockwise and '-' corresponds to counter clockwise of photon motion. Here \(E\) and \(L\) respectively are the energy and angular momentum of the particle where as the function \(f(r)\) and \(h(r)\) are taken from the equations (2) and (3).
Equation (7) can be expressed as
\[\frac{dr}{d\tau}+V_{eff}=0 \tag{8}\]
where the effective potential function \(V_{eff}\) is described by
\[V_{eff}=h(r)\bigg{(}\frac{L^{2}}{r^{2}}-\frac{E^{2}}{f(r)}\bigg{)} \tag{9}\]
For the critical photon ring orbit, the effective potential function \(V_{eff}\) satisfies the critical conditions \(V_{eff}(r)=\frac{dV_{eff}(r)}{dr}=0\),\(\frac{d^{3}V_{eff}(r)}{dr^{2}}>0\) (for stable)and \(\frac{d^{2}V_{eff}}{dr^{2}}<0\) (for unstable )circular orbit. It is observed that for the modified Bardeen,or ordinary regular Bardeen black hole, \(\frac{d^{2}V_{eff}}{dr^{2}}|_{r_{ph}}<0\),which corresponds to the case of unstable circular orbit of photon (See Fig.1). Therefore, the photon rays, coming from infinity to the vicinity of modified Bardeen black hole with minimum impact parameter at the closest distance \(r_{0}\) revolved in unstable circular orbits around the black hole and generate a photon sphere of radius \(r_{ph}\).
## III Shadows of modified Bardeen black hole
Black hole Shadow is one of the most important fingerprints of the space-time geometry around the horizon of the black hole. It describes the properties of the black hole which optically depend on the gravitational lensing of the near by radiation. There has been nice reviews of shadow by the black hole with observables. Reader can see in more detail [48; 57]. Moreover, the EHT collaboration detects the image of the black hole by using the shadow properties of black hole [6; 38; 39],which are attracted plenty of attention. In this section, we discuss the shadow of a modified Bardeen black hole and its observable by taking the example of supermassive black holes \(M87*\) and \(SgrA*\) in the center of the galaxy. The black hole shadow is directly related to the critical impact parameter of the photon orbit. Using the above condition one can define the critical impact parameter
\[u_{cr}=\frac{L}{E}=\frac{r_{ph}}{\sqrt{f(r_{ph})}} \tag{10}\]
where the photon sphere radius \(r_{ph}\) is the largest real root of the equation
\[2f(r_{ph})-r_{ph}f^{\prime}(r_{ph})=0 \tag{11}\]
The black hole shadow radius \(r_{sh}\), in which the observer is located far away from the black hole, can be expressed by the celestial coordinates (X, Y) as
\[r_{sh}=\sqrt{X^{2}+Y^{2}}=\frac{r_{ph}}{\sqrt{f(r_{ph})}} \tag{12}\]
where the the celestial co-ordinate (\(X\), \(Y\) ) at the boundary curve of black hole shadow define as
\[X=\lim_{r_{0}\rightarrow\infty}(r_{0}^{2}\sin\theta_{0})\frac{d\phi}{dr} \tag{13}\]
\[Y=\lim_{r_{0}\rightarrow\infty}(r_{0}^{2}\frac{d\theta}{dr}) \tag{14}\]
Here,\(r_{0}\) is the radial distance between black hole and observer whereas \(\theta_{0}\) is the inclination angle between observer and black hole.
The radius of shadow can be expressed in terms of dimensional quantity by the transformations as \(t\rightarrow\frac{t}{2M}\),\(r\rightarrow\frac{r}{2M}\), \(q\rightarrow\frac{q}{2M}\),\(g\rightarrow\frac{g}{2M}\) and \(\mu\rightarrow\frac{\mu}{8M^{2}}\) in the function f(r) and it defines as
\[r_{sh}=r_{ph}\left[\left(1-\frac{r^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right) \left(1-\frac{\mu}{\left(g^{2}+r^{2}\right)^{3/2}}\right)\right]^{-1/2} \tag{15}\]
By using the equations (11 )&(15), estimation of the photon sphere radius and the radius of the shadow for the different values of the black hole parameters \(\mu=0,1,3,5,7\) ;\(g=0,1.2\); and \(|q|=0,0.1,0.2,0.4\) has been shown in Table.1. It is observed that for the fixed value of the parameters, \(\mu\) and \(g\), the photon sphere radius and the shadow radius of the black hole are decreased with the parameter \(q\).
### Observational Constraints using \(M87^{*}\) and \(SgrA^{*}\) observations data
We have already calculated,how the photon sphere radius and the shadow radius are affected by the parameters \(\mu\),\(g\), and \(q\) in the previous section. Here, we intend to determine the value of the parameters \(\mu\),\(g\), and \(q\) based on the observed angular diameter of the shadow. To avoid the difficulties in our investigation of parameters estimation, we consider the parameter \(g=0.2\) as fixed and then try to find out the range of the parameters \(\mu\) and \(q\). For a distant observer, the shadow image of a black hole is always measured by angular diameter \(\theta_{d}\)([57])as
\[\theta_{d}=\frac{2u_{ph}}{D_{ol}} \tag{16}\]
where \(D_{ol}\) is the distance of the black hole to the observer. The above equation can be expressed as
\[\theta_{d}(\mu as)=\biggr{(}\frac{6.191165\times 10^{6}}{\pi}\biggr{)}(\frac{ \gamma}{D_{ol}/Mps})\biggr{(}\frac{2u_{ph}}{M}\biggr{)} \tag{17}\]
where \(\gamma\) represents the mass ratio of a black hole to the sun and \(u_{ph}=u_{cr}\) is given from the Eq.(10).
By using the equations (16 )&(17), we study the angular diameter of black hole shadow as function parameters (\(\mu/8M^{2}\) and \(q/2M\)), as displayed in Fig.2. To investigate the angular diameter of black hole shadow, we consider the supermassive black holes \(M87^{*}\) having mass and distance from the earth [38; 39] are \(M\approx 6.5\times 10^{9}\)\(O\),\(D_{ol}\approx 16.8Mpc\) respectively and \(SgrA^{*}\) having mass and distance from the earth \(M\approx 4.28\times 10^{6}\)\(O\),\(D_{ol}\approx 8.32kpc\)
respectively; where the shadow angular diameter \(\theta_{d}=42\pm 3\) for \(M87^{*}\) and \(\theta_{d}=51.8\pm 2.3\)[40; 41].The modified black hole parameters \(q\) and \(\mu\) for the fixed value of \(g\) have been constrained by the EHT collaboration data for the angular shadow diameter of \(M87^{*}\) and \(SgrA^{*}\).It has been observed that the constrain ranges of the parameters \(\mu\) and \(q\) of modified Bardeen black hole as \(-0.89\leq\mu/8M^{2}\leq 0.4\) and \(0\leq|q|\leq 0.185\) for \(M87^{*}\); and \(-1.38\leq\mu/8M^{2}\leq 0.1\) and \(0\leq|q|\leq 0.058\) for \(SgrA^{*}\), keeping the fixed value of \(g/2M=0.2\). Modified Bardeen black holes with the additional parameters \(\mu\),\(g\) and \(q\) besides the mass M of the black hole as the supermassive black holes \(M87^{*}\) and \(SgrA^{*}\); and it is observed that to be a viable astrophysical black hole candidate, the EHT result constrains the (\(\mu\), \(q\)) parameter space. These results suggest that the modified Bardeen black hole satisfies the EHT constraint and it is possible to detect and distinguish the modified Bardeen black hole from the other astrophysical black hole in the future.
## IV Strong gravitational lensing and it's observable
Here, we would like to consider the strong gravitational lensing by the modified Bardeen black hole and its observables. We intend to investigate how the modified black hole parameters \(\mu\),\(q\), and \(g\) affect the various astrophysical consequences such as angular position, separation, magnification, Einstein's ring, and time delays for the relativistic images and compared to the correspon
Figure 1: Variation of the effective potential \(V_{eff}\) for regular Bardeen (left panel) and modified Bardeen (right panel)black holes as a function of radial coordinate \(r\).
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \(\mu\) & \(g\) & \(r_{ph}\) & \(r_{sh}\) & \\ \cline{3-6} & & \(|q|=0.0\)\(|q|=0.1\)\(|q|=0.2\)\(|q|=0.4\) & \(|q|=0.0\)\(|q|=0.1\)\(|q|=0.2\)\(|q|=0.4\) & \(|q|=0.0\)\(|q|=0.1\)\(|q|=0.2\)\(|q|=0.4\) \\ \hline
0 & 0 & 1.50000 1.48309 1.42899 1.09915 & 2.59808 2.58054 2.52504 2.22043 \\ \hline
1 & 0.2 & 1.74892 1.73991 1.71281 1.60617 & 2.95742 2.94772 2.91837 2.79827 \\ \cline{3-6} & 1.2 & 1.58109 1.56574 1.51702 1.24412 & 2.79269 2.77813 2.73253 2.50259 \\ \hline
3 & 0.2 & 2.19323 2.18977 2.17956 2.14160 & 3.50634 3.50177 3.48813 3.43500 \\ \cline{3-6} & 1.2 & 1.84867 1.83975 1.81263 1.69950 & 3.21602 3.20768 3.18241 3.07755 \\ \hline
5 & 0.2 & 2.51788 2.5158 2.50965 2.48650 & 3.90482 3.90178 3.89272 3.85741 \\ \cline{3-6} & 1.2 & 2.15933 2.15468 2.14087 2.08791 & 3.60475 3.59975 3.58479 3.52573 \\ \hline
7 & 0.2 & 2.77523 2.77374 2.76930 2.75243 & 4.22334 4.22102 4.21412 4.18715 \\ \cline{3-6} & 1.2 & 2.2.43328 2.43041 2.42189 2.38949 & 3.93403 3.93056 3.92019 3.3.87958 \\ \hline \end{tabular}
\end{table}
Table 1: Estimation of the photon sphere radius and the shadows radius for the different values of the black hole parameters \(\mu=0,1,3,5,7\) ;\(g=0.2,1.2\); and \(|q|=0,0.1,0.2,0.4\).
dence case of ordinary regular Bardeen(\(\mu=0\)). as well as standard Schwarzschild (\(\mu=0,q=0\)) black holes.
Here, we investigate the strong deflection angle of photon rays due to a modified Bardeen black hole for the case that both the source and observer lie in the equatorial plane (\(\theta=\frac{\pi}{2}\)). To calculate the strong deflection angle of photon rays in the equatorial plane (\(\theta=\frac{\pi}{2}\)), we rewrite the metric(1) by the dimensionless operation,\(t\rightarrow\frac{t}{2M}\),\(r\rightarrow\frac{r}{2M}\), \(q\rightarrow\frac{q}{2M}\),\(g\rightarrow\frac{q}{2M}\) and \(\mu\rightarrow\frac{\mu}{8M^{2}}\) as
\[d\bar{s}^{2}=-A(x)dt^{2}+B(x)dr^{2}+C(x)d\phi^{2} \tag{18}\]
where
\[A(r)=\left(1-\frac{r^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right)\left(1-\frac {\mu}{\left(g^{2}+r^{2}\right)^{3/2}}\right)\]
\[B(r)=\frac{1}{\left(1-\frac{r^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right)}\]
and
\[C(r)=r^{2}\]
When the particle are coming at the closest distance \(r=r_{0}\) to the central black hole, where \(\frac{dr}{d\tau}=0\), one can define the minimum impact parameter \(u_{0}\) in terms of closest distance \(r_{0}\)[31] as
\[u_{0}=\frac{r_{0}}{\sqrt{A(r_{0})}} \tag{19}\]
Behaviour of the photon sphere radius \(r_{ph}\) has been shown as a function of the parameter \(q\) in Fig.3(a);and as a function of the parameters \(\mu\) and \(q\) in Fig.3(b). It is found that the photon radius \(r_{ph}\) is slightly decreased with the parameter \(q\) for the fixed parameters \(\mu\) and \(q\); and while it is increased with the parameter \(\mu\) for the fixed parameters \(g\) and \(q\). In Fig.3(a), it is also observed that the photon radius \(r_{ph}\) for the case of a modified Bardeen black hole is more than the case of an ordinary regular Bardeen black hole (red solid line) and it is more than the value of \(r_{ph}=1.5\), corresponds to the case of Schwarzschild(yellow horizontal line) black hole [31].
When \(r_{0}\to r_{ph}\), deflection angle becomes divergent and for \(r_{0}>r_{ph}\), it becomes finite only. The photon having impact parameter \(u<u_{ph}\) falls into the black hole and for the case when \(u>u_{ph}\), it reaches the closest distance \(r_{0}\) near the black hole; while for the impact parameter \(u=u_{cr}=u_{ph}\), photon revolved in unstable orbit around the black hole. The critical impact parameter for the unstable photon orbit \(u_{ph}\) is given by
\[u_{ph}=\frac{r_{ph}}{\sqrt{A(r_{ph})}} \tag{20}\]
and its behaviours are displayed as a function of the parameter \(q\) in Fig.4(a); and as a function of the parameters \(\mu\) and \(q\) in Fig.4(b). It is found that the critical impact parameter \(u_{ph}\) is slightly decreased with the parameter \(q\) for the fixed parameters \(\mu\) and \(q\); and while it is increased with the parameter \(\mu\) for the fixed parameters \(g\) and \(q\) (see Table.3 also). In Fig.4(a), it is also observed that the critical impact parameter \(u_{ph}/R_{s}\) for the case of
Figure 2: The angular diameter of shadow for modified Bardeen black hole as a function of parameters (\(\mu/8M^{2}\) and \(q/2M\)).The red solid curve corresponds to \(\theta_{d}=39.4615\mu as\) for the modified Bardeen black hole, the blue solid curve corresponds to \(\theta_{d}=38.80\mu as\) for Bardeen black hole and the Green solid curve corresponds to \(\theta_{d}=39.9265\mu as\) for Schwarzschild black hole within \(1\sigma\) region of the measured angular diameter \(\theta_{d}=42\pm 3\mu as\) for \(M87^{*}\)(left panel); and for \(SgrA^{*}\)(right panel), the red solid curve corresponds to \(\theta_{d}=52.15\mu as\) for modified Bardeen black hole, blue solid curve corresponds to \(\theta_{d}=49.15\mu as\) for Bardeen black hole and the Green solid curve correspond to \(\theta_{d}=52.77\mu as\) for Schwarzschild black hole within \(1\sigma\) region of the measured angular diameter \(\theta_{d}=51.8\pm 2.3\mu as\).
modified Bardeen black hole is more than the case of ordinary regular Bardeen black hole (red solid line) and it is more than the value of \(u_{ph}/R_{sh}=2.59808\), corresponds to the case of Schwarzschild(yellow horizontal line) black hole[31].
The strong deflection angle for the modified Bardeen black hole spacetime, as a function of the closest approach distance \(r_{0}\), can be read as [92; 110]
\[\alpha_{D}(r_{0})=I(r_{0})-\pi=2\int_{r_{0}}^{\infty}\frac{\sqrt{B(r)}dr}{ \sqrt{C(r)\sqrt{\frac{A(r_{0})C(r)}{A(r)C(r_{0})}-1}}}dr-\pi \tag{21}\]
The strong deflection angle \(\alpha_{D}(r_{0})\) depends upon the relation between \(r_{0}\) and \(r_{ph}\) and while \(r_{0}\approx r_{ph}\), it is increased. So, we define a new variable z as [31]
\[z=1-\frac{r_{0}}{r} \tag{22}\]
Figure 4: The behaviour of the minimum impact parameter \(u_{ph}/R_{sh}\) vs parameter \(q\) with the different values of \(\mu\) for fixed value of \(g=0.2\) (left panel) ;and the minimum impact parameter \(u_{ph}/R_{sh}\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) (right panel).
Figure 3: The behaviour of the photon sphere radius \(r_{ph}\) vs parameter \(q\) with the different values of \(\mu\) for fixed value of \(g=0.2\) (left panel) ;and the photon sphere radius \(r_{ph}\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) (right panel).
For, \(r_{0}\approx r_{ph}\), the strong deflection angle becomes [29]
\[\alpha_{D}(u)=-\bar{a}\ log\left(\frac{u}{u_{ph}}-1\right)+\bar{b}+\mathcal{O}(u- u_{ph}) \tag{23}\]
where
\[\bar{a}=\sqrt{\frac{2A(r_{ph})B(r_{ph})}{A(r_{ph})C^{\prime\prime}(r_{ph})-A^{ \prime\prime}(r_{ph})C(r_{ph})}} \tag{24}\]
and
\[\bar{b}=-\pi+I_{R}(r_{ph})+\bar{a}\ log\bigg{[}r_{ph}^{2}\bigg{(}\frac{C^{ \prime\prime}_{ph}}{c_{ph}}-\frac{A^{\prime\prime}_{ph}}{A_{ph}}\bigg{)}\bigg{]}, \tag{25}\]
Here, \(I_{R}(r_{ph})=2\int_{0}^{1}\bigg{(}r_{ph}\bigg{[}\sqrt{\frac{B(z)}{C(Z)}} \bigg{(}\frac{A(r_{ph})}{C(r_{ph})}\frac{C(z)}{A(z)}-1\bigg{)}\frac{1}{(1-z)^{ 2}}\bigg{]}-\frac{\bar{a}}{z\ r_{ph}}\bigg{)}dz\) which is obtained numerically.
We numerically obtain the lensing coefficients \(\bar{a}\) and \(\bar{b}\) and \(u_{ph}/R_{sh}\) with the modified Bardeen black hole parameters \(\mu=0,1,3\),\(g=0.2,1.2\) and \(q=0,0.05,0.1,0.2,0.4\) (see Table.2). From this Table, it is seen that for the fixed value of parameters \(g=0.2,1.2\)\(\mu(=0,1,3)\) lensing coefficients \(\bar{a}\) increases with increasing magnitude of the parameter \(q\) while lensing coefficients \(\bar{b}\) decreases ;except for the case when \(g=0.2\) and \(\mu(=3)\). When \(\mu=0\) and \(q=0\), the value of lensing coefficients \(\bar{a}=1\) and \(\bar{b}=-0.40023\), which correspond to the case of the Schwarzschild black hole [31]. Behaviour of the lensing coefficients \(\bar{a}\) and \(\bar{b}\) are displayed in Figs.(5&6). The behaviour of the deflection angle \(\alpha_{D}\) of photon around the modified Bardeen black hole is displayed in Fig.7. In Figs.7(a) and 7(b), it is observed that the deflection angle \(\alpha_{D}\) is increased with the increasing magnitude of charge parameter \(q\) and decreases with the increasing value of \(\mu\), keeping other parameters fixed. Furthermore, it is found that the deflection angle \(\alpha_{D}\) for the case of a modified Bardeen black hole larger than the case of Schwarzschild (\(\mu=0,q=0\))and smaller than the case of an ordinary regular Bardeen (\(\mu=0\)) black hole. The deflection angle \(\alpha_{D}\) decreases with the critical impact parameter \(u\) with the different value of the parameter \(\mu\) for the fixed value of \(q\) and \(g\) (see Fig.7(c) ); and with different magnitude values of parameter \(q\) for the fixed value of \(\mu\) and \(g\) (see Fig.7(d) ).
### Lensing observables
Next, we study the strong lensing observables by modified Bardeen black hole. Here, we assume the case where the observer and source are very far from the black hole(lens) and they are almost aligned. Further, we assume the source is behind the black hole(lens). Therefore, the lens equation can be defined as [76]
\[\beta=\theta-\frac{D_{ls}}{D_{os}}\Delta\alpha_{n} \tag{26}\]
where \(\Delta\alpha_{n}=\alpha_{D}(\theta)-2n\pi\) is the offset deflection angle and \(n\) indicates the number of loops of photon ray around the black hole. Here, the angles \(\beta\) and \(\theta\) respectively are the angular separations between the black hole(lens) and source;and between the observer and source, whereas \(D_{ls}\), \(D_{ol}\),\(D_{os}\) are the lens-source, observer-lens, observer-source distance respectively such that \(Dos=D_{ol}+D_{ls}\).
Using the Eqs. (21) and (26), the angular separation between the black hole(lens) to the \(n^{th}\) relativistic image can be expressed as
\[\theta_{n}=\theta_{n}^{0}-\frac{u_{ph}e_{n}(\theta_{n}^{0}-\beta)D_{os}}{\bar{ a}D_{ol}D_{ls}} \tag{27}\]
where
\[e_{n}=e^{\frac{b-2n\pi}{a}},\]
\[\theta_{n}^{0}=\frac{u_{ph}(1+e_{n})}{D_{ol}}\]
. Here,\(\theta_{n}^{0}\) is the angular image position for the case when photon winds complete \(2n\pi\) around the black hole(lens).
As strong gravitational lensing preserves the surface brightness, the magnification of the relativistic is the ratio of the solid angle subtended by the \(n\)-th image and the source [14]. For the \(n\)-th relativistic image, the magnification is then obtained as ([31])
\[\mu_{n}=\bigg{(}\frac{\beta}{\alpha}\frac{d\beta}{d\alpha}\bigg{)}^{-1}\bigg{|} _{\theta_{0}}=\frac{u_{ph}^{2}(1+e_{n})e_{n}D_{os}}{\bar{\beta}\bar{a}D_{ls}D_{ ol}^{2}} \tag{28}\]
The above equation suggests that the first relativistic image is the brightest image and the magnification decreases exponentially with \(n\) i.e. bright of this image dominates over the other relativistic images. It is clear that the equation (28) becomes divergent when \(\beta\to 0\), suggesting that perfect alignment maximizes the possibility of detection of relativistic images.
Here, We consider the case when the brightest image, i.e. the outermost image \(\theta_{1}\) is resolved as a single image and the remaining inner images are packed together at \(\theta_{\infty}\) (\(\theta_{n}|_{n\rightarrow\infty}=:\theta_{\infty}\)). With the help of the deflection angle in the equation (23), one can obtain strong lensing observables such as the angular position of the set of images \(\theta_{\infty}\), angular separation between the outermost and innermost images \(S\) and relative magnification \(r_{mag}\) between the outermost relativistic image and other pact of inner relativistic images can be defined as [31; 111].
\[\theta_{\infty}=\frac{u_{ph}}{d_{ol}} \tag{29}\]
\[S=\theta_{1}-\theta_{\infty}\approx\theta_{\infty}e^{\frac{(b-2\pi)}{a}} \tag{30}\]
\[r_{mag}=\frac{\mu_{1}}{\sum_{n=2}^{\infty}\mu_{n}}\approx\frac{5\pi}{\bar{a} log(10)} \tag{31}\]
If the strong lensing observables \(\theta_{\infty}\),\(S\), and \(r_{mag}\) are measured from the observation, the lensing coefficients \(\bar{a}\),\(\bar{b}\) and the minimum impact parameter \(u_{ph}\) can be obtained easily by inverting the equations (29),(30) and (31), and Further, compared to the theoretically obtained values. Using these findings, one can identify the nature of the modified Bardeen, ordinary regular Bardeen, and Schwarzschild black hole; and distinguish among them.
Considering the supermassive black holes \(M87^{*}\), \(SgrA^{*}\) and \(NGC7457\) in the nearby galaxies, we estimate the observable quantities \(\theta_{\infty}\),\(S\), and \(r_{mag}\) in the context of a modified Bardeen black hole (See Table.4). The mass and distance from the earth for \(M87^{*}\)[38; 39] are \(M\approx 6.5\times 10^{9}\dot{O}\),\(D_{ol}\approx 16.8Mpc\), for \(SgrA^{*}\) are \(M\approx 4.28\times 10^{6}\dot{O}\),\(D_{ol}\approx 8.32kpc\)[40; 41], and for \(NGC7457\) are \(M\approx 8.95\times 10^{6}\dot{O}\), \(D_{ol}\approx 12.53\ Mpc\)[109].
The behaviour of the strong lensing observables angular image position \(\theta_{\infty}\), angular image separation \(S\), and relative magnification \(r_{mag}\) as a function of the parameter \(q\) and as the function of the parameters \(q\) and \(\mu\) for the fixed value of \(g=0.2\) for \(M87^{*}\) and for \(SgrA^{*}\) has been
Figure 5: The behaviour of the deflection limit coefficient \(\bar{a}\) vs parameter \(q\) with the different values of \(\mu\) for fixed value of \(g=0.2\) (left panel) :and the deflection limit coefficient \(\bar{a}\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) (right panel).
Figure 6: The behaviour of the deflection limit coefficient \(\bar{b}\) vs parameter \(q\) with the different values of \(\mu\) for fixed value of \(g=0.2\) (left panel) :and the deflection limit coefficient \(\bar{b}\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) (right panel).
\begin{table}
\begin{tabular}{l c c c c} \(\mu\) & \(g\) & \(|q|\) & \(\bar{a}\) & \(\bar{b}\) & \(u_{ph}/R_{sh}\) \\ \hline
0 & & 0 & 1.00 & -0.40023 & 2.59808 \\ \hline & & 0.05 & 1.0028 & -0.401611 & 2.59373 \\
0 & & 0.1 & 1.01151 & -0.406055 & 2.58054 \\ & & 0.2 & 1.05179 & -0.429858 & 2.58054 \\ & & 0.4 & 1.55181 & -1.18175 & 2.22043 \\ \hline & & 0.05 & 0.921272 & -0.477287 & 2.955 \\ & 0.2 & 0.1 & 0.923386 & -4.476401 & 2.94772 \\
1 & & 0.2 & 0.931409 & -0.470916 & 2.91837 \\ & & 0.4 & 0.949693 & -0.399493 & 2.79827 \\ & & 0.05 & 1.03705 & -0.516087 & 2.78908 \\ & 1.2 & 0.1 & 1.04578 & -0.52368 & 2.77813 \\ & & 0.2 & 1.08555 & -0.561458 & 2.73253 \\ & & 0.4 & 1.46926 & -1.19512 & 2.50259 \\ \hline & & 0.05 & 0.761981 & -0.384548 & 3.5052 \\ & 0.2 & 0.1 & 0.761421 & -0.381893 & 3.50177 \\
3 & & 0.2 & 0.75901 & -0.37101 & 3.48813 \\ & & 0.4 & 0.746808 & -0.32424 & 3.435 \\ & & 0.05 & 0.986331 & -0.641022 & 3.21394 \\ & & 0.1 & 0.989931 & -0.644732 & 23.20768 \\ & & 0.2 & 1.00474 & -0.660005 & 3.18241 \\ & & 0.4 & 1.06929 & -0.7222588 & 3.07755 \\ \end{tabular}
\end{table}
Table 2: Estimation of strong lensing coefficients with the different value of black hole parameters \(\mu=0,1,3:\)\(g=0.2,1.2\); and \(|q|=0,0.05,0.1,0.2,0.4\).
\begin{table}
\begin{tabular}{l c c|c c|c c|c c} \hline \hline \multicolumn{2}{c|}{parameters} & \multicolumn{2}{c|}{\(M87^{*}\)} & \multicolumn{2}{c|}{\(SgrA^{*}\)} & \multicolumn{2}{c|}{\(NGC7457\)} & \multicolumn{2}{c}{\(M87^{*}\),\(SgrA^{*}\), \(NGC7457\)} \\ \(\mu\) & \(g\) & \(|q|\) & \(\theta_{\infty}(\mu as)\) & \(S(\mu as)\) & \(\theta_{\infty}(\mu as)\) & \(S(\mu as)\) & \(\theta_{\infty}(\mu as)\) & \(S(\mu as)\) & \(r_{mag}\) \\ \hline
0 & 0 & 0 & 19.9633 & 0.024984 & 26.3315 & 0.0329538 & 0.0365211 & \(4.57\times 10^{-5}\) & 6.82188 \\ \hline & & 0.05 & 19.9298 & 0.025377 & 26.2874 & 0.0334722 & 0.03646 & \(4.63\times 10^{-5}\) & 6.80283 \\
0 & & 0.1 & 19.8285 & 0.0266224 & 26.1537 & 0.0351149 & 0.0362745 & \(4.87\times 10^{-5}\) & 6.74426 \\ & & 0.2 & 19.402 & 0.0328069 & 025.912 & 0.0437223 & 0.035494 & \(6.0\times 10^{-5}\) & 6.48597 \\ & & 0.4 & 17.0615 & 0.138949 & 22.504 & 0.183274 & 0.0312125 & \(2.54\times 10^{-4}\) & 4.39608 \\ \hline & & 0.05 & 22.7058 & 0.0147639 & 29.9489 & 0.0194735 & 0.0415383 & \(2.70\times 10^{-5}\) & 7.40485 \\ & 0.2 & 0.1 & 22.6499 & 0.0149914 & 29.8751 & 0.0197736 & 0.041436 & \(2.74\times 10^{-5}\) & 7.3879 \\ & & 0.2 & 22.4243 & 0.0159015 & 29.5776 & 0.020974 & 0.0410234 & \(2.91\times 10^{-5}\) & 7.32426 \\
1 & & 0.4 & 21.5015 & 0.0189008 & 28.3604 & 0.0249302 & 0.0393352 & \(3.46\times 10^{-5}\) & 7.18325 \\ & & 0.05 & 21.4309 & 0.0304544 & 28.2673 & 0.0401692 & 0.039206 & \(5.57\times 10^{-5}\) & 6.57816 \\
1.2 & 0.1 & 21.3467 & 0.0318095 & 28.1563 & 0.0419567 & 0.0390521 & \(5.82\times 10^{-5}\) & 6.52325 \\ & & 0.2 & 20.9964 & 0.0383546 & 27.6942 & 0.055896 & 0.038411 & \(7.02\times 10^{-5}\) & 6.28426 \\ & & 0.4 & 19.2295 & 0.118434 & 25.3637 & 0.156215 & 0.0351788 & \(2.17\times 10^{-4}\) & 4.64307 \\ \hline & & 0.05 & 26.9335 & 0.00426567 & 35.5252 & 0.00562641 & 0.0492725 & \(7.8\times 10^{-6}\) & 8.95282 \\ & 0.2 & 0.1 & 26.9071 & 0.00424895 & 35.4904 & 0.00560436 & 0.0492224 & \(7.77\times 10^{-6}\) & 8.95941 \\ & & 0.2 & 26.8023 & 0.00417578 & 35.3522 & 0.00550785 & 0.0490325 & \(7.64\times 10^{-6}\) & 8.98787 \\
3 & & 0.4 & 26.394 & 0.00379368 & 34.8137 & 0.00500385 & 0.0482857 & \(6.94\times 10^{-6}\) & 9.13472 \\ & & 0.05 & 24.6955 & 0.0220698 & 32.5732 & 0.02911 & 0.0451782 & \(4.03\times 10^{-5}\) & 6.91642 \\
1.2 & 0.1 & 24.6473 & 0.0225118 & 32.5098 & 0.029693 & 0.0450902 & \(4.11\times 10^{-5}\) & 6.89127 \\ & & 0.2 & 24.4532 & 0.0243877 & 32.2537 & 10.0321674 & 0.044735 & \(4.46\times 10^{-5}\) & 6.7897 \\ & & 0.4 & 23.6474 & 0.0337682 & 31.1909 & 0.0445402 & 0.043261 & \(6.17\times 10^{-5}\) & 6.37982 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Estimation of strong lensing observables for supermassive BHs \(M87^{*}\),\(SgrA^{*}\), \(NGC7457\) with the different value of black hole parameters \(\mu=0,1,3:\)\(g=0.2,1.2\); and \(|q|=0,0.05
shown in Figs.8,9& 10 (see Table.3 also). It is observed that the angular image position \(\theta_{\infty}\) and relative magnification \(r_{mag}\) decrease with the increasing magnitude of charge parameter \(q\) while angular image separation \(S\) increases with the increasing magnitude of both the parameters \(\mu\) and \(q\), keeping other parameters fixed. But the relative magnification \(r_{mag}\) increases with increasing value of the parameter \(\mu\) for the fixed value of \(g\) and \(q\). Furthermore, it is found that the relative magnification \(r_{mag}\) for the case of modified Bardeen black hole larger than the case of Schwarzschild (\(\mu=0,q=0\))as well as ordinary regular Bardeen (\(\mu=0\)) black hole.
### Einestein Ring
When the source, black hole (lens), and observer are perfectly aligned i.e., when \(\beta=0\), a black hole (lens) deflects the light rays in all direction such that a ring-shaped image is produced, which is called an Einstein ring [112; 113; 114; 115; 46].
By simplifying the equation (27) for \(\beta=0\),we obtain the angular radius of \(n^{th}\) relativistic images as follows
\[\theta_{n}=\theta_{n}^{0}\bigg{(}1-\frac{u_{ph}e_{n}D_{os}}{\bar{a}D_{ls}D_{ol} }\bigg{)} \tag{32}\]
Considering the case where the black hole (lens) is located at a half distance between the source and receiver i.e., \(D_{os}=2D_{ol}\) and taking \(D_{ol}>>u_{ph}\), thus the angular radius of the \(n^{th}\) relativistic Einstein ring in the context of a modified Bardeen black hole is given by
\[\theta_{n}^{E}=\frac{u_{ph}(1+e_{n})}{D_{ol}} \tag{33}\]
Figure 7: The behaviour of the deflection angle \(\alpha_{D}\) vs parameter \(q\) with the different values of \(\mu\) for fixed value of \(g=0.2\) ( panel (a)) ; the deflection angle \(\alpha_{D}\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) (panel (b)) ;the deflection angle \(\alpha_{D}\) as a function of impact parameter \(u\) with different value of \(q\) for the fixed value of parameters \(\mu\) and \(q\) (panel (c)); and the deflection angle \(\alpha_{D}\) as a function of impact parameter \(u\) with different value of \(\mu\) for the fixed value of parameters \(\mu\) and \(q\) (panel (d)).The solid red line corresponds to the case of regular Bardeen and the solid yellow line corresponds to the case of modified Bardeen black hole. The dots in panels (c)&(d) indicate the values of the impact parameter \(u=u_{ph}\), where the deflection angle \(\alpha_{D}\) becomes divergent.
The angular radius \(\theta_{1}^{E}\) denotes the outermost Einstein ring and which is shown in Fig.11 for the supermassive black holes \(M87^{*}\) (Fig.11(a)&(b)) and \(SgrA^{*}\)(Fig.11(c)&(d)). It is observed that for the fixed parameter \(\mu\) and \(g\), the outermost Einstein rings decrease with the increasing magnitude of the parameter \(q\) in the context of both the supermassive black holes \(SgrA^{*}\) and \(M87^{*}\).Further, it is found that the outermost Einstein rings for the modified Bardeen black hole are more than the ordinary regular Bardeen black hole.
### Time delay in strong field Limit
Time delay is one of the most important observable by the strong gravitational lensing phenomenon, which is obtained by the time difference between the formation of two relativistic images. The time difference is caused when the photon travels in a different path around the black hole. The time travel by the different photon paths for the different relativistic images is different and hence, there is a time difference between the different relativistic images. If the time signals of two relativistic images are distinguished from the observation, one can calculate the time delay between two signals[32].The time taken by a photon to revolve around the black hole [32] is read as
\[\tilde{T}=\tilde{a}log\bigg{(}\frac{u}{u_{ph}}-1\bigg{)}+\tilde{b}+\mathcal{O }(u-u_{ph}) \tag{34}\]
With the help of the above Eq.(34), one can compute the time difference between two relativistic images.
For spherically static symmetric black hole spacetime, the time delay between two relativistic images, when the
Figure 8: The behaviour of the angular image position \(\theta_{\infty}\) vs parameter \(q\) with the different values of \(\mu\) for the fixed value of \(g=0.2\) for \(M87^{*}\)(upper left panel) and for \(SgrA^{*}\) (lower left panel) ;and the angular image position \(\theta_{\infty}\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) for \(M87^{*}\) (lower right panel) and for \(SgrA^{*}\) (lower right panel).
relativistic images are on the same side of the black hole, are obtained as
\[\Delta T_{2,1}=2\pi u_{c}=2\pi D_{ol}\theta_{\infty} \tag{35}\]
If the time delay \(\Delta T_{2,1}\) between two relativistic images with an accuracy 5% and critical impact parameter \(u_{ph}\) with a negligible error are obtained, then it can be possible to measure the black hole distance with an accuracy of 5%.
The time delay \(\Delta T_{2,1}\) for various supermassive black holes in the context of the standard Schwarzschild (\(\mu=0\),\(q=0\)), ordinary regular Bardeen (\(\mu=0\),\(q=0.3\)) and modified Bardeen (\(\mu=3\),\(q=0.3\)) black holes have been estimated numerically (see Table.5). It is found that the time delay \(\Delta T_{2,1}\) between two relativistic images in the context of modified Bardeen black hole (\(\mu=3\),\(g=0.2\),\(q=0.3\)) much more than cases of standard Schwarzschild (\(\mu=0\),\(q=0\)) as well as ordinary regular Bardeen (\(\mu=0\),\(q=0.3\)).
## V Comparison with observation:
It is easily known that the standard astrophysical black holes like ordinary regular Bardeen, and Schwarzschild black holes are a special form modified Bardeen black holes. A few works have been investigated on the shadow cast and gravitational lensing in the context of ordinary regular Bardeen as well as Schwarzschild black holes, which are already discussed previously. In the present work, we extend the work done by some literature, Bozza, Virbhadra Ellis for Schwarzschild black hole [31; 14; 32],
Figure 9: The behaviour of the angular image separation \(S\) vs parameter \(q\) with the different values of \(\mu\) for the fixed value of \(g=0.2\) for \(M87^{*}\)(upper left panel) and for \(SgrA^{*}\) (lower left panel) ;and the angular image separation \(S\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) for \(M87^{*}\) (lower right panel) and for \(SgrA^{*}\) (lower right panel).
He et al. [117] and also Stuchik and Schee [16; 33] work's for regular Bardeen black holes; and Islam et al.'s [13] works for Bardeen black hole in 4D Einstein's gravity. Islam's et al. investigated the strong gravitational lensing by Bardeen black hole in 4 dimensional Einstein's Gauss-Bonnet gravity and constrained the black hole parameters using some upper massive black hole data. In their works, it is also mentioned that the regular Bardeen black hole solution is a special solution of Bardeen black hole in 4-dimensional Einstein's Gauss-Bonnet gravity. Recently, He et al. studied the shadow and observed properties of Bardeen black holes surrounded by dif
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Galaxy & \(M(M_{\odot})\) & \(D_{ol}(Mpc)\) & \(\Delta T_{2,1}\) & \(\Delta T_{2,1}\) & \(\Delta T_{2,1}\) & \\ & & & (\(\mu=0,q=0\)) & (\(\mu=0,q=0.3\)) & (\(\mu=3,g=0.2,q=0.3\)) & \\ \hline \hline M87 & \(6.5\times 10^{9}\) & 16.68 & 17378.9 & 16186.6 & 21000.2 & \\ NGC 4472 & \(2.54\times 10^{9}\) & 16.72 & 6791.12 & 6325.3 & 8206.24 & \\ NGC 4395 & \(3.6\times 10^{5}\) & 4.3 & 0.962522 & 0.896499 & 1.16309 & \\ NGC1332 & \(1.47\times 10^{9}\) & 22.66 & 3930.3 & 3660.71 & 4749.28 & \\ NGC 7457 & \(8.95\times 10^{6}\) & 12.53 & 23.9294 & 28.2880 & 28.9157 & \\ NGC 1399 & \(8.81\times 10^{8}\) & 20.85 & 2355.5 & 2193.93 & 2846.34 & \\ NGC 1374 & \(5.90\times 10^{8}\) & 19.57 & 1577.47 & 1469.26 & 1906.17 & \\ NGC 4649 & \(4.72\times 10^{9}\) & 16.46 & 12619.7 & 11754.1 & 15249.4 & \\ NGC 3607 & \(1.37\times 10^{8}\) & 22.65 & 366.293 & 341.168 & 442.62 & \\ NGC 4459 & \(6.96\times 10^{7}\) & 16.01 & 186.088 & 173.323 & 224.864 & \\ NGC 4486A & \(1.44\times 10^{7}\) & 18.36 & 38.5009 & 35.86 & 46.5236 & \\ NGC 1316 & \(1.69\times 10^{8}\) & 20.95 & 451.85 & 420.857 & 546.006 & \\ NGC 4382 & \(1.30\times 10^{7}\) & 17.88 & 34.7577 & 32.3736 & 42.0004 & \\ NGC 5077 & \(8.55\times 10^{8}\) & 38.7 & 2285.9 & 2129.9 & 2762.34 & \\ NGC 7768 & \(1.34\times 10^{9}\) & 116.0 & 3582.72 & 3336.97 & 4329.28 & \\ NGC 4697 & \(2.02\times 10^{8}\) & 12.54 & 546 & 503.036 & 652.622 & \\ NGC 5128 & \(5.69\times 10^{7}\) & 3.62 & 152.132 & 141.697 & 183.833 & \\ NGC 5576 & \(2.73\times 10^{8}\) & 25.68 & 729.912 & 679.845 & 882.009 & \\ NGC 3608 & \(4.65\times 10^{8}\) & 22.75 & 1243.26 & 1157.98 & 1502.32 & \\ M32 & \(2.45\times 10^{6}\) & 0.806 & 6.5505 & 6.10118 & 7.91547 & \\ Cygnus A & \(2.66\times 10^{9}\) & 242.7 & 7111.97 & 6624.13 & 8593.93 & \\ \hline \end{tabular}
\end{table}
Table 4: Estimation of time delay for some supermassive BHs in the context of Schwarzschild (\(\mu=0\),\(q=0\)), ordinary regular Bardeen (\(q=0.3,\mu=0\)) and modified Bardeen (\(\mu=3,q=0.3\)) black hole spacetimes. Mass(M) and distance \(D_{ol}\) respectively are taken in solar mass and Mpc units [109]. Time delays \(\Delta T_{2,1}\) are estimated in minutes.
Figure 10: The behaviour of the relative magnification \(r_{mag}\) vs parameter \(q\) with the different values of \(\mu\) for the fixed value of \(g=0.2\) (left panel) ;and the relative magnification \(r_{mag}\) as a function of both the parameters \(\mu\) and \(q\) for the fixed value of \(g=0.2\) (right panel).Note that the relative magnification \(r_{mag}\) does not the mass or distance of the black hole.
ferent accretion models. In this paper, we discuss the Shadow and strong gravitational lensing effects of the modified Bardeen black hole and compared it to the Schwarzschild (\(\mu=0\) and \(q=0\)) as well as ordinary regular Bardeen(\(\mu=0\)) black hole.
In the Shadow discussion, the radius of the black hole shadow has been obtained numerically (See. Table.1). It is found that the radius of the black hole corresponds to the modified Bardeen black hole is larger than the regular Bardeen as well as Schwarzschild black holes. The angular diameter of shadow for modified Bardeen black hole as a function of parameters (\(\mu/8M^{2}\) and \(q/2M\)) has been displayed in Fig.2. It is observed that the angular diameter of the modified Bardeen black hole is larger than the regular Bardeen black hole in the context of supermassive black holes \(M87^{*}\) and \(SgrA^{*}\) (see Fig.2). Furthermore, it is observed that the red solid curve corresponds to \(\theta_{d}=39.4615\mu as\) for modified Bardeen black hole, blue solid curve corresponds to \(\theta_{d}=38.80\mu as\) for Bardeen black hole and the Green solid curve correspond to \(\theta_{d}=39.9265\mu as\) for Schwarzschild black hole within \(1\sigma\) region of the measured angular diameter \(\theta_{d}=42\pm 3\mu as\) for \(M87^{*}\)(Fig.(Fig.2(a)); and for \(SgrA^{*}\)(2),the red solid curve corresponds to \(\theta_{d}=52.15\mu as\) for modified Bardeen black hole, blue solid curve correspond to \(\theta_{d}=49.15\mu as\) for Bardeen black hole and the Green solid curve corresponds to \(\theta_{d}=49.15\mu as\) for Bardeen black hole and the Green solid curve corresponds to \(\theta_{d}=49.
curve correspond to \(\theta_{d}=52.77\mu as\) for Schwarzschild black hole within \(1\sigma\) region of the measured angular diameter \(\theta_{d}=51.8\pm 2.3\mu as\).
In the strong gravitational lensing investigation, we apply the method proposed by [31] which can be used to distinguish the various types of static spherically symmetric black holes and also investigate the various astrophysical consequences by considering the supermassive black holes(see Table.III & Table.V ) in the context of modified Bardeen, ordinary regular Bardeen, and Schwarzschild black holes. Furthermore, we estimate the strong observable quantities such as angular position \(\theta_{\infty}\), \(S\) and \(r_{mag}\) in the context of modified Bardeen, ordinary regular Bardeen, and Schwarzschild black holes by considering the supermassive black hole \(NGC4649\) having mass \(M=4.72\times 10^{9}\) and distance \(D_{OL=0.008}Mpc\) and numerically compared among them.
Considering the supermassive black hole having the same mass and distance, it is found from our estimations that the angular position of innermost image \(\theta_{\infty}\) and angular separation \(S\) is always greater than the case ordinary regular Bardeen as well as Schwarzschild black holes. Also, the modified Bardeen black hole has larger relative magnifications. the numerical differences of \(\theta_{\infty}\), \(S\) and \(r_{mag}\) in between modified Bardeen (\(\mu=3\),\(g=0.2\), \(q=0.3\) and ordinary regular Bardeen(\(\mu=0\), \(q=0.3\) are respectively as \(\sim 5.9\mu as\), \(\sim 0.3\mu as\) and \(\sim 3.1\) magnitude while the differences in between modified Bardeen (\(\mu=3\),\(g=0.2\), \(q=0.3\) and standard Schwarzschild black hole (\(\mu=0\), \(q=0\) ) are respectively as \(\sim 4.9\mu as\), \(\sim 0.18\mu as\) and \(\sim 2.2\) magnitude.It is also observed that the angular position \(\theta_{\infty}\in(19.59,19.82)\), separation \(S\in(0.00297,0.003138)\) and relative magnification \(r_{mag}\in(3.47,3.5)\); when \(\mu=03\),\(g=0.2\) and \(0\leq|q|\leq 3\).These findings suggest that the outermost image for the modified Bardeen black hole is very nearer to the innermost images and it may be possible to separate the images from the other black hole images. In other words, If the outermost relativistic image can be detected, one can distinguish the modified Bardeen black hole from the other standard astrophysical black holes like Schwarzschild, ordinary regular Bardeen black holes, etc, using the technology method. However, it is so difficult in observations as angular separation of the relativistic images is not more than \(\sim 0.3\mu as\). It also has been observed that Einstein's ring \(\theta_{\infty}^{E}\)for the modified Bardeen black hole is larger than the other standard astrophysical black holes such as Schwarzschild and ordinary regular Bardeen black holes (See in Fig.11). Further, it is observed (See Table.IV) that time delays between two relativistic images for the case of a modified Bardeen black hole (\(\sim 15249.4minutes\)) are significantly much more than the other astrophysical black holes such as Schwarzschild black hole (\(\sim 12619.7minutes\) ) as well as ordinary regular Bardeen black holes (\(\sim 11754.1minutes\) ) in the context of supermassive black hole \(NGC4649\). These results suggest that if one can distinguish the first and second relativistic images from the observation, the time delay between them may provide a better chance to distinguish the modified Bardeen black hole from the other astrophysical black holes such as a Schwarzschild or an ordinary regular Bardeen black hole. Thus, a modified Bardeen black hole could be distinguished quantitatively from a Schwarzschild or an ordinary regular Bardeen black hole.
If a modified Bardeen black hole is identified and confirmed to exist, it would have several significant implications and consequences for our understanding of black holes and general relativity. Here are some potential implications:
* The existence of a modified Bardeen black hole would provide empirical evidence for alternative theories of gravity that deviate from the predictions of general relativity. It would indicate that the standard black hole solution of general relativity is not the only viable description of black holes and that modifications are necessary to explain the observed astrophysical phenomena.
* The modified Bardeen black hole would exhibit distinct astrophysical signatures and observational features compared to standard black holes. These could include variations in the gravitational lensing effects, the structure of the event horizon, the formation of accretion disks, and the emission of gravitational waves. Identifying and characterizing these unique signatures would deepen our understanding of the underlying physics and properties of black holes.
* The modified Bardeen black hole could provide insights into the nature of dark matter. Some modified gravity theories propose that the effects attributed to dark matter can be explained by modifications to the gravitational laws at large scales. Observations of the modified Bardeen black hole could offer constraints on such theories and provide clues about the nature of dark matter.
* The no-hair theorem in general relativity states that a black hole is characterized solely by its mass, electric charge, and angular momentum. If the modified Bardeen black hole possesses additional parameters or properties beyond these three, it would challenge the no-hair theorem. Confirming the existence of a modified Bardeen black hole would require revisiting our understanding of black hole uniqueness and the fundamental properties of black holes.
* The discovery of a modified Bardeen black hole would push the boundaries of our current understanding of fundamental physics. It would inspire further theoretical investigations, stimulate new research directions, and potentially lead to the development of more comprehensive theories that can explain the behavior of black holes in modified gravity scenarios.
In summary, identifying a modified Bardeen black hole would have profound implications for our understanding of gravity, astrophysics, and fundamental physics. It would open up new avenues of exploration and deepen our knowledge of the nature and properties of black holes.
## VI Results and conclusions
In this paper, we have discussed the observational signatures of the modified Bardeen black hole through shadow and strong gravitational lensing observations. We compared these signatures with those of other astrophysical black holes, such as the Schwarzschild black hole and the ordinary regular Bardeen black hole. We examined how the parameters of the modified Bardeen black hole affect the shadow and strong lensing observables.
First, we derived the null geodesics for the modified Bardeen black hole using the Hamiltonian-Jacobi action and reviewed the photon orbit around this black hole. Numerically estimating the photon sphere radius and shadow radius, we found that for fixed values of the parameters \(\mu\) and \(g\), these radii decrease with increasing magnitude of the charge parameter \(q\), while they increase with increasing magnitude of \(\mu\) for fixed values of \(q\) and \(g\). Furthermore, we observed that the shadow radius for the modified Bardeen black hole is larger than that of the Schwarzschild black hole and the ordinary regular Bardeen black hole. We also obtained the angular diameter of the black hole shadow with the parameters \(\mu\) and \(q\) for a fixed parameter \(g\)(\(=0.2\)), considering the supermassive black holes \(M87^{*}\) and \(SgrA^{*}\). It can be seen that the angular diameter of the black hole shadow for the modified Bardeen is larger than the Schwarzschild, as well as regular Bardeen black holes. The modified black hole parameters q and \(\mu\) for the fixed value of g have been constrained by the EHT collaboration data for the angular shadow diameter of \(M87^{*}\) and \(SgrA^{*}\). It has been observed that the constraint ranges of the parameters \(\mu\) and \(q\) of modified Bardeen black hole as \(-0.89\leq\mu/8M^{2}\leq 0.4\) and \(0\leq|q|\leq 0.185\) for \(M87^{*}\); and \(-1.38\leq\mu/8M^{2}\leq 0.1\) and \(0\leq|q|\leq 0.058\) for \(SgrA^{*}\), keeping the fixed value \(g/2M=0.2\). Modified Bardeen black holes with the additional parameters \(\mu\),\(g\) and \(q\) besides the mass M of the black hole as the supermassive black holes \(M87^{*}\) and \(SgrA^{*}\); and it is observed to be a viable astrophysical black hole candidate, the EHT result constrains the (\(\mu\), \(q\)) parameter space.
Next, we investigated strong gravitational lensing by the modified Bardeen black hole and examined its astrophysical consequences. We studied the effects of the modified Bardeen black hole parameters \(\mu\), \(g\), and \(q\) on the strong deflection angle and strong lensing observables. By revisiting the null geodesic equations and numerically estimating the photon radius, we obtained the lensing coefficients \(\bar{a}\), \(\bar{b}\), and \(u_{ph}/R_{sh}\). Our results showed that for fixed values of \(\mu\) and \(g\), \(\bar{a}\) increases while \(\bar{b}\) decreases with increasing magnitude of the charge parameter \(q\). Conversely, for fixed values of \(q\) and \(g\), \(\bar{a}\) decreases while \(\bar{b}\) increases with increasing value of the parameter \(\mu\). We observed that the deflection angle \(\alpha_{D}\) slightly increases initially, reaches a maximum value, and then decreases with increasing magnitude of the charge parameter \(q\) for fixed values of \(\mu\) and \(g\). However, it always decreases with increasing value of \(\mu\) for fixed values of \(q\) and \(g\). Comparatively, the deflection angle for the modified Bardeen black hole is smaller than that for other astrophysical black holes, such as the Schwarzschild black hole and the ordinary regular Bardeen black hole.
We also numerically estimated the strong lensing observables for the relativistic images in the context of the modified Bardeen black hole, considering the supermassive black holes \(M87^{*}\), \(SgrA^{*}\), and \(NGC7457\). Our results showed that the angular position \(\theta_{\infty}\) and magnification \(r_{\rm mag}\) of the relativistic images in the context of the modified Bardeen black hole are larger than those for the Schwarzschild black hole and the ordinary regular Bardeen black hole. However, the angular separation \(S\) of the relativistic images in the context of the modified Bardeen black hole is smaller than that for the Schwarzschild black hole and the ordinary regular Bardeen black hole. We provided specific numerical ranges for the observables \(\theta_{\infty}\) and \(S\) for different values of the parameters \(\mu\), \(g\), and \(q\), considering the supermassive black holes \(M87^{*}\), \(SgrA^{*}\), and \(NGC7457\). In the cases, where \(\mu=0\) and \(0\leq|q|\leq 0.4\),\(\theta_{\infty}\in(17.06,19.97)\mu as\) for \(M87^{*}\),\(\theta_{\infty}\in(22.5,26.3)\mu as\) for \(SgrA^{*}\) and \(\theta_{\infty}\in(0.031,0.037)\mu as\) for \(NGC7457\); when \(\mu=1\),\(g=0.2\) and \(0<|q|\leq 0.4\),\(\theta_{\infty}\in(21.5,22.8)\mu as\) for \(M87^{*}\),\(\theta_{\infty}\in(28.3,30)\mu as\) for \(SgrA^{*}\) and \(\theta_{\infty}\in(0.03,0.042)\mu as\) for \(NGC7457\); when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\),\(\theta_{\infty}\in(19.2,21.44)\mu as\) for \(M87^{*}\),\(\theta_{\infty}\in(25.3,28.3)\mu as\) for \(SgrA^{*}\) and \(\theta_{\infty}\in(0.35,0.4)\mu as\) for \(NGC7457\); when \(\mu=3\),\(g=0.2\) and \(0<|q|\leq 0.4\),\(\theta_{\infty}\in(26.3,26.94)\mu as\) for \(M87^{*}\),\(\theta_{\infty}\in(34.8,35.53)mu as\) for \(SgrA^{*}\) and \(\theta_{\infty}\in(0.048,0.05)\mu as\) for \(NGC7457\); and when \(\mu=1\),\(g=0.2\) and \(0<|q|\leq 0.4\),\(\theta_{\infty}\in(23.6,24.7)\mu as\) for \(M87^{*}\),\(\theta_{\infty}\in(31.19,32.58)\mu as\) for \(SgrA^{*}\) and \(\theta_{\infty}\in(0.048,0.05)\mu as\) for \(NGC7457\). Moreover, the angular separation \(S\in(0.024,0.14)\mu as\) for \(M87^{*}\), \(S\in(0.032,0.184)\mu as\) for \(SgrA^{*}\),\(S\in(4.5\times 10^{-5},2.54\times 10^{-4})\mu as\) for NGC 7457 for the case when \(\mu=0\) and \(0\leq|q|\leq 0.4\); \(S\in(0.014,0.019)\mu as\) for \(M87^{*}\), \(S\in(0.019,0.025)\mu as\) for \(SgrA^{*}\),\(S\in(2.6\times 10^{-5},3.5\times 10^{-5})\mu as\) for NGC 7457 for the case when \(\mu=1\),\(g=0.2\) and \(0<|q|\leq 0.4\); \(S\in(0.03,0.19)\mu as\) for \(M87^{*}\), \(S\in(0.04,0.16)\mu as\) for \(SgrA^{*}\),\(S\in(5.5\times 10^{-5},2.18\times 10^{-4})\mu as\) for NGC 7457 for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\); for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\); for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\); for the case when \(\mu=3\),\(g=0.2\) and \(0<|q|\leq 0.4\). For the case when \(\mu=3\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=3\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=3\),\(g=0.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=3\),\(g=0.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=3\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=0.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=0.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=0.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=3\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=3\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=1\),\(g=1.2\) and \(0<|q|\leq 0.4\). for the case when \(\mu=3\),\(g=1.
Fig.11 for the cases of modified Bardeen (\(\mu=3,g=0.2\)) as well as ordinary regular Bardeen (\(\mu=0\)) black holes.
Our analysis revealed that the outermost Einstein rings \(\theta_{n}^{E}\) for the modified Bardeen black hole are larger compared to those of the ordinary regular Bardeen black hole. This implies that the modified Bardeen black hole exhibits a larger angular separation between multiple relativistic images, enhancing the detectability and distinguishing it from the ordinary regular Bardeen black hole.
Moreover, when considering various supermassive black holes, we examined the time delay between the first and second-order relativistic images for the modified Bardeen, ordinary Bardeen, and Schwarzschild black holes. Notably, we found that the time delay for the modified Bardeen black hole (\(\sim 15249.4minutes\)) is significantly greater than that for the Schwarzschild black hole (\(\sim 12619.7minutes\) ) and the ordinary regular Bardeen black holes (\(\sim 117544.1minutes\) ) in the context of the supermassive black hole \(NGC4649\). This suggests that the modified Bardeen black hole exhibits distinct temporal signatures, providing an avenue for its detection and differentiation from other astrophysical black holes.
## Acknowledgements
N.U.M would like to thank CSIR, Govt. of India for providing Senior Research Fellowship (No. 08/003(0141))/2020-EMR-I).
|
2302.13989 | Near braces and p-deformed braided groups | Motivated by recent findings on the derivation of parametric non-involutive
solutions of the Yang-Baxter equation we reconstruct the underlying algebraic
structures, called near braces. Using the notion of the near braces we produce
new multi-parametric, non-degenerate, non-involutive solutions of the
set-theoretic Yang-Baxter equation. These solutions are generalisations of the
known ones coming from braces and skew braces. Bijective maps associated to the
inverse solutions are also constructed. Furthermore, we introduce the
generalized notion of p-deformed braided groups and p-braidings and we show
that every p-braiding is a solution of the braid equation. We also show that
certain multi-parametric maps within the near braces provide special cases of
p-braidings. | Anastasia Doikou, Bernard Rybolowicz | 2023-02-27T17:29:23Z | http://arxiv.org/abs/2302.13989v3 | # Near Braces and \(p\)-deformed Braided Groups
###### Abstract.
Motivated by recent findings on the derivation of parametric non-involutive solutions of the Yang-Baxter equation we reconstruct the underlying algebraic structures, called near braces. Using the notion of the near braces we produce new multi-parametric, non-degenerate, non-involutive solutions of the set-theoretic Yang-Baxter equation. These solutions are generalisations of the known ones coming from braces and skew braces. Bijective maps associated to the inverse solutions are also constructed. Furthermore, we introduce the generalized notion of \(p\)-deformed braided groups and \(p\)-braidings and we show that every \(p\)-braiding is a solution of the braid equation. We also show that certain multi-parametric maps within the near braces provide special cases of \(p\)-braidings.
Key words and phrases:Groups; skew braces, braiding, set-theoretic Yang-Baxter equation 2010 Mathematics Subject Classification: 16S70; 16Y99; 08A99
## 1. Introduction
The aim of the present study is two-fold: on the one hand, motivated by recent findings on parametric solutions [17] of the set-theoretic [18, 21] Yang-Baxter equation (YBE) [3, 38] we derive the underlying algebraic structure associated to these solutions. On the other hand using the derived algebraic frame we introduce novel multi-parametric classes of solutions of the YBE.
It is well established now that braces, first introduced by Rump [36], describe all non-degenerate involutive solutions of the YBE, whereas skew braces were later introduced to describe non-involutive, non-degenerate solutions of the YBE [27]. Indeed, based on the ideas of [36] and [27] and on recent findings regarding parametric solutions of the set-theoretic YBE [17] we construct the generic algebraic structure, called near brace, that provides solutions to the set-theoretic braid equation. Moreover, motivated by the definition of the braided group [34] and the work of [26], we introduce an extensive definition of a \(p\)-deformed braided group and \(p\)-braidings, which are solutions of the set-theoretic braid equation. All the parametric solutions derived here are indeed \(p\)-braidings. It is worth noting that the study of solutions of the set-theoretic Yang-Baxter equation and the associated algebraic structures have created a particularly active new field during the last decade or so (see for instance [1, 2, 10, 9, 12, 11]). The key observation is that by relaxing more and more conditions
on the underlying algebraic structures one identifies more general classes of solutions (see e.g. [8, 9, 28, 29, 32, 33, 37], [23]-[25]). It is also worth noting that interesting links with quantum integrable systems [13, 14] as well the quasi-triangular quasi-bialgebras [15]-[17] have been recently established, opening up new intriguing paths of investigations.
We briefly describe what is achieved in this study, and in particular what are the findings in each section. In the remaining of this section we review some necessary ideas on non-degenerate set-theoretic solutions of the YBE and the associated algebraic structures, i.e. braces and skew braces. In section 2 inspired by the parametric solutions of the YBE introduced in [17] we reconstruct the generic associated algebraic structure called near brace. In fact, every near brace can turn to a skew brace by defining a suitably modified (deformed) addition; this is described in Theorem 2.6. The key idea is to simultaneously consider \(\check{r}\) and its inverse given that we are exclusively interested in non-degenerate solutions. of the braid equation. Having derived the underlying algebraic structure we move on to Subsection 2.1 to extract multi-parametric bijective maps and hence to identify non-degenerate, multi-parametric solutions of the YBE as well as their inverses. In subsection 2.2 we provide a generalized definition of the braided group and braidings (\(p\)-braidings, \(p\) stands for parametric) by relaxing some of the conditions appearing in the definition of [34] (see also relevant findings in [26].) Furthermore, we show that the generalized \(p\)-braidings are non-degenerate solutions of the YBE and the bijective maps coming for the near braces provide automatically \(p\)-braidings.
### Preliminaries
Before we start our analysis and present our findings in the subsequent section we review below basic preliminary notions relevant to our investigation. Specifically, we recall the problem of solving the set-theoretic braid equation and some fundamental results. Let \(X=\{x_{1},\ldots x_{n}\}\) be a set and \(\check{r}^{z}:X\times X\to X\times X\), where \(z\in X\) is a fixed parameter. We denote
\[\check{r}^{z}(x,y)=(\sigma_{x}^{z}(y),\tau_{y}^{z}(x)). \tag{1.1}\]
We say that \(\check{r}^{z}\) is non-degenerate if \(\sigma_{x}^{z}\) and \(\tau_{y}^{z}\) are bijective maps, and \((X,\check{r})\) is a set-theoretic solution of the braid equation if
\[(\check{r}^{z}\times\mathrm{id})(\mathrm{id}\times\check{r}^{z})(\check{r}^{ z}\times\mathrm{id})=(\mathrm{id}\times\check{r}^{z})(\check{r}^{z}\times \mathrm{id})(\mathrm{id}\times\check{r}^{z}). \tag{1.2}\]
The map \(\check{r}\) is called involutive if \(\check{r}^{z}\circ\check{r}^{z}=\mathrm{id}\).
We also introduce the map \(r:X\times X\to X\times X\), such that \(r=\check{r}^{z}\pi\), where \(\pi:X\times X\to X\times X\) is the flip map: \(\pi(x,y)=(y,x)\). Hence, \(r(y,x)=(\sigma_{x}(y),\tau_{y}(x))\), and it satisfies the YBE:
\[r_{12}\ r_{13}\ r_{23}=r_{23}\ r_{13}\ r_{12}, \tag{1.3}\]
where we denote \(r_{12}(y,x,z)=(\sigma_{x}(y),\tau_{y}(x),z)\), \(r_{23}(z,y,x)=(z,\sigma_{x}(y),\tau_{y}(x))\) and
\(r_{13}(y,z,x)=(\sigma_{x}(y),z,\tau_{y}(x))\).
We review now the basic definitions of the algebraic structures that provide set-theoretic solutions of the braid equation, such as left skew braces and braces. We also present some key properties associated to these structures that will be useful when formulating some of the main findings of the present study, summarized in Section 4.
**Definition 1.1** ([35, 36, 12]).: A _left skew brace_ is a set \(B\) together with two group operations \(+,\circ:B\times B\to B\), the first is called addition and the second is called multiplication, such that \(\forall a,b,c\in B\),
\[a\circ(b+c)=a\circ b-a+a\circ c. \tag{1.4}\]
If \(+\) is an abelian group operation \(B\) is called a _left brace_. Moreover, if \(B\) is a left skew brace and \(\forall a,b,c\in B\)\((b+c)\circ a=b\circ a-a+c\circ a\), then \(B\) is called a _skew brace_. Analogously if \(+\) is abelian and \(B\) is a skew brace, then \(B\) is called a _brace_.
_Remark 1.2_.: In the literature often left brace is just called a brace and left skew brace is called a skew brace. In that case various authors call skew braces two-sided skew braces.
The additive identity of a left skew brace \(B\) will be denoted by \(0\) and the multiplicative identity by \(1\). In every left skew brace \(0=1\). Indeed, this is easy to show:
\[a\circ b=a\circ(b+0)\ \Rightarrow\ a\circ b=a\circ b-a+a\circ 0\ \Rightarrow\ a \circ 0=a\ \Rightarrow\ 0=1.\]
Rump showed the following theorem for involutive set-theoretic solutions.
**Theorem 1.3**.: _(Rump's theorem, [35, 36]). Assume \((B,+,\circ)\) is a left brace. If the map \(\check{r}_{B}:B\times B\to B\times B\) is defined as \(\check{r}_{B}(x,y)=(\sigma_{x}(y),\tau_{y}(x))\), where \(\sigma_{x}(y)=x\circ y-x\), \(\tau_{y}(x)=t\circ x-t\), and \(t\) is the inverse of \(\sigma_{x}(y)\) in the circle group \((B,\circ),\) then \((B,\check{r}_{B})\) is an involutive, non-degenerate solution of the braid equation._
_Conversely, if_ \((X,\check{r})\) _is an involutive, non-degenerate solution of the braid equation, then there exists a left brace_ \((B,+,\circ)\) _(called an underlying brace of the solution_ \((X,\check{r})\)_) such that_ \(B\) _contains_ \(X,\)__\(\check{r}_{B}(X\times X)\subseteq X\times X,\) _and the map_ \(\check{r}\) _is equal to the restriction of_ \(\check{r}_{B}\) _to_ \(X\times X.\) _Both the additive_ \((B,+)\) _and multiplicative_ \((B,\circ)\) _groups of the left brace_ \((B,+,\circ)\) _are generated by_ \(X.\)__
_Remark 1.4_ (Rump).: Let \((N,+,\cdot)\) be an associative ring. If for \(a,b\in N\) we define
\[a\circ b=a\cdot b+a+b,\]
then \((N,+,\circ)\) is a brace if and only if \((N,+,\cdot)\) is a radical ring.
Guarnieri and Vendramin [27], generalized Rump's result to left skew braces and non-degenerate, non-involutive solutions.
**Theorem 1.5** (_Theorem [27]_).: _Let \(B\) be a left skew brace, then the map \(\check{r}_{GV}:B\times B\to B\times B\) given \(\forall a,b\in B\) by_
\[\check{r}_{GV}(a,b)=(-a+a\circ b,\ (-a+a\circ b)^{-1}\circ a\circ b)\]
_is a non-degenerate solution of set-theoretic YBE._
## 2. Set-theoretic solutions of the YBE and near braces
In [17] generalized \(z\)-parametric set-theoretic solutions of the YBE depending on an extra fixed parameter \(z\) coming from skew braces were derived. In this section we will start from a generic \(z\)-parametric set-theoretic solution of the YBE and we will reconstruct the underlying algerbaic structure, which is similar to a skew brace.
Indeed, let \(z\in X\) be fixed, then we denote
\[\check{r}_{z}(x,y)=(\sigma_{x}^{z}(y),\tau_{y}^{z}(x)). \tag{2.1}\]
We say that \(\check{r}\) is non-degenerate if \(\sigma_{x}^{z}\) and \(\tau_{y}^{z}\) are bijective maps. We review below the constraints arising by requiring \((X,\check{r}_{z})\) to be a solution of the braid equation ([18, 21, 35, 36]). Let,
\[(\check{r}\times\mathrm{id})(\mathrm{id}\times\check{r})(\check{r}\times \mathrm{id})(\eta,x,y)=(L_{1},L_{2},L_{3}),\]
\[(\mathrm{id}\times\check{r})(\check{r}\times\mathrm{id})(\mathrm{id}\times \check{r})(\eta,x,y)=(R_{1},R_{2},R_{3}),\]
where, after employing expression (2.1) we identify:
\[L_{1}=\sigma_{\sigma_{\eta}^{z}(x)}^{z}(\sigma_{\tau_{x}^{z}(\eta)}^{z}(y)), \quad L_{2}=\tau_{\sigma_{\tau_{x}^{z}(\eta)}^{z}(y)}^{z}(\sigma_{\eta}^{z}(x )),\quad L_{3}=\tau_{y}^{z}(\tau_{x}^{z}(\eta)),\]
\[R_{1}=\sigma_{\eta}^{z}(\sigma_{x}^{z}(y)),\quad R_{1}=\sigma_{\tau_{\sigma_{ x}^{z}(y)}^{z}(\eta)}^{z}(\tau_{y}^{z}(x)),\quad R_{3}=\tau_{\tau_{y}^{z}(x)}^{z}( \tau_{\sigma_{x}^{z}(y)}^{z}(\eta)).\]
And by requiring \(L_{i}=R_{i}\), \(i\in\{1,2,3\}\) we obtain the following fundamental constraints for the associated maps:
\[\sigma_{\eta}^{z}(\sigma_{x}^{z}(y))=\sigma_{\sigma_{\eta}^{z}(x )}^{z}(\sigma_{\tau_{x}^{z}(\eta)}^{z}(y)), \tag{2.2}\] \[\tau_{y}^{z}(\tau_{x}^{z}(\eta))=\tau_{\tau_{y}^{z}(x)}^{z}(\tau _{\sigma_{x}^{z}(y)}^{z}(\eta)),\] (2.3) \[\tau_{\sigma_{\tau_{x}^{z}(\eta)}^{z}(y)}^{z}(\sigma_{\eta}^{z}(x ))=\sigma_{\tau_{\sigma_{x}^{z}(y)}^{z}(\eta)}^{z}(\tau_{y}^{z}(x)). \tag{2.4}\]
Note that the constraints above are the ones of the set-theoretic solution (1.1), given that \(z\) is a fixed element of the set, i.e. for different elements \(z\) we obtain in principle distinct solutions of the braid equation.
We will introduce in what follows suitable algebraic structures that satisfy the fundamental constraints above, i.e. provide solutions of the braid equation and generalize the findings of Rump and Guarnieri & Vendramin. The following generalizations are greatly inspired by recent results in [17]. For the rest of the subsection we consider \(X\) to be a set
and there exists a binary group operation \(\circ:X\times X\to X\), with a neutral element \(1\in X\) and an inverse \(x^{-1}\in X\), \(\forall x\in X\). There also exists a family of bijective functions indexed by \(X\), \(\sigma_{x}^{z}:X\to X\), such that \(y\mapsto\sigma_{x}^{z}(y)\), where \(z\in X\) is some fixed parameter. We then define another binary operation \(+:X\times X\to X\), such that
\[y+x:=x\circ\sigma_{x^{-1}}^{z}(y\circ z)\circ z^{-1}. \tag{2.5}\]
For convenience we will omit henceforth the fixed \(z\in X\) in \(\sigma_{x}^{z}(y)\) and simply write \(\sigma_{x}(y)\).
_Remark 2.1_.: The operation \(+\) is associative if and only if for all \(x,y,c\in X\),
\[\sigma_{c^{-1}}(y\circ z^{-1}\circ\sigma_{z\circ y^{-1}}(x))=\sigma_{c^{-1}}(y )\circ z^{-1}\circ\sigma_{(c\circ\sigma_{c^{-1}}(y)\circ z^{-1})^{-1}}(x). \tag{2.6}\]
From now on we will assume that the operation \(+\) is associative, that is condition (2.6) holds.
Also, we recall that we focus only on non-degenerate, invertible solutions \(\check{r}\). Given that \(\sigma_{x}\) and \(\tau_{y}\) are bijections the inverse maps also exist such that
\[\sigma_{x}^{-1}(\sigma_{x}(y))=\sigma_{x}(\sigma_{x}^{-1}(y))=y,\quad\tau_{y} ^{-1}(\tau_{y}(x))=\tau_{y}(\tau_{y}^{-1}(x))=x \tag{2.7}\]
Let the inverse \(\check{r}^{-1}(x,y)=(\hat{\sigma}_{x}(y),\hat{\tau}_{y}(x))\) exist with \(\hat{\sigma}_{x},\ \hat{\tau}_{y}\) being also bijections, that satisfy:
\[\sigma_{\hat{\sigma}_{x}(y)}(\hat{\tau}_{y}(x))=x=\hat{\sigma}_{\sigma_{x}(y) }(\tau_{y}(x)),\quad\tau_{\hat{\tau}_{y}(x)}(\hat{\sigma}_{x}(y))=y=\hat{\tau} _{\tau_{y}(x)}(\sigma_{x}(y)). \tag{2.8}\]
Taking also into consideration (2.7) and (2.8) and that \(\sigma_{x},\tau_{y}\) and \(\hat{\sigma}_{x},\hat{\tau}_{y}\) are bijections, we deduce:
\[\hat{\sigma}_{\sigma_{x}(y)}^{-1}(x)=\tau_{y}(x),\quad\hat{\tau}_{\tau_{y}(x) }^{-1}(y)=\sigma_{x}(y). \tag{2.9}\]
We assume that the map \(\hat{\sigma}\) appearing in the inverse matrix \(\check{r}^{-1}\) has the general form
\[\hat{\sigma}_{x}(y):=x\circ(x^{-1}\circ z_{2}+y\circ z_{1})\circ\xi, \tag{2.10}\]
where the parameters \(z_{1,2},\ \xi\) are to be identified. The derivation of \(\check{r}\) goes hand in hand with the derivations of \(\check{r}^{-1}\) (see details in [17] and later in the text when deriving a generic \(\check{r}\) and its inverse). In the involutive case the two maps coincide and \(x+y=y+x\). However, for any non-degenerate, non-involutive solution both bijective maps \(\sigma_{x},\hat{\sigma}_{x}\) should be considered together with the fundamental conditions (2.8).
We present below a series of useful Lemmas that will lead to one of our main theorems.
_Remark 2.2_.: This is just a reminder of well a known fact. We recall that \(\sigma_{x}\) is an injective function, i.e.
\[\sigma_{x}(y_{1})=\sigma_{x}(y_{2})\Leftrightarrow y_{1}=y_{2}.\]
Indeed, using (2.5)
\[\sigma_{x}(y_{1})=\sigma_{x}(y_{2})\Leftrightarrow y_{1}\circ z^{-1}+x^{-1}=y _{2}\circ z^{-1}+x^{-1}, \tag{2.11}\]
which automatically suggest right cancellation in \(+\). Similarly \(\hat{\sigma}_{x}\) is an injective function and this leads to left cancellation.
**Lemma 2.3**.: _For all \(x\in X\), the operationx \(+x,\ x+:X\to X\) are bijections._
Proof.: Let \(y_{1},y_{2}\in X\) be such that \(y_{1}+x=y_{2}+x\), then
\[x\circ\sigma_{x^{-1}}(y_{1}\circ z)\circ z^{-1}=x\circ\sigma_{x^{-1}}(y_{2} \circ z)\circ z^{-1}\implies\sigma_{x^{-1}}(y_{1}\circ z)=\sigma_{x^{-1}}(y_{2 }\circ z),\]
since \(\circ\) is a group operation and \(\sigma_{x^{-1}}\) is injective, we get that \(y_{1}=y_{2}\) and \(+x\) is injective for any \(x\in X\). From the surjectivity, we observe that since \(\sigma_{x^{-1}}\) is bijective, we can consider \(d=\sigma_{x^{-1}}^{-1}(x^{-1}\circ c\circ z)\circ z^{-1}\), one can easily see that \(d+x=c\), and since \(c\) is arbitrary we get that \(+x\) is a surjection. Thus \(+x\) is a bijection. Similarly, from the bijectivity of \(\hat{\sigma}_{x}\) and (2.10) we show that \(x+\) is also a bijection.
We now introduce the notion of a neutral elements in \((X,+)\)
**Lemma 2.4**.: _Let \((X,+)\) be a semigroup, then \(\forall x\in X,\)\(\exists\ 0_{x}\in X,\) such that \(0_{x}+x=x\). Moreover, \(\forall x,y\in X,\)\(0_{x}=0_{y}=0,\) i.e. \(0\) is the unique left neutral element. The left neutral element \(0\) is also right neutral element._
Proof.: Notice that due to bijectivity of \(\sigma_{x}\), we can consider the element \(0_{x}:=\sigma_{x^{-1}}^{-1}(z)\circ z^{-1}\in X,\) recall also the definition of \(+\) in (2.5), then simple computation shows:
\[0_{x}+x=x\circ\sigma_{x^{-1}}(\sigma_{x^{-1}}^{-1}(z))\circ z^{-1}=x\circ z \circ z^{-1}=x. \tag{2.12}\]
We have, \(0_{x}+x=x\Rightarrow 0_{x}+x+y=x+y\), but also \(0_{x+y}+x+y=x+y.\) The last two equations lead to \(0_{x}+x+y=0_{x+y}+x+y,\) and due Lemma 2.3 right cancellation holds, so we get that \(0_{x}=0_{x+y}\) for all \(y\in X\). Observe that by the Lemma 2.3, \(x+\) is a surjection, that is for all \(w\in X\) exists \(y\in X\) such that \(x+y=w\), that is \(0:=0_{x}=0_{w}\) for all \(w\in X\).
Moreover, \(0+y=y\Rightarrow x+0+y=x+y\) and due to associativity and right cancellativity (Lemma 2.3) we get \(x+0=x\), for all \(x\in X\).
**Lemma 2.5**.: _Let \(0\) be the neutral element in \((X,+)\), then \(\forall x\in X,\)\(\exists-x\in X,\) such that \(-x+x=0\) (left inverse). Moreover, \(-x\in X\) is a right inverse, i.e. \(x+(-x)=0\)\(\forall x\in X.\) That is \((X,+,0)\) is a group._
Proof.: Observe that due to bijectivity of \(\sigma_{x}\), we can consider \(-x:=\sigma_{x^{-1}}^{-1}(x^{-1}\circ 0\circ z)\circ z^{-1}\). Simple computation shows it is a left inverse,
\[-x+x=x\circ\sigma_{x^{-1}}(\sigma_{x^{-1}}^{-1}(x^{-1}\circ 0\circ z)\circ z^{-1} \circ z)\circ z^{-1}=0.\]
By associativity we deduce that \(x+(-x)+x=0+x\), we get that \(x+(-x)=0\), and \(-x\) is the inverse.
To conclude, having only assumed associativity in \(+\) (2.5) we deduced that \((X,+)\) is a group. We may now present our main findings described in the following central Theorem.
**Theorem 2.6**.: _Let \((X,\circ)\) be a group and \(\check{r}:X\times X\to X\times X,\) such that \(\check{r}(x,y)=(\sigma_{x}(y),\tau_{x}(y))\) is a non-degenerate solution of the set-theoretic braid equation. Assume also that \((X,+)\) (\(+\) is defined in (2.5)) is a group. Moreover, we assume that:_
1. _There exists_ \(\phi:X\to X\) _such that for all_ \(a,b,c\in X\)__\(a\circ(b+c)=a\circ b+\phi(a)+a\circ c.\)__
2. _For_ \(h\in\{z,\xi\}\in X\) _appearing in_ \(\sigma_{x}(y)\) _and_ \(\hat{\sigma}_{x}(y)\) _there exist_ \(\widehat{\phi}:X\to X\) _such that for all_ \(a,b\in X\)__\((a+b)\circ h=a\circ h+\widehat{\phi}(h)+b\circ h.\)__
3. _The neutral element_ \(0\) _of_ \((X,+)\) _has a left and right distributivity._
_Then for all \(a,b,c\in X\) the following statements hold:_
1. \(\phi(a)=-a\circ 0\) _and_ \(\widehat{\phi}(h)=-0\circ h\)_,_
2. \(\sigma_{a}(b)=(a\circ b\circ z^{-1}-a\circ 0+1)\circ z=a\circ b-a\circ 0\circ z+z.\)__
3. \(a-a\circ 0=1\) _and (i)_ \(0\circ 0=-1\) _(ii)_ \(1+1=0^{-1}.\)__
4. _If_ \(z_{2}\circ\xi=0^{-1},\ 0\circ\xi=z_{1}\circ\xi=z^{-1}\circ 0^{-1},\) _then (i)_ \(\hat{\sigma}_{a}(b)\circ\hat{\tau}_{b}(a)=a\circ b=\sigma_{a}^{z}(b)\circ\tau_ {b}^{z}(a)\)__ (ii)__\(-a\circ 0+a=1.\)__
Proof.:
1. In the following the distributivity rule \(a\circ(b+c)=a\circ b+\phi(a)+b\circ c\) holds, then \[a=a\circ(0+1)=a\circ 0+\phi(a)+a\circ 1\ \Rightarrow\phi(a)=-a\circ 0,\] Also, for those \(z\in X\) such that \((a+b)\circ z=a\circ z+\hat{\phi}(z)+b\circ z\) we have \[z=(0+1)\circ z=0\circ z+\hat{\phi}(z)+z\ \Rightarrow\hat{\phi}(z)=-0\circ z.\]
2. Using the distributivity rule we obtain \[\sigma_{a}(b)=(a\circ b\circ z^{-1}-a\circ 0+1)\circ z.\] (2.13) Before we move on with the rest of the proof it is useful to calculate \((-a)\circ z\), indeed: \[0\circ z = (a-a)\circ z\Rightarrow 0\circ z=a\circ z-0\circ z+(-a)\circ z\] (2.14) \[\Rightarrow (-a)\circ z=0\circ z-a\circ z+0\circ z.\] The latter then leads to the following convenient identity (see also [17] and Lemma 2.9 later in the text) \[(a-b+c)\circ z=a\circ z-b\circ z+c\circ z,\] and hence (2.13) becomes \(\ \sigma_{a}(b)=a\circ b-a\circ 0\circ z+z.\)
3. Due to the fact that \(\check{r}\) satisfies the braid equation we may employ (2.2) and the general distributivity rule (see also (2.13)): \[\sigma_{a}(\sigma_{b}(c)) = (a\circ\sigma_{b}(c)\circ z^{-1}-a\circ 0+1)\circ z\] \[= (a\circ b(c\circ z^{-1}+b^{-1})\circ z\circ z^{-1}-a\circ 0+1) \circ z\] \[= \left(a\circ b\circ c\circ z^{-1}-a\circ b\circ 0+a-a\circ 0+1 \right)\circ z.\] But due to condition (2.2) and by setting \(c=0\circ z\), we deduce that \(a-a\circ 0=\zeta\), \(\forall a\in X\) (\(\zeta\) is a fixed element in \(X\)), but for \(a=1\) we immediately obtain \(\zeta=1\), i.e. \[a-a\circ 0=1.\] (2.15) (i) By setting \(a=0\) in (2.15) we have \(0\circ 0=-1\). (ii) \(0\circ(1+1)=0\circ 1-0\circ 0+0\circ 1\Rightarrow 0\circ(1+1)=1 \Rightarrow 1+1=0^{-1}\).
4. For the following we set \(z_{2}\circ\xi=0^{-1}\), \(0\circ\xi=z_{1}\circ\xi=z^{-1}\circ 0^{-1}\). (i) Recall the form of \(\hat{\sigma}_{a}(b)\) (2.10), and use the distributivity rules, then \[\hat{\sigma}_{a}(b)=z_{2}\circ\xi-a\circ 0\circ\xi+a\circ b\circ z_{1} \circ\xi.\] (2.16) We consider now the fixed constants: \(z_{2}\circ\xi=0^{-1}\), \(0\circ\xi=z_{1}\circ\xi=z^{-1}\circ 0^{-1}\). Note that if \(z\) satisfies the right distributivity then so does \(z^{-1}\) (see Proposition 2.3 in [17]) and also \(0\circ z\), given that \(0\) has left and right distributivity. We recall relations (2.8) for the maps, then \[\sigma_{\hat{\sigma}_{a}(b)}(\hat{\tau}_{b}(a))=a\Rightarrow\hat{ \sigma}_{a}(b)\circ\hat{\tau}_{b}(a)-\hat{\sigma}_{a}(b)\circ 0\circ z+z=a\Rightarrow\] \[\hat{\sigma}_{a}(b)\circ\hat{\tau}_{b}(a)-(0^{-1}-a\circ z^{-1} \circ 0^{-1}+a\circ b\circ z^{-1}\circ 0^{-1})\circ 0\circ z+z=a\] \[\hat{\sigma}_{a}(b)\circ\hat{\tau}_{b}(a)-a\circ b+a-z+z=a\Rightarrow\] \[\hat{\sigma}_{a}(b)\circ\hat{\tau}_{b}(a)=a\circ b.\] Similarly, \(\hat{\sigma}_{\sigma_{a}(b)}(\tau_{b}(a))=a\Rightarrow\sigma_{a}(b)\circ \tau_{b}(a)=a\circ b\). (ii) We consider \(z_{2}\circ\xi=0^{-1}\), \(0\circ\xi=z_{1}\circ\xi=z^{-1}\circ 0^{-1}\), and consequently, as shown above, \(\sigma_{a}(b)\circ\tau_{b}(a)=a\circ b=\hat{\sigma}_{a}(b)\circ\hat{\tau}_{b} (a)\). We also recall condition (2.3) of the braid equation and \(a\circ b=\sigma_{a}(b)\circ\tau_{b}(a)\), indeed \[\tau_{c}(\tau_{b}(a))=\sigma_{\tau_{b}(a)}(c)^{-1}\circ\sigma_{a}(b)^{-1} \circ a\circ b\circ c\] and due to the form of (2.3) we conclude \[\sigma_{a}(b)\circ\sigma_{\tau_{b}(a)}(c)=\sigma_{a}(\sigma_{b}(c))\circ \sigma_{\tau_{\sigma_{b}(c)}(a)}(\tau_{c}(b)).\] (2.17)
We focus on
\[\sigma_{a}(b)\circ\sigma_{\tau_{b}^{z}(a)}(c) = \sigma_{a}(b)\circ(\tau_{b}(a)\circ c\circ z^{-1}-\tau_{b}(a)\circ 0+1)\circ z \tag{2.18}\] \[= (a\circ b\circ c\circ z^{-1}-a\circ b\circ 0+\sigma_{a}(b))\circ z\] \[= (a\circ b\circ c\circ z^{-1}-a\circ b\circ 0+a\circ b-a\circ 0 \circ z+z)\circ z.\]
Taking into consideration the form of (2.17) and (2.18) and the fact that \(b\circ c=\sigma_{c}(c)\circ\tau_{c}(b)\), we conclude that \(\forall a\in X,\,-\,a\circ 0+a=\hat{\zeta}\), where \(\hat{\zeta}\in X\) is a fixed element, and for \(a=1\) we deduce that \(\hat{\zeta}=1\), i.e. \(-a\circ 0+a=1\).
_Remark 2.7_.: Due to \(a-a\circ 0=-a\circ 0+a=1\), \(\forall a\in B\), we deduce that \(a+1=1+a\), \(\forall a\in B\).
We call the algebraic construction deduced in Theorem 2.6 a _near brace_, in analogy to near rings, specifically:
**Definition 2.8**.: A _near brace_ is a set \(B\) together with two group operations \(+,\circ:B\times B\to B\), the first is called addition and the second is called multiplication, such that \(\forall a,b,c\in B\),
\[a\circ(b+c)=a\circ b-a\circ 0+a\circ c, \tag{2.19}\]
and \(\ a-a\circ 0=-a\circ 0+a=1\). We denote by \(0\) the neutral element of the \((B,+)\) group and by \(1\) the neutral element of the \((B,\circ)\) group. We say that a near brace \(B\) is an abelian near brace if \(+\) is abelian.
In the special case where \(0=1\), we recover a skew brace. We also show below some useful properties for near braces.
**Lemma 2.9**.: _[_17_]_ _Let \((B,\circ,+)\) be a near brace, then_
1. \(a\circ(-b)=a\circ 0-a\circ b+a\circ 0\)_._
2. _Condition (_2.19_) is equivalent to the following condition_ \(\forall a,b,c,d\in B\)_:_ \[a\circ(b-c+d)=a\circ b-a\circ c+a\circ d.\]
Proof.:
1. \(a\circ(b-b)=a\circ 0\Rightarrow a\circ b-a\circ 0+a\circ(-b)=a\circ 0\), which leads to \(a\circ(-b)=a\circ 0-a\circ b+a\circ 0\).
2. Let (2.19) hold then \[a\circ(b-c+d)=a\circ(b-c)-a\circ 0+a\circ d=\] \[a\circ b-a\circ 0+a\circ 0-a\circ c+a\circ 0-a\circ 0+a\circ d=\] \[a\circ b-a\circ c+a\circ d.\] (2.20)
Conversely, let \(a\circ(b-c+d)=a\circ b-a\circ c+a\circ d\) hold, then
\[a\circ(b+c)=a\circ(b-0+c)=a\circ b-a\circ 0+a\circ c. \tag{2.21}\]
**Example 2.10**.: Let us consider a set \(\{a,b\}\), it has a unique up to isomorphism group structure given by \((\mathbb{Z}/2\mathbb{Z},+)\). Let us denote by \((\{a,b\},+,a)\) a group with neutral element \(a\) and by \((\{a,b\},\circ,b)\) a group with neutral element \(b\). Then \((\{a,b\},+,\circ)\) is a near brace. We will denote this near brace by \(\mathrm{D}:=(\{a,b\},+,\circ)\)
**Example 2.11**.: Let \((B,\circ)\) be a group with neutral element \(1\) and define \(a+b:=a\circ\kappa^{-1}\circ b\), where \(1\neq\kappa\in B\) is an element of the center of \((B,\circ)\). Then \((B,\circ,+)\) is a near brace with neutral element \(0=\kappa\), and we call it the trivial near brace 1.
Footnote 1: We are indebted to Paola Stefanelli for sharing this example with us.
**Definition 2.12**.: Let \((B,+,\circ)\) and \((S,+,\circ)\) be near braces. We say that \(f:B\to S\) is a near brace morphism if for all \(a,b\in B\),
\[f(a+b)=f(a)+f(b)\qquad\&\qquad f(a\circ b)=f(a)\circ f(b).\]
**Lemma 2.13**.: _Let \(f:X\to X\) be a map, such that \(\forall a,b\in X\)\(f(a\circ b-a\circ z+z)=f(a)\circ f(b)-f(a)\circ 0\circ z+z,\) and there is \(e\in X,\)\(f(e)=1\). If such a map \(f\) exists then \(0=1\)._
Proof.: We assume that such map \(f:X\to X\) exists. Then for all \(a\in X\),
\[f(z)=f(a\circ z-a\circ z+z)=f(a)\circ f(z)-f(a)\circ 0\circ z+z,\]
and by setting \(a=e\), \(f(e)=1\):
\[f(z)=f(z)-0\circ z+z\implies 0\circ z=z\implies 0=1,\]
where the last implication follows from the fact that \(z\) is invertible.
_Remark 2.14_.: Observe that since every bijection is surjective, the preceding Lemma states that if \(\sigma_{a}^{{}^{\prime}z}(b):=a\circ b-a\circ 0\circ z+z\) gives a solution of YBE, it is isomorphic to the solution given by \(\sigma_{a}^{z}(b):=a\circ b-a\circ z+z\) if \(0=1\), that is near skew brace is a left skew brace.
**Corollary 2.15**.: _Observe that Lemma 2.9\((2)\) states that the triple \(\mathrm{T}(B):=(B,[-,-,-],\circ)\), where \(\forall a,b,c\in B,\)\([a,b,c]=a-b+c,\) is a near-truss such that \((B,\circ)\) is a group, see [7]. Thus one can take a retract of \(B\) in \(1\), that is define operation \(\forall a,b\in B,\)\(a+_{1}b=a-1+b.\) Then the triple \((B,+_{1},\circ)\) is a left skew brace. Therefore every near brace is acquired by considering a sandwich group of a left skew brace in a specific element._
The following two Corollaries hold in the special case where \(0=0^{-1}\).
**Corollary 2.16**.: _Let \((B,+,\circ)\) be a near brace such that \(0\neq 1\), then \(\mathrm{D}\) embeds into \(B\)._
Proof.: Observe that by Theorem 2.6\(\{0,1\}\) is a near sub-brace. Moreover, since two element set has only one group structure up to isomorphism, \((\{0,1\},+,\circ)\) is isomorphic to D.
**Corollary 2.17**.: _The near brace \(\mathrm{D}\) corresponds to trivial brace \((\mathbb{Z}/2\mathbb{Z},+,+)\), thus examples for \(0\neq 1\) arise from left skew braces \(B\) such that \((\mathbb{Z}/2\mathbb{Z},+,+)\) embeds into \(B\) as near braces._
**Example 2.18**.: Obvious examples are \(\mathrm{D}\times B\), for any left skew brace \(B\).
**Lemma 2.19**.: _Let \((X,\circ,+)\) be a near brace and \(z,w\in X\) satisfy the right distributivity. Consider also the maps \(\sigma,\sigma^{\prime}:X\times X\to X\) such that \(\sigma_{a}(b)=a\circ b-a\circ 0\circ z+z\) and \(\sigma^{\prime}_{a}(b)=a\circ b-a\circ 0\circ w+w.\) If \(\sigma_{a}(b)=\sigma^{\prime}_{a}(b)\) then \(z^{-1}\circ w-1=w-z.\)_
Proof.: The proof is straightforward by setting \(a=z^{-1}\circ 0^{-1}\) in both \(\sigma_{a}(b)\) and \(\sigma^{\prime}_{a}(b)\).
### Generalized bijective maps \(\&\) solutions of the braid equation
Inspired by the findings of the preceding section we introduce below more general, multi-parametric bijective maps \(\sigma_{a}^{p},\tau_{b}^{p}\) (\(p\) stands for parametric) that provide solutions of the set-theoretic braid equation.
**Proposition 2.20**.: _Let \((B,\circ,+)\) be a near brace and let us denote \(\sigma_{a}^{p}(b):=a\circ b\circ z_{1}-a\circ\xi+z_{2}\) and \(\tau_{b}^{p}(a):=\sigma_{a}(b)^{-1}\circ a\circ b\), where \(a,b\in B\), and \(h\in\{\xi,\ z_{i}\}\in B,\)\(i\in\{1,2\}\) are fixed parameters, such that \(\exists\ c_{1,2}\in B,\forall a,b,c\in B,\)\((a-b+c)\circ h=a\circ h-b\circ h+c\circ h,\)\(a\circ z_{2}\circ z_{1}-a\circ\xi=c_{1}\) and \(-a\circ\xi+a\circ z_{1}\circ z_{2}=c_{2}.\)_
_Then \(\forall a,b,c\in B\) the following properties hold:_
1. \(\sigma_{a}^{p}(b)\circ\tau_{b}^{p}(a)=a\circ b.\)__
2. \(\sigma_{a}^{p}(\sigma_{b}^{p}(c))=a\circ b\circ c\circ z_{1}\circ z_{1}-a \circ b\circ\xi\circ z_{1}+c_{1}+z_{2}.\)__
3. \(\sigma_{a}^{p}(b)\circ\sigma_{\tau_{b}^{p}(a)}^{p}(c)=a\circ b\circ c\circ z _{1}+c_{2}-a\circ\xi\circ z_{2}+z_{2}\circ z_{2}.\)__
Proof.: Let \(a,b,c\in B\), then:
1. \(\sigma_{a}^{p}(b)\circ\tau_{b}^{p}(a)=\sigma_{a}^{p}(b)\circ\sigma_{a}^{p}(b) ^{-1}\circ a\circ b=a\circ b.\)
2. To show condition (2) we recall that \(a\circ z_{2}\circ z_{1}-a\circ\xi=c_{1}.\) Then, \[\sigma_{a}^{p}(\sigma_{b}^{p}(c)) = \sigma_{a}^{p}(b\circ c\circ z_{1}-b\circ\xi+z_{2})\] \[= a\circ(b\circ c\circ z_{1}-b\circ\xi+z_{2})\circ z_{1}-a\circ \xi+z_{2}\] \[= a\circ b\circ c\circ z_{1}\circ z_{1}-a\circ b\circ\xi\circ z _{1}+a\circ z_{2}\circ z_{1}-a\circ\xi+z_{2}\] \[= a\circ b\circ c\circ z_{1}\circ z_{1}-a\circ b\circ\xi\circ z _{1}+c_{1}+z_{2}.\]
3. To show condition (3) we use (1) and \(-a\circ\xi+a\circ z_{1}\circ z_{2}=c_{2}\). \[\sigma^{p}_{a}(b)\circ\sigma^{p}_{\tau^{p}_{b}(a)}(c) =\sigma^{p}_{a}(b)\circ(\tau^{p}_{b}(a)\circ c\circ z_{1}-\tau^{p} _{b}(a)\circ\xi+z_{2})\] \[=\sigma^{p}_{a}(b)\circ\tau^{p}_{a}(b)\circ c\circ z_{1}-\sigma^{p }_{a}(b)\circ\tau^{p}_{a}(b)\circ\xi+\sigma^{p}_{a}(b)\circ z_{2}\] \[=a\circ b\circ c\circ z_{1}-a\circ b\circ\xi+\sigma^{p}_{a}(b) \circ z_{2}\] \[=a\circ b\circ c\circ z_{1}-a\circ b\circ\xi+(a\circ b\circ z_{1 }-a\circ\xi+z_{2})\circ z_{2}\] \[=a\circ b\circ c\circ z_{1}+c_{2}-a\circ\xi\circ z_{2}+z_{2} \circ z_{2}.\]
**Example 2.21**.: A simple example of the above generic maps is the case where \(z_{1}\circ z_{2}=\xi\circ 0=z_{2}\circ z_{1}\), then \(c_{1}=c_{2}=1\).
Having showed the fundamental properties above we may now proceed in proving the following theorem.
**Theorem 2.22**.: _Let \((B,\circ,+)\) be a near brace and \(z\in B\) such that \(\exists\ c_{1,2},\forall a,b,c\in B\), \((a-b+c)\circ z_{i}=a\circ z_{i}-b\circ z_{i}+c\circ z_{i},\)\(i\in\{1,\ 2\},\)\(a\circ z_{2}\circ z_{1}-a\circ\xi=c_{1}\). We define a map \(\check{r}:B\times B\to B\times B\) given by_
\[\check{r}(a,b)=(\sigma^{p}_{a}(b),\tau^{p}_{b}(a)),\]
_where \(\sigma^{p}_{a}(b)=a\circ b\circ z_{1}-a\circ\xi+z_{2}\), \(\tau^{p}_{b}(a)=\sigma^{p}_{a}(b)^{-1}\circ a\circ b.\) The pair \((B,\check{r})\) is a solution of the braid equation._
Proof.: To prove this we need to show that the maps \(\sigma,\tau\) satisfy the constraints (2.2)-(2.4). To achieve this we use the properties proven in Proposition 2.20.
Indeed, from Proposition 2.20, (1) and (2), it follows that (2.2) is satisfied, i.e.
\[\sigma^{p}_{\eta}(\sigma^{p}_{x}(y))=\sigma^{p}_{\sigma^{p}_{\eta}(x)}(\sigma^ {p}_{\tau^{p}_{x}(\eta)}(y)).\]
We observe that
\[\tau^{p}_{b}(\tau^{p}_{a}(\eta))=T^{p}\circ\tau^{p}_{a}(\eta)\circ b=T^{p} \circ t^{p}\circ\eta\circ a\circ b=T^{p}\circ t^{p}\circ\eta\circ\sigma^{p}_ {a}(b)\circ\tau^{p}_{b}(a),\]
where \(T^{p}=\sigma^{p}_{\tau^{a}_{a}(\eta)}(b)^{-1}\) and \(t^{p}=\sigma^{p}_{\eta}(a)^{-1}\) (the inverse in the circle group). Due to (1), (2), (3) of Proposition 2.20 we then conclude that
\[\tau^{p}_{b}(\tau^{p}_{a}(\eta))=\tau^{p}_{\tau^{p}_{b}(a)}(\tau^{p}_{\sigma^{ p}_{a}(b)}(\eta)),\]
so (2.3) is also satisfied.
To prove (2.4), we employ (3), (1) of Proposition 2.20 and use the definition of \(\tau^{p}\),
\[\sigma^{p}_{\tau^{p}_{\sigma^{p}_{x}(y)}(\eta)}(\tau^{p}_{y}(x))=\sigma^{p}_{ \eta}(\sigma^{p}_{x}(y))^{-1}\circ\sigma^{p}_{\eta}(x)\circ\sigma^{p}_{\tau^{ p}_{x}(\eta)}(y)=\tau^{p}_{\sigma^{p}_{\tau^{p}_{x}(\eta)}(y)}(\sigma^{p}_{\eta}(x )).\]
Thus, (2.4) is satisfied, and \(\check{r}(a,b)=(\sigma^{p}_{a}(b),\tau^{p}_{b}(a))\) is a solution of the braid equation.
**Lemma 2.23**.: _Let \((X,\circ,+)\) be a near left brace and \(z,w\in X\) satisfy the right distributivity. Consider also the multi-parametric maps \(\sigma^{p},\sigma^{{}^{\prime}p}:X\times X\to X\) as defined in Proposition 2.20, such that \(\sigma^{p}_{a}(b)=a\circ b\circ z_{1}-a\circ\xi+z_{2}\) and \(\sigma^{{}^{\prime}p}_{a}(b)=a\circ b\circ z_{2}-a\circ\xi+z_{1}.\) If \(\sigma_{a}(b)=\sigma^{\prime}_{a}(b)\) then \(0\circ z_{1}^{-1}\circ z_{2}=z_{2}-z_{1}.\)_
Proof.: The proof is straightforward by setting \(a=0\circ\xi^{-1}\) and \(b=\xi\circ z_{1}^{-1}\) in both \(\sigma^{p}_{a}(b)\) and \(\sigma^{{}^{\prime}p}_{a}(b).\)
_Remark 2.24_.: In the special case where \(z_{1}=1\) and \(\xi=z_{2}=z\) we recover the \(\sigma^{z}_{x}(y),\ \tau^{z}_{y}(x)\) bijective maps and the \(\check{r}_{z}\) solutions of the braid equation introduced in [17].
In the following Proposition we provide the explicit expressions of the inverse \(\check{r}\)-matrices as well as the corresponding bijective maps.
**Proposition 2.25**.: _Let \(\check{r}^{*},\ \check{r}:X\times X\) be solutions of the braid equations, such that \(\check{r}^{*}:(x,y)\mapsto(\hat{\sigma}^{p}_{x}(y),\hat{\tau}^{p}_{y}(x)),\)\(\check{r}:(x,y)\mapsto(\sigma^{p}_{x}(y),\tau^{p}_{y}(x)),\) then the following hold._
1. \(\check{r}^{*}=\check{r}^{-1}\) _if and only if_ \[\hat{\sigma}^{p}_{\sigma^{p}_{x}(y)}(\tau^{p}_{y}(x))=x,\ \hat{\tau}^{p}_{\tau^{p}_{y}(x)}(\sigma^{p}_{x}(y))=y\text{ and }\sigma^{p}_{\hat{\sigma}^{p}_{x}(y)}(\hat{\tau}^{p}_{y}(x))=x,\ \tau^{p}_{\hat{\tau}^{p}_{y}(x)}(\hat{\sigma}^{p}_{x}(y))=y.\] (2.22)
2. _Let_ \(\sigma^{p}_{x}(y)=x\circ y\circ z_{1}-x\circ\xi+z_{2},\ \tau^{p}_{y}(x)=\sigma^{p}_{x}(y)^{-1}\circ x\circ y,\) _and_ \(\xi,z_{i}\in B\ i\in\{1,2\}\) _are fixed elements, such that_ \(\exists\ c_{1,2}\in C,\ \forall a,b,c\in B,\)__\((a-b+c)\circ z_{i}=a\circ z_{i}-b\circ z_{i}+c\circ z_{i},\)__\(a\circ z_{2}\circ z_{1}-a\circ\xi=c_{1}\) _and_ \(-a\circ\xi+a\circ z_{1}\circ z_{2}=c_{2}.\)_Then_ \(\hat{\sigma}^{p}_{x}(y)=\hat{z}_{2}-x\circ\hat{\xi}+x\circ y\circ\hat{z}_{1},\)__\(\hat{\tau}^{p}_{x}(y)=\hat{\sigma}^{p}_{x}(y)^{-1}\circ x\circ y,\) _where_ \(\hat{\xi}=\xi^{-1},\ \hat{z}_{1,2}=z_{1,2}\circ\xi^{-1}.\)__
Proof.: We prove the two parts of Proposition 2.25 below:
1. If \(\check{r}^{*}=\check{r}^{-1},\) then \(\check{r}\check{r}^{*}=\check{r}^{*}\check{r}=\mathrm{id}\) and \(\check{r}\check{r}^{*}(x,y)=(\sigma^{p}_{\hat{\sigma}_{x}(y)}(\hat{\tau}^{p}_{ y}(x)),\tau^{p}_{\hat{\tau}^{p}_{y}(x)}(\hat{\sigma}^{p}_{x}(y))).\) Thus \(\sigma^{z}_{\hat{\sigma}^{p}_{x}(y)}(\hat{\tau}^{p}_{y}(x))=x,\)\(\tau^{p}_{\hat{\tau}^{p}_{y}(x)}(\hat{\sigma}^{z}_{x}(y))=y.\) And vice versa if \(\sigma^{p}_{\hat{\sigma}^{p}_{\hat{\sigma}}(y)}(\hat{\tau}^{p}_{y}(x))=x,\)\(\tau^{p}_{\hat{\tau}^{p}_{y}(x)}(\hat{\sigma}^{p}_{x}(y))=y,\) then it automatically follows that \(\check{r}^{*}=\check{r}^{-1}.\) Similarly, \(\check{r}^{*}\check{r}(x,y)=(x,y)\) leads to \(\hat{\sigma}^{p}_{\sigma^{p}_{x}(y)}(\tau^{p}_{y}(x))=x,\)\(\hat{\tau}^{p}_{\tau^{p}_{y}(x)}(\sigma^{p}_{x}(y))=y,\) and vice versa.
2. For the second part of the Proposition it suffices to show (2.22). Indeed, we recall that that \(\hat{\xi}=\xi^{-1}\) and \(\hat{z}_{1,2}=z_{1,2}\circ\xi^{-1},\) then \[\hat{\sigma}^{p}_{\sigma^{p}_{x}(y)}(\tau^{p}_{y}(x)) = \hat{z}_{2}-\sigma^{p}_{x}(y)\circ\hat{\xi}+\sigma^{p}_{x}(y)\circ \tau^{p}_{y}(x)\circ\hat{z}_{1}\] \[= \hat{z}_{2}-\sigma^{p}_{x}(y)\circ\hat{\xi}+x\circ y\circ\hat{z}_ {2}\] \[= \hat{z}_{2}-(x\circ y\circ z_{1}-x\circ\xi+z_{2})\circ\hat{\xi}+x \circ y\circ\hat{z}_{1}=x.\] Also, \(\hat{\tau}^{p}_{\tau^{p}_{y}(x)}(\sigma^{p}_{x}(y))=x^{-1}\circ\sigma^{p}_{x}(y) \circ\tau^{p}_{y}(x)=y.\)
Similarly, we show
\[\sigma^{p}_{\hat{\sigma}^{p}_{x}(y)}(\hat{\tau}^{p}_{y}(x)) = \hat{\sigma}^{p}_{x}(y)\circ\hat{\tau}^{p}_{y}(x)\circ z_{1}-\hat{ \sigma}^{p}_{x}(y)\circ\xi+z_{2}\] \[= x\circ y\circ z_{1}-(\hat{z}_{2}-x\circ\hat{\xi}+x\circ y\circ \hat{z}_{1})\circ\xi+z_{2}=x.\]
And as above we immediately deduce that \(\tau^{p}_{\hat{\tau}^{p}_{y}(x)}(\hat{\sigma}^{p}_{x}(y))=y.\)
With this we conclude our analysis on the general bijective maps coming from near braces and the corresponding solutions of the braid equation.
### \(p\)-deformed braided groups and near braces
Motivated by the definition of braided groups and braidings in [34] as well as the relevant work presented in [26] we provide a generic definition of the \(p\)-deformed braided group and braiding that contain extra fixed parameters, i.e. for multi-parametric braidings (\(p\)-braidings).
**Definition 2.26**.: Let \((G,\circ)\) be a group, \(m(x,y)=x\circ y\) and \(\tilde{r}\) is an invertible map \(\tilde{r}:G\times G\to G\times G\), s such that \(\forall x,y\in G\), \(\tilde{r}(x,y)=(\sigma^{p}_{x}(y),\tau^{p}_{y}(x))\), where \(\sigma^{p}_{x},\ \tau^{p}_{y}\) are bijective maps in \(G.\) The map \(\tilde{r}\) is called a \(p\)-braiding operator (and the group is called \(p\)-braided) if
1. \(x\circ y=\sigma^{p}_{x}(y)\circ\tau^{p}_{y}(x).\)
2. \((\mathrm{id}\times m)\ \tilde{r}_{12}\ \tilde{r}_{23}(x,y,w)=(f^{p}_{xoy}(w),\ f^{p}_{xoy}(w)^{-1} \circ x\circ y\circ w).\)
3. \((m\times\mathrm{id})\ \tilde{r}_{23}\ \tilde{r}_{12}(x,y,w)=(g^{p}_{x}(y\circ w),\ g^{p}_{x}(y \circ w)^{-1}\circ x\circ y\circ w).\)
for some bijections \(f^{p}_{x},g^{p}_{x}:G\to G\), given \(\forall x\in G\).
**Proposition 2.27**.: _Let \((G,\circ)\) be a group, and the invertible map \(\tilde{r}:G\times G\to G\times G,\)\(\forall x,y\in G,\)\(\tilde{r}(x,y)=(\sigma^{p}_{x}(y),\tau^{p}_{y}(x)),\) be a \(p\)-braiding operator for the group \(G.\) Then \(\tilde{r}\) is a non-degenerate solution of the braid equation._
Proof.: We start from the LHS of condition (2) of Proposition 2.27
\[(\mathrm{id}\times m)\ \tilde{r}_{12}\ \tilde{r}_{23}\ (x,y,w)=(\sigma^{p}_{x}( \sigma^{p}_{y}(w)),\ \tau^{p}_{\sigma^{p}_{y}(w)}(x)\circ\tau^{p}_{w}(y)),\]
which leads to
\[\sigma^{p}_{x}(\sigma^{p}_{y}(w))=f^{p}_{xoy}(w)=f^{p}_{\sigma^{p}_{x}(y)\circ \tau^{p}_{y}(x)}(w)=\sigma^{p}_{\sigma^{p}_{x}(y)}(\sigma^{p}_{\tau^{p}_{y}(x) }(w))\]
i.e. the fundamental condition (2.2) is satisfied. Moreover, using condition (1) we show
\[\tau^{p}_{\sigma^{p}_{y}(w)}(x)\circ\tau^{p}_{w}(y)=\sigma^{p}_{x}(\sigma^{p }_{y}(w))^{-1}\circ x\circ\sigma^{p}_{y}(w)\circ\sigma^{p}_{y}(w)^{-1}\circ y \circ w=f^{p}_{xoy}(w)^{-1}\circ x\circ y\circ w,\]
as expected compatible with condition (2) of Proposition 2.27.
Similarly, from the LHS of condition (3)
\[(m\times\mathrm{id})\ \tilde{r}_{23}\ \tilde{r}_{12}\ (x,y,w)=(\sigma^{p}_{x}(y) \circ\sigma^{p}_{\tau^{p}_{y}(x)}(w),\ \tau^{p}_{w}(\tau^{p}_{y}(x))).\]
The latter expression leads to
\[\sigma^{p}_{x}(y)\circ\sigma^{p}_{\tau^{p}_{\tilde{r}}(x)}(w)=g^{p}_{x}(y\circ w). \tag{2.23}\]
Also, via condition (1)
\[\tau^{p}_{w}(\tau^{p}_{y}(x)) = \sigma^{p}_{\tau^{p}_{y}(x)}(w)^{-1}\circ\sigma^{p}_{x}(y)^{-1} \circ x\circ y\circ w=g^{p}_{x}(y\circ w)^{-1}\circ x\circ y\circ w \tag{2.24}\] \[= \tau^{p}_{\tau^{p}_{w}(y)}(\tau^{p}_{\sigma^{p}_{y}(w)}(x))\]
compatible with condition (3) of Proposition 2.27, and this shows condition (2.4).
Having shown properties 2.2 and (2.3) and taking into account that \(x\circ y=\sigma^{p}_{x}(y)\circ\tau^{p}_{y}(x)\) we also show condition (2.4), i.e. we conclude that \(\check{r}\), as defined in Proposition 2.27, is a solution of the braid equation.
**Lemma 2.28**.: _Let \(B\) be a near brace, and consider the map \(\check{r}:B\times B\to B\times B\), \(\check{r}(x,y)=(\sigma^{p}_{x}(y),\ \tau^{p}_{y}(x))\) of Proposition 2.22. Then \(\check{r}\) is a \(p\)-braiding._
Proof.: The proof is straightfroward via Proposition 2.20. Indeed, all the conditions of the \(p\)-braiding Definition 2.26 are satisfied and:
\[f^{p}_{a}(b)=a\circ b\circ z_{1}\circ z_{1}-a\circ\xi\circ z_{1}+c_{1}+z_{2}, \ g^{p}_{a}(b)=a\circ b\circ z_{1}+c_{2}-a\circ\xi\circ z_{2}+z_{2}\circ z_{ 2}.\qed\]
With this we conclude our analysis on \(p\)-braidings and their connection to the YBE and the notion of the near braces. One of the fundamental open problems in this frame and a natural next step is the solution of the set-theoretic reflection equation for this new class of solutions of the set-theoretic YBE. We hope to address this problem and generalize the notion of the \(p\)-braiding to include the reflection equation, in the near future. Another key question, which we hope to tackle soon, is what the effect of non-associativity in \((X,+)\) on the construction of the algebraic structures emerging from solutions of the set-theoretic YBE would be. This is quite a challenging problem, the analysis of which will yield yet more generalized classes of solutions of the YBE.
### Acknowledgments
Support from the EPSRC research grant EP/V008129/1 is acknowledged.
|
2308.00010 | Monaural Multi-Speaker Speech Separation Using Efficient Transformer
Model | Cocktail party problem is the scenario where it is difficult to separate or
distinguish individual speaker from a mixed speech from several speakers. There
have been several researches going on in this field but the size and complexity
of the model is being traded off with the accuracy and robustness of speech
separation. "Monaural multi-speaker speech separation" presents a
speech-separation model based on the Transformer architecture and its efficient
forms. The model has been trained with the LibriMix dataset containing diverse
speakers' utterances. The model separates 2 distinct speaker sources from a
mixed audio input. The developed model approaches the reduction in
computational complexity of the speech separation model, with minimum tradeoff
with the performance of prevalent speech separation model and it has shown
significant movement towards that goal. This project foresees, a rise in
contribution towards the ongoing research in the field of speech separation
with computational efficiency at its core. | S. Rijal, R. Neupane, S. P. Mainali, S. K. Regmi, S. Maharjan | 2023-07-29T15:10:46Z | http://arxiv.org/abs/2308.00010v1 | # Monaural Multi-Speaker Speech Separation Using Efficient Transformer Model
###### Abstract
Cocktail party problem is the scenario where it is difficult to separate or distinguish individual speaker form a mixed speech from several speakers. There have been several researches going on in this field but the size and complexity of model is being traded off with the accuracy and robustness of speech separation. "Monaural multi-speaker speech separation" presents a speech-separation model based on the Transformer architecture and its efficient forms. The model has been trained with the LibriMix dataset containing diverse speakers' utterances. The model separates 2 distinct speaker sources from a mixed audio input. The developed model approaches the reduction in computational complexity of the speech separation model, with minimum tradeoff with the performance of prevalent speech separation model and it has shown significant movement towards that goal. This project foresees, a rise in contribution towards the ongoing research in the field of speech separation with computational efficiency at its core.
cocktail party problem, complexity, efficiency, monaural, speech-separation, transformer
## I Introduction
Monaural multi-speaker speech separation is the task of separating individual speakers from a single audio recording, which is also known as the cocktail party problem [1]. This problem is challenging because of the overlapping speech signals, the variability of speakers and environments, and the lack of spatial cues in the monaural case. However, solving this problem has many applications in various fields such as speech processing, telecommunications, entertainment, surveillance, and human-computer interaction.
In recent years, deep learning methods have achieved remarkable results in monaural speech separation, especially with the development of end-to-end models that directly estimate the source signals from the mixture signal. However, most of these models rely on large-scale and high-quality datasets, which are not always available or easy to obtain. Moreover, these models often have high computational complexity and memory requirements, which limit their practical deployment and scalability.
In this paper, we propose a novel perceiver-based architecture for monaural speech separation that aims to reduce the computational complexity and improve the performance of existing models. The perceiver is a recently proposed model that combines self-attention and convolutional neural networks to process various types of inputs with a fixed number of parameters. We adapt the perceiver to speech separation by using a recurrent output layer and a masking-based objective function. We evaluate our model on the LibriMix dataset, which is a publicly available dataset for speech separation. We compare our model with several state-of-the-art models and show that our model achieves competitive results with much fewer parameters and faster inference time. We also analyse the effect of different hyperparameters and components on the performance of our model. We hope that our work can inspire further research on transformer-based models for speech separation and other speech processing tasks.
## II Literature Review
Early study of the speech source separation began with beamforming, a spatial filtering technique that leverages sound wave direction and phase difference to separate sources [2]. In the late 90s and early 2000s, single channel speech source separation was approached using statistical methods such as using eigen decomposition [3], ICA [4], including maximum likelihood approach based on EM algorithm [5].
The rise of machine learning has changed the way speech separation task is perceived, Hu, K., et al. [6] proposed iterative model based on GMM and DNN that could model the sources of signals and estimate the separation matrices respectively, thus intaking source's characteristics into account for the separation task. DNN based model to estimate the complex non-linear relationship between the mixed signal and targets were being proven to be exceptional in the field one after another [7][8]. However, the need of better models demanded more complex and deep architecture as well.
Kolback, M. et al. [9] introduced utterance-level Permutation Invariant Training as a new norm for training speech separation model, the study used deep LSTM RNNs and bi-directional LSTM RNNs together with uPIT, and the output outperformed previous approaches and models.
While the previous models were designed for speech separation through mask estimation for each source in time-frequency representation, Luo, Y. et al., [10] directly modeled the signal in time-domain using encode-decoder
framework and significantly outperformed the state-of-the art causal systems which utilized time-frequency representation as input. The extension of the dual-path BiLSTM, TasTas architecture obtained state-of-the-art performance when applied with iterative multi-stage refinement scheme [11].
Introduction of the transformer model [12], revolutionized how data with long and short-term dependencies are approached, [13] proposed a DPTNet for end-to-end monaural speech separation that employed improved transformer that enables direct context-aware modeling on speech sequences.
Subakan, C. et al. [14], introduced the model with state-of-the art performance, based on the transformer-model's encoder part and unlike DPRNN's RNN approach of the dual-path framework, this approach involved the use of a multi-scale pipeline that employs transformer to capture long and short-term dependencies. The same team further studied the application of other transformer architectures like Longformer, Linformer and Reformer, however, the result they obtained were nowhere near to their first approach [15]. With functionalities like speech recognition, synthesis or separation being more of commodities in daily lives than some far distant technologies, the advancement in the transformer-based speech separation models has promised a more economic path.
## III Perceiver
Perceiver [16] is an efficient form of transformer architecture that has more enhanced performance than the original transformer.
This model (Fig. 1) introduces the small set of latent units that form an attention bottleneck through which the inputs are required pass and the reduction is size of the Query(Q) hence eliminates the quadratic scaling problem associated with the original transformer. The latent transformer presented in the paper resembles the decoder part of the original transformer. A Transformer built on bytes has complexity \(O(LM^{2})\)while a latent Transformer has complexity \(O(LN^{2})\) (where \(N\ll M\)), when considered as a function of the number of layers L in addition to index dimensionality. This results in an architecture with complexity \(O(MN+LN^{2})\).
## IV Perceiver-based Speech Separation Transformer (Perceparator)
This model closely resembles and relies on the masking-based source separation framework presented by [14] (Fig. 2).
### _Encoder_
The time-time domain mixture-signal \(x(t)\in R^{T}\), is fed into the encoder, it learns the time-frequency latent representation, \(e=ReLU(conv1d(x))\).
### _Masking Network_
The masking network (Fig. 3) is fed by the encoded representations \(e\) and estimates a mask \(m_{i}\) for each of \(N_{s}\) speakers.
Fig. 1: Perceiver Architecture [16]
Fig. 3: Overall Architecture of the masking network
Fig. 2: The masking-based source separation pipeline. Latent representation of the mixed-signal is learned by the encode, the masking block estimates the masks for the source sound and decoder reconstructs the estimated sources
The input \(e\) undergoes layer normalization [17], after which it is processed by a linear layer with \(F\) dimensions.
\(e=\mathit{linearLayer}\left(\mathit{LayerNorm}(e)\right)\)
After normalization, the input is chunked into \(N_{C}\) number of chunks of size \(C\) with optional overlapping factor of 50%. The output of the chunk can be represented as \(h^{\prime}\in R^{F\times C\times N_{C}}\).
The chunked data is encoded with the positional embeddings, the sequential nature of the input data is evident of the necessity of the positional information for better performance. Moreover, the SOTA performance in the transformer-based speech separation model [15] has proven its significance in their study.
The chunked data and a randomly initialized latent array of form, \(l_{a}\in R^{F\times L\times N_{C}}\), are fed into the transformer-based-model block (i.e., Perceparator block) (Fig. 4), which in fact employs the perceiver-like transformer model to deduce the inter-dependencies within the input data and learn to separate the sources. Its output can be represented as \(h^{\prime\prime}=\mathit{Perceparator}(h^{\prime},l_{a})\).
The output of the block has fewer dimensions than the input, so two linear layers are employed to get the original size of the data. The output can be represented as \(h^{\prime\prime\prime}\epsilon R^{F\times L\times N_{C}}\).Then a series of PReLU and a linear layer are employed to output according to the number of speakers, and output obtained has form \(h^{4}\in R^{(F\times N_{S})\times C\times N_{C}}\).
The chunks are then combined, with overlapping to get the original length, \(h^{5}\epsilon R^{(F\times N_{S})\times T^{\prime}}\) and finally passed through a feed forward network followed by ReLU layer to get the estimated masks.
\(m_{k}=ReLU\left(FFW(h^{5})\right)\), \(k=1,...,N_{S}\).
### _Decoder_
Decoder of the model used the transposed convolutional layer with same stride and kernel size as the encoder. It receives the input from the element wise multiplication between the encoded input data and each mask generated for \(N_{s}\) sources.
\(\mathcal{S}_{k}=\mathit{conv1d}-transpose(m_{k}\bigotimes e)\)
## V Experiments
### _Data_
The experiment with the models has been done with the Libri2Mix dataset prepared form the LibriMix data available [18]. The mixture of two speakers is created by randomly mixing utterances in the LibriMix corpus. The training, cross-validation and testing data are prepared in the ratio 69:21:10.
### _Architecture Setup_
The encoder has 256 convolutional filters with a kernel size of 3, stride factor of 1 and padding 0. The decoder follows the same data.
In our best performing model, the chunk size is set to \(\mathcal{C}=250\), and overlapping is set to none. The latent array generation is done with random value initialization in the range [\(-2\),\(2\)], with mean 0, and standard deviation 0.02. The Perceparator model block is repeated \(N=15\) times, where within the block both Perceiving and Latent transformer has 16 parallel attention heads. The model has total 9.465 million parameters.
### _Training_
The speech separation model uses AdamP [19] as the optimization algorithm. The learning rate is initialized to \(\alpha=1e^{-4}\), forgetting factor for gradients \(\beta_{1}=0.9\), second moment of gradient \(\beta_{2}=0.999\) and weight decay \(wd=1e^{-2}\). The learning rate is halved at every \(x\) epoch where \(\{(x,y)\}:x\in 2^{5+n},y\in x,n\in\{1,2,2,3,3,3,...\}\).
Fig. 4: The Perceparator Block (Transformer-based-model)
Fig. 5: Underlying architecture of Perceiving and Latent Transformers
The model uses SI-SNR via uPIT as an evaluation criterion. uPIT [9] is a deep learning-based approach to speaker independent multi-speaker speech separation that is practical and can be applied end-to-end. This approach enhances the frame-level PIT [20] method by integrating an utterance-level training criterion. As a result, the need for speaker tracing or excessively large input/output contexts, which are necessary in the original PIT method, is eliminated.
\[J_{\phi^{*}} =\frac{1}{B}\sum_{s-1}^{S}\left\|SI-SNR_{\phi^{*}(s)}\right\|_{F}\] \[\phi^{*} =\operatorname*{argmax}_{\phi\in\mathcal{P}}\sum_{s-1}^{S}\left\| SI-SNR_{\phi^{*}(s)}\right\|_{F}\]
Training of the model were performed with the NVIDIA GTX 3060Ti with 8 GB of memory. Each epoch taking approximately 1.5 hours on the GPU, the model was trained for 450 epochs.
## VI Result And Analysis
Libri2Mix dataset is the taken as the benchmark in this study. The model achieves the best result of SI-SNR improvement of 12.85 dB on the training course (Fig. 6) and with the test-data, it achieved the performance of 10.5 dB SI-SNR on the test dataset. In Table 1, we study the effect of various hyperparameters augmentation strategies. It is observed that the performance of the model saturates with the number of parameters being trained, the tweaking in the learning rate during the progression of training really affects the way model optimizes.
Table II, shows the comparison between the different state-of-the art models and the model developed during this study. The result of the Perceparator lags behind the performance of the previous studies. The result suggests, the problem of speech-separation in the Perceparator model lies within the core, i.e., use of the latent-space for mask estimation.
## VII Conclusion
This study focuses on utilizing deep learning techniques, particularly using efficient form of the transformer architecture for speech separation. Specifically, we built a speech separation model with the perceiver architecture, a transformer-based model eliminating quadratic complexity of the prior, and trained it to study the feasibility of efficient-transformers in speech separation and compared the it with other state-of-the-art models.
Our result suggests that, efficient transformer models hold promising future for the speech separation tasks, enhancing performance with lower computational requirements, however, more study and researches in the field experimenting with other form of transformers lies ahead of us.
## Acknowledgment
We convey our earnest thanks to the Department of Electronics and Computer Engineering, Thapathali Campus for their invaluable support and opportunity to pursue study in this domain. The motivation, guidance, and resources we were provided there have been significant in the completion of this journey, we are grateful for their support throughout this study.
Fig. 6: Training of the Perceparator model (in epochs) |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.